ADO.NET :: Reading A Segment Of A Huge Varbinary(max) Field By DataReader
Oct 19, 2010
I have a table with a single column and a single row, and that row holds one file of about 1.5 GB! The C# application and SQL Server run on different machines. I want to read that file with a DataReader in 100 MB pieces, write each 100 MB piece to disk through a FileStream opened with FileMode.Append, and so collect the pieces back into one file.
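A minimal sketch of that chunked read, assuming a table dbo.BigFiles with one varbinary(max) column FileData (names are placeholders, not from the post): CommandBehavior.SequentialAccess makes the reader stream the column instead of buffering the whole 1.5 GB value, GetBytes pulls it 100 MB at a time, and each piece is appended to the output file.

    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    class ChunkedBlobDownload
    {
        // 100 MB per read/append cycle, as described in the post.
        const int ChunkSize = 100 * 1024 * 1024;

        static void Download(string connectionString, string targetPath)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("SELECT FileData FROM dbo.BigFiles", conn))
            {
                conn.Open();
                // SequentialAccess streams the column rather than materializing the whole row.
                using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    if (!reader.Read()) return;

                    byte[] buffer = new byte[ChunkSize];
                    long offset = 0;
                    long bytesRead;
                    while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                    {
                        // Each piece is appended so the pieces rebuild the original file.
                        using (FileStream fs = new FileStream(targetPath, FileMode.Append, FileAccess.Write))
                        {
                            fs.Write(buffer, 0, (int)bytesRead);
                        }
                        offset += bytesRead;
                    }
                }
            }
        }
    }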
I'm trying to read data from a text file into a DataReader object using the following code but I get an exception:
OleDBException was unhandled.
Cannot update. Database or object is read-only.
Also, I would like to know whether it is more efficient to read the text file's data into a DataTable instead of using a StreamReader and traversing the recordset.
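For the DataTable route, a hedged sketch of loading a delimited text file through the Jet text driver; the folder, file name and the HDR/FMT settings are assumptions, and the command below only reads, it never writes back to the file. Whether this beats a plain StreamReader depends on what is done with the rows afterwards; for simple line-by-line parsing the StreamReader is usually lighter.

    using System.Data;
    using System.Data.OleDb;

    class TextFileToDataTable
    {
        static DataTable Load()
        {
            // The folder acts as the "database" and the file name as the "table".
            string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\;" +
                             "Extended Properties=\"text;HDR=Yes;FMT=Delimited\"";
            DataTable table = new DataTable();
            using (OleDbConnection conn = new OleDbConnection(connStr))
            using (OleDbCommand cmd = new OleDbCommand("SELECT * FROM [input.txt]", conn))
            {
                conn.Open();
                using (OleDbDataReader reader = cmd.ExecuteReader())
                {
                    table.Load(reader);   // SELECT only; nothing is written back to the file
                }
            }
            return table;
        }
    }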
I'm using a DataReader to read about 100,000 records; my code looks like:
[Code]....
I do a per-row computation while reading the rows in the GetMyType2Info() method, and that method in turn calls another method from which I return a single record via ExecuteReader(). The problem is that once MyTypeList (a List&lt;MyType&gt;) holds about 21,000 items, a System.OutOfMemoryException is thrown. I'd like to know how to resolve this, and also more about the maximum capacity of List&lt;T&gt;. For what it's worth, executing the main query with 100,000 records takes about 4 seconds in SQL Server, and a record contains only nvarchar, float, datetime and int columns.
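On the capacity question: a List&lt;T&gt; can in principle hold up to int.MaxValue items, but its backing array is still subject to the CLR's 2 GB per-object limit, so the exception at 21,000 items more likely reflects the total size of the items (and the extra ExecuteReader issued per row) than any List&lt;T&gt; limit. A hedged sketch of one way around it, with MyType standing in for the poster's type and the query passed in as a placeholder: an iterator that yields each mapped row lets the caller process records one at a time instead of holding all 100,000 in memory; folding the per-row lookup into the main query with a JOIN would also remove 100,000 round trips.

    using System.Collections.Generic;
    using System.Data.SqlClient;

    class MyType
    {
        // Placeholder for the nvarchar/float/datetime/int columns mentioned in the post.
    }

    class StreamingReader
    {
        // Yields rows one at a time so the full result set never sits in a List<MyType> at once.
        static IEnumerable<MyType> ReadMyTypes(string connectionString, string query)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(query, conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        MyType item = new MyType();
                        // map the current row's columns onto item here
                        yield return item;
                    }
                }
            }
        }
    }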
I have a project which generates snapshots of a database, converts them to XML and then stores the XML in a separate database. Unfortunately these snapshots are becoming huge files, now about 10 megabytes each. Fortunately I only have to store them for about a month before they can be discarded, but even so, a month of snapshots turns out to be really bad for performance... I think there must be a way to improve this a lot. Not by storing the XML in a separate folder somewhere, because I don't have write access to any location on that server; the XML must stay within the database. But perhaps the [Content] field can be optimized so things speed up... I won't need any full-text search on this field and will never search on it, so maybe it can be excluded from search indexing or whatever? The table has no references to other tables, but the structure is fixed: I cannot rename things or change the field types. So I wonder if optimization is still possible. Well, is it? The structure, as generated by SQL Server:
CREATE TABLE [dbo].[Snapshots](
    [Identity] [int] IDENTITY(1,1) NOT NULL,
    [Header] [varchar](64) NOT NULL,
    [code]....
Performance isn't just slow when selecting data from this table; it's also slow when selecting or inserting data in the other tables of this database! When I delete all records from this table, the whole system is fast. As I start adding snapshots, performance decreases; after about 30 snapshots it becomes bad and the risk of connection timeouts increases. Maybe the problem isn't in the database itself, although it's still slow when used through the management tool (and fast when Snapshots is empty). I mainly use ASP.NET 3.5 and the Entity Framework to connect to this database and read the multiple tables. Maybe some performance can be gained there, although that wouldn't explain why the database is also slow from the management tools and from other applications with a direct connection...
I have a text field in a SQL Server table and I retrieve it into a string variable using a DataReader: string result = reader["MyTextfied"], but I get this error ("text or binary field cannot be truncated"). My text field contains a large amount of text.
I have a DataReader that reads a datetime field from SQL Server. When this field is null it returns 01/01/1900! How can I handle this without an if or some other explicit check? Is there a property on the DataReader, or a SQL Server setting, that lets a null datetime field come back as null?
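There is no reader property that substitutes null automatically; the 01/01/1900 value typically means either that the row really stores that date (SQL Server's datetime "zero") or that the mapping code substitutes it for DBNull. A hedged sketch of a small extension method ("CreatedOn" in the usage line is a placeholder column name) that surfaces DBNull as a nullable DateTime without an if at every call site:

    using System;
    using System.Data.SqlClient;

    static class DataReaderExtensions
    {
        // Returns null when the column is DBNull instead of a sentinel date.
        public static DateTime? GetNullableDateTime(this SqlDataReader reader, string column)
        {
            int ordinal = reader.GetOrdinal(column);
            return reader.IsDBNull(ordinal) ? (DateTime?)null : reader.GetDateTime(ordinal);
        }
    }

    // Usage: DateTime? created = reader.GetNullableDateTime("CreatedOn");
    // An equivalent one-liner: reader["CreatedOn"] as DateTime? also yields null for DBNull.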
I use a SqlDataReader to get a row of a table from SQL Server. This table has a text field where I store an XML configuration. In one case this XML has grown to about 650 KB. Getting the field from the DataReader takes almost 2 seconds:
INFO 2010-04-16 09:46:40,559 [12] Cms.dataContenido - readed INFO 2010-04-16 09:46:42,356 [12] Cms.dataContenido - XMLContent
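A hedged sketch for that case (table, column and parameter names are placeholders): reading the 650 KB text column with CommandBehavior.SequentialAccess and GetChars streams it in small buffers instead of allocating the whole value at once, which is usually gentler than pulling it through reader["field"]; whether it removes the 2 seconds depends on where the time actually goes (network transfer vs. allocation).

    using System.Data;
    using System.Data.SqlClient;
    using System.Text;

    class LargeTextRead
    {
        static string ReadXmlConfig(string connectionString, int id)
        {
            StringBuilder sb = new StringBuilder();
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT XmlConfig FROM dbo.Contenido WHERE Id = @Id", conn))
            {
                cmd.Parameters.AddWithValue("@Id", id);
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    if (reader.Read())
                    {
                        char[] buffer = new char[8192];
                        long offset = 0;
                        long read;
                        while ((read = reader.GetChars(0, offset, buffer, 0, buffer.Length)) > 0)
                        {
                            sb.Append(buffer, 0, (int)read);
                            offset += read;
                        }
                    }
                }
            }
            return sb.ToString();
        }
    }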
First of all, sorry if my title is confusing, as I'm not really sure how to describe what I want to do. What I'm trying to do is get 5 random images from the database. This is the code I use to get 5 random records from the database:
[Code]....
I don't really know how to use an array, but I'm willing to try if it is needed.
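In case it helps, a hedged sketch of the reading side (table and column names are placeholders): a TOP 5 ... ORDER BY NEWID() query already returns the five random rows server-side, and a List&lt;string&gt; can collect them without needing a fixed-size array.

    using System.Collections.Generic;
    using System.Data.SqlClient;

    class RandomImages
    {
        static List<string> GetFiveRandomImagePaths(string connectionString)
        {
            List<string> paths = new List<string>();
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT TOP 5 ImagePath FROM dbo.Images ORDER BY NEWID()", conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        paths.Add(reader.GetString(0));   // one random image path per row
                    }
                }
            }
            return paths;
        }
    }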
I have a method that exports content from the database to Excel files. The method takes as parameters a DataReader and an int, the number of rows. To get the number of rows I'm using a DataSet, which I fill using the same query as for the DataReader. So I'm executing the query twice... Is there a way I can avoid that and get the number of rows from the DataReader?
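A DataReader is forward-only and does not know its row count until it has been read to the end, so there are really two hedged options: buffer the rows first (for example DataTable.Load) and count them, or run a COUNT(*) with the same criteria before opening the main reader. A sketch of the second option, with the query text as a placeholder:

    using System.Data.SqlClient;

    class ExcelExport
    {
        static void Run(string connectionString)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();

                int rowCount;
                using (SqlCommand countCmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Report", conn))
                {
                    rowCount = (int)countCmd.ExecuteScalar();
                }

                using (SqlCommand dataCmd = new SqlCommand("SELECT * FROM dbo.Report", conn))
                using (SqlDataReader reader = dataCmd.ExecuteReader())
                {
                    // ExportToExcel(reader, rowCount);   // the existing export method from the post
                }
            }
        }
    }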
I am getting an error that there is already an open DataReader associated with this Command, even though I'm not using a DataReader directly (though ExecuteReader() probably amounts to the same thing). How would I close it if I don't have a DataReader variable present?
using (SqlConnection conn = new SqlConnection(ConnectionString))
{
    SqlCommand cmd = new SqlCommand("spSelectAllTypes", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    [code]...
I just want to be able to databind a bunch of DropDownLists over one open connection (before, I had separate open and close calls for each control).
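That error usually means a previous ExecuteReader (including one consumed by databinding) was still open when the next command ran on the same connection. A hedged sketch of binding several DropDownLists over one open connection: the first stored procedure name is from the post, but the second procedure, the control names and the column names are placeholders, and the usual System.Data / System.Data.SqlClient usings in the code-behind are assumed.

    using (SqlConnection conn = new SqlConnection(ConnectionString))
    {
        conn.Open();

        using (SqlCommand cmd = new SqlCommand("spSelectAllTypes", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                ddlTypes.DataSource = reader;
                ddlTypes.DataTextField = "Name";
                ddlTypes.DataValueField = "Id";
                ddlTypes.DataBind();
            } // reader disposed here, so the connection is free for the next command
        }

        using (SqlCommand cmd = new SqlCommand("spSelectAllCategories", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                ddlCategories.DataSource = reader;
                ddlCategories.DataBind();
            }
        }
    }

Alternatively, adding MultipleActiveResultSets=True (MARS) to the connection string lets more than one reader stay open on the same connection at the same time.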
I'm trying to develop a local website that records and saves to SQL Server the private IP of each person who logs in to my website. I'm using Windows Server R2 as my server. My problem is that it only records the IP when the client is on the same segment as my server, segment zero, e.g. 192.168.0.83; if the user's IP is on segment 1 or 2, only the default gateway of their network is saved. Correct me if I'm wrong about whether this depends on the segment or the gateway.
I have a base project (with all its glory: DLLs, resources, etc.) which is a CMS. I need to use this project as a base for other custom-built projects. This base project is to be maintained and updated across all custom projects. I use Subversion (CollabNet and TortoiseSVN). I have two questions:
1 - Can I use Subversion to share the base project among other projects? What I mean is: can I "check out" the base project into another checked-out project and have both update and commit separately? To paint a picture, let's say I am working on a custom project and I modify the core/base project in some way (which I know will suit the others); can I then commit those changes and, upon updating the base project in the other checked-out working copies, have them pull the changes? In short, I would like not to have to manually deploy updated core files to each separate project whenever I make changes.
2 - If I create a custom file (say a web control or an aspx page), can I have it compile separately from the base project? Another tricky one to explain. When I publish my web application it creates DLLs based on the namespaces of the projects attached to it, so I may have a number of DLLs including the "website's" namespace DLL, which could simply be Website. I want to be able to make a separate, custom control which does not compile into those DLLs, as the custom files should not rely on those DLLs to run. Is it as simple as setting a separate namespace for those files, like CustomFiles.ProjectName for example? Think of the whole idea as adding modules to the .NET project: I don't want the module's code in any of the core DLLs, but I do need the module to be able to access the core DLLs.
(There is no need for the core project to access the module code, as it should be one way only in theory, though I reckon that would not be possible anyway without using JSON/SOAP or something like that; maybe I am wrong.) I want to create a pluggable environment much like that of Joomla/WordPress; since PHP generally doesn't have to be compiled first, I see that as the reason all of this is possible and easy there. The idea is to allow pluggable themes, modules, etc. (I haven't tried simply adding .NET themes after compile/publish, but I am assuming this is possible anyway? Or does the compiler need to reference items in the theme files?) I posted a similar question with a little more detail for question 2 on Experts Exchange. I don't want to post all that info here as it would just be too messy, but it explains question 2 in greater detail.
It should store plain HTML fragments of a page, like the standard output cache in ASP.NET. The HTML may contain dynamic content from a database. When an object is updated in the database, every cached HTML fragment containing that particular object should be destroyed and re-cached the next time it is requested.
There is a separate admin tool to handle all data in the database, so I can easily store the IDs in a cache table when an object becomes invalid. I can also make a request to a page that destroys all cached HTML fragments for that object.
But when I write the markup, how can I store and retrieve a particular segment from the cache? Of course I could do this in code-behind and keep the markup in a string, but I don't want that; I want to keep the markup as intact as possible.
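One hedged way to do it (all class, method and key names below are placeholders, not part of any described design): keep rendered fragments in HttpRuntime.Cache under a key that contains the object id, and have the invalidation page call Remove for that key whenever the admin tool changes the object. In the markup, a Literal control or an inline &lt;%= %&gt; expression can then emit the cached string, so the surrounding markup stays intact.

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class FragmentCache
    {
        static string Key(int objectId) { return "html-fragment-" + objectId; }

        public static string GetOrRender(int objectId, Func<string> render)
        {
            string html = HttpRuntime.Cache[Key(objectId)] as string;
            if (html == null)
            {
                html = render();   // build the markup from the database only on a cache miss
                HttpRuntime.Cache.Insert(Key(objectId), html, null,
                    DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);
            }
            return html;
        }

        // Called by the invalidation page / admin tool when an object is updated.
        public static void Invalidate(int objectId)
        {
            HttpRuntime.Cache.Remove(Key(objectId));
        }
    }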
I have this dynamic URL, part of which will have paging enabled, like "http://localhost:96556/MVC_Application/proceedings/url_link/url_section/url_item/url_position/page/2". But I'm getting this error from my route below: "A catch-all parameter can only appear as the last segment of the route URL. Parameter name: routeUrl".
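For what it's worth, a hedged sketch of the constraint the error describes, as it would sit inside RegisterRoutes; the route names, defaults and the {*pathInfo} token are placeholders rather than the poster's actual route table. A catch-all segment is only legal at the very end of the URL pattern, so the paged URL needs either explicit segments or the catch-all moved to the last position.

    routes.MapRoute(
        "ProceedingsPaged",
        "proceedings/{url_link}/{url_section}/{url_item}/{url_position}/page/{page}",
        new { controller = "Proceedings", action = "Index" });

    routes.MapRoute(
        "ProceedingsCatchAll",
        "proceedings/{*pathInfo}",              // a catch-all must be the last segment
        new { controller = "Proceedings", action = "Index" });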
I know this is probably a pretty easy thing to do, and it is if I upload the file and store it on the server's hard drive. What I need to do instead is read the text file into memory and then parse through it one line at a time. Does anyone have any code that demonstrates that?
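A hedged sketch, assuming an ASP.NET FileUpload control named fileUpload and a button handler in the page's code-behind (both names are placeholders, and using System.IO is required): the posted file is read straight from its InputStream, so nothing has to be saved to the server's disk first.

    // In the page's code-behind:
    protected void btnUpload_Click(object sender, EventArgs e)
    {
        if (!fileUpload.HasFile) return;

        using (StreamReader reader = new StreamReader(fileUpload.PostedFile.InputStream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // parse one line at a time here, e.g. string[] parts = line.Split(',');
            }
        }
    }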
Does it have any major effect on performance or memory if my web.config is really huge (say, 1000+ entries in &lt;appSettings&gt;)? Is it a good idea to maintain a separate custom XML config file for all business-specific settings for my app?
I am working on a web project which has around 25,000 XML files. Somewhere in the application we need to compare some part of an XML file with all 25,000 XML files. It works, but it takes around 25 minutes to run. How can I reduce this time and compare the files more quickly?
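A hedged sketch, assuming .NET 4 is available and that the part being compared is a &lt;Section&gt; element (the element name, folder and file pattern are placeholders): normalizing the reference fragment once and fanning the 25,000 comparisons out with Parallel.ForEach usually cuts the wall-clock time considerably; caching the normalized fragments between runs would help further.

    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading.Tasks;
    using System.Xml.Linq;

    class XmlBatchCompare
    {
        // Returns the paths of the files whose <Section> element matches the source file's.
        static ConcurrentBag<string> FindMatches(string sourceFile, string folder)
        {
            // Normalize the reference fragment once instead of re-parsing it per file.
            string reference = XDocument.Load(sourceFile).Root
                                        .Element("Section")
                                        .ToString(SaveOptions.DisableFormatting);

            ConcurrentBag<string> matches = new ConcurrentBag<string>();
            Parallel.ForEach(Directory.EnumerateFiles(folder, "*.xml"), path =>
            {
                XElement section = XDocument.Load(path).Root.Element("Section");
                if (section != null &&
                    section.ToString(SaveOptions.DisableFormatting) == reference)
                {
                    matches.Add(path);
                }
            });
            return matches;
        }
    }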
We have an ASP.NET application with a report that currently shows about 1,000 rows and can potentially grow to 20,000. Don't ask me why, but our client does not like paging and does not like filtering; they like to see everything on a single page. Our obvious problem is the load this puts on the server in terms of memory (not to mention that the client's browser may crash as well).
My question is: if I provide a custom desktop application just for this report, one that can display thousands and thousands of rows (through web services or remoting), would it still clog up the server? For the ASP.NET application it is the IIS worker process that eats up the server's memory, but what happens if this desktop app runs separately and calls the same database on the application server?
I have tried to retrieve up to 200,000 records, after which IIS exits. Does anybody have a solution for retrieving huge amounts of data? I also need to get the data quickly; what tuning do I have to do in my WCF application?
When exporting a lot of data to a string (CSV format), I get an OutOfMemoryException. What's the best way to tackle this? The string is returned to a Flex application. What I'd do is export the CSV to the server's disk and give back a URL to Flex; that way I can flush the stream while writing to disk. Update: the string is built with a StringBuilder.
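That is essentially the fix. A hedged sketch (the folder, the URL mapping and the row source are placeholders): a StreamWriter flushes each line to disk as it is written, so memory stays flat no matter how many rows are exported, and only the resulting URL goes back to Flex.

    using System;
    using System.Collections.Generic;
    using System.IO;

    class CsvExport
    {
        // Writes rows straight to disk instead of concatenating them into one giant string.
        static string ExportToFile(IEnumerable<string[]> rows)
        {
            string fileName = Guid.NewGuid().ToString("N") + ".csv";
            string path = Path.Combine(@"C:\inetpub\exports", fileName);   // placeholder folder

            using (StreamWriter writer = new StreamWriter(path))
            {
                foreach (string[] row in rows)
                {
                    writer.WriteLine(string.Join(",", row));
                }
            }
            return "/exports/" + fileName;   // URL handed back to the Flex application
        }
    }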
I want to upload video files, possibly larger than 500 MB, simultaneously; if a 500 MB upload is stopped in the middle, I want to be able to continue the transfer from where it stopped.
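Resumable uploads generally mean the client sends the file in chunks and the server appends each chunk as it arrives. A hedged sketch of the server side only; the handler name, the "fileName"/"action"/"chunk" request fields and the target folder are all assumptions, and a client that slices the file and first asks for the current offset is still needed.

    using System.IO;
    using System.Web;

    public class ChunkUploadHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string fileName = Path.GetFileName(context.Request["fileName"]);
            string target = Path.Combine(@"C:\uploads", fileName);

            // The client asks how many bytes have already arrived so it can resume from that offset.
            if (context.Request["action"] == "status")
            {
                long received = File.Exists(target) ? new FileInfo(target).Length : 0;
                context.Response.Write(received.ToString());
                return;
            }

            // Append the current chunk to whatever was written before the interruption.
            using (FileStream fs = new FileStream(target, FileMode.Append, FileAccess.Write))
            {
                Stream input = context.Request.Files["chunk"].InputStream;
                byte[] buffer = new byte[64 * 1024];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    fs.Write(buffer, 0, read);
                }
            }
        }

        public bool IsReusable { get { return false; } }
    }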