The above throws this error: CS1502: The best overloaded method match for 'System.Web.WebPages.WebPageExecutingBase.Write(System.Web.WebPages.HelperResult)' has some invalid arguments
I am trying to offer a file download on my page. I have implemented the code below, but code execution stops after the download prompt pops up; even if I press Save or Cancel, nothing happens after that.
[Code]....
I have tried it a few times without Response.Clear() and Response.End(), but taking these out didn't make any difference. Is there any way I can achieve this without having to use an IFrame?
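For reference, here is a minimal sketch of the usual download pattern (the button name, path and file name are illustrative, not taken from the original code). Note that Response.End() deliberately terminates the request by throwing a ThreadAbortException, so it is expected that nothing on the page runs after the download prompt appears.

protected void btnDownload_Click(object sender, EventArgs e)
{
    // Hypothetical file; replace with the real path.
    string filePath = Server.MapPath("~/files/report.pdf");

    Response.Clear();
    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf");
    Response.WriteFile(filePath);
    Response.Flush();
    Response.End(); // ends the request; no further page code executes
}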
I have an ASP.NET (C#) page that has a long load time (like 2 minutes). The user is presented with a little animation and a "please wait" message. If the user accidentally loads this page, they need to wait for it to load.
My question is: Is there a way to stop the page load?
So imagine a piece of Javascript as the first script on a page along the lines of
var MySuperObject = new (function () { this.SuperObjectInit(); })();
Now imagine that everything that follows this script (or a large portion thereof) requires the SuperObject to have met its load conditions and loaded correctly.
Assuming that for whatever reason the loading of the object fails, I need to abort loading the rest of the page, and the scripts in particular.
I know the majority of you are going to scream "why not have your function issue a callback onSuccess and onFailed", but the problem is that this is in an ASP.NET project with master pages, nested master pages, user controls and so forth (each of which has its own dependencies and scripts), rendering such an approach problematic.
The other option (I assume) is to use window.location = "myErrorPage.html"; but I don't like the idea of having to create another page for an error message, or the fact that it causes a redirect.
What I am hoping to do is something along the lines of
StopLoadingPage(); document.write("Error has occurred");
I have noticed two time-related problems with my website:
[1] Slowness of loading the graphics for each page in the browser; [2] receiving a "Runtime Error" page in the browser when I take an action on the site (e.g. navigate to another page) within approx. 30 seconds.
Are these problems related to limitations of my web host?
A portion of my site requires data from a web service, which takes 7-8 seconds. How can I load the rest of the website, show an UpdateProgress for the data that comes from the service, then show the data once it is obtained through an UpdatePanel? A working example project would be great. I need the server to return the page before the data from the service is obtained, and to add that data later using AJAX.
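One possible shape for this, sketched below as code-behind only: render the page immediately and defer the slow call to an async postback. It assumes the markup already contains a ScriptManager, an UpdatePanel with an attached UpdateProgress, an asp:Timer named tmrLoad (with a short Interval) and a Label named lblServiceData inside the panel; those names, and the MySlowServiceClient proxy, are illustrative.

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // First request: return the page right away and let the timer
        // trigger an async postback that fetches the slow data.
        tmrLoad.Enabled = true;
    }
}

protected void tmrLoad_Tick(object sender, EventArgs e)
{
    tmrLoad.Enabled = false;                  // fetch only once
    lblServiceData.Text = CallSlowService();  // the 7-8 second call runs here,
                                              // while the UpdateProgress is displayed
}

private string CallSlowService()
{
    // Placeholder for the real web service proxy call.
    return new MySlowServiceClient().GetData();
}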
I am using ASP.NET 2.0 with C# (no AJAX) in my project. In a particular web page, when a button is clicked, some server-intensive processing occurs before the same page is displayed again with the results. While code execution happens on the server in response to the Button_Click event, a blank white page is shown to the user in the browser between postbacks.
How do I show a message in this case that the processing is still going on, and ask the user to wait?
I have used JavaScript to show a message on page unload. But this message is also erased when the page is posted back to the server, and the user sees a blank white page in the browser. How do I avoid this white page? Is there a way to show a message in the blank white page?
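A commonly used workaround, sketched below with illustrative control names: reveal the waiting message from the client-side click handler just before the synchronous postback begins, so it stays visible while the server is still processing.

protected void Page_Load(object sender, EventArgs e)
{
    // "btnProcess" and "waitMessage" are hypothetical names; the div with
    // id "waitMessage" would start hidden and contain the waiting text.
    btnProcess.OnClientClick =
        "document.getElementById('waitMessage').style.display = 'block';";
}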
I have a website where users can login and they have different roles/privileges. I want to have it where the user's view of the webpage is determined by their role. Right now I am storing the role in the UserData property of the FormsAuthenticationTicket class (which is retrieved from a database during login). When the main page is loading, I want it to check the user's role and then only show the controls/portions of the page that are for that role. For example, if the user is not an administrator, they shouldn't be able to click on a button to delete a record. At the moment I am using labels to hide or show areas depending on the user's role. Something like this:
[Code]....
[Code]....
And here is a portion of the markup code with the Labels:
[Code]....
This works, but doesn't seem to be a very good way to handle this type of thing. Is there a cleaner, more elegant way of doing this? Something similar to the LoginView control, but which I can use for roles?
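For comparison, a minimal code-behind sketch that reads the role back out of the ticket's UserData and toggles whole regions of the page rather than individual labels; the control and role names are illustrative.

protected void Page_Load(object sender, EventArgs e)
{
    // FormsIdentity lives in System.Web.Security.
    var identity = User.Identity as FormsIdentity;
    if (identity == null) return;

    // The role string stored in UserData when the ticket was created at login.
    string role = identity.Ticket.UserData;

    bool isAdmin = (role == "Administrator");
    adminPanel.Visible = isAdmin;       // e.g. an asp:Panel wrapping admin-only markup
    btnDeleteRecord.Visible = isAdmin;  // hide the delete button from everyone else
}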
I am using a master page with a treeview control. I have 2 other pages Summary.aspx and Home.aspx
I want the Summary page to be loaded when I run the application, but afterwards I want to load the Home.aspx page based on the SelectedNodeChanged event of the treeview control.
I have placed the treeview control in the master page because it has to remain the same for all my pages.
Did I do something wrong? The problem is I am not able to load the pages based on events occurring for the TreeView control.
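For what it's worth, a minimal master-page sketch of the navigation handler; it assumes the TreeView is named tvNav and that each node's Value holds the target page (both names are illustrative).

protected void tvNav_SelectedNodeChanged(object sender, EventArgs e)
{
    string target = tvNav.SelectedNode.Value;  // e.g. "Home.aspx" or "Summary.aspx"
    if (!string.IsNullOrEmpty(target))
    {
        Response.Redirect(target);
    }
}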
We have an existing, proprietary data processing application that runs on one of our servers and we wish to expose it as a web service for our clients to submit jobs remotely. In essence, the system takes a set of configuration parameters and one or more data files (the number of files depends on the particular configuration template, but the normal config is 2 files). The application then takes the input files, processes them, and outputs a single result data file (all files are delimited text / CSV or tab).
We want to now expose this process as a service. Based on our current setup and existing platforms, we are fairly confident that we want to go with WCF 4.0 as the framework and likely REST for the service format, though a SOAP implementation may be required at some point.
Although I am doing a lot of reading on SOA, WCF and REST, I am interested in other thoughts on how to model this service. In particular, the one-to-many relationship of job to required files for input. It seems pretty trivial to model a "job" in REST with the standard CRUD commands. However, the predefined "job type" parameter defines the number of files that must be included. A job type of "A" might call for two input files, while "B" requires 3 before the job can run.
Given that, what is the best way to model the job? Do I include the multiple files in the initial creation of the job? Do I create a job and then have an "addFile" method whereby I can then upload the necessary number of files?
The jobs will then have to run asynchronously because they can take time. Once complete, is it best to then just have a status field in the job object and require the client to regularly query the system for job status, or perhaps have the client provide a URL to "ping" when the job is complete?
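To make the discussion concrete, here is a hedged sketch of one possible resource model in WCF 4.0 WebHttp: create the job first, add each required input file with a separate PUT, then poll a status resource. All type names, members and URI templates below are illustrative, not an existing API.

using System.IO;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[DataContract]
public class JobRequest
{
    [DataMember] public string JobType { get; set; }   // e.g. "A" or "B"
}

[DataContract]
public class JobDescriptor
{
    [DataMember] public string JobId { get; set; }
    [DataMember] public int RequiredFileCount { get; set; }  // derived from the job type
}

[DataContract]
public class JobStatus
{
    [DataMember] public string State { get; set; }  // e.g. Pending / Running / Complete
}

[ServiceContract]
public interface IJobService
{
    // POST /jobs : create a job from the configuration parameters.
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "jobs",
               RequestFormat = WebMessageFormat.Json,
               ResponseFormat = WebMessageFormat.Json)]
    JobDescriptor CreateJob(JobRequest request);

    // PUT /jobs/{jobId}/files/{fileName} : upload one input file (2..n per job type).
    [OperationContract]
    [WebInvoke(Method = "PUT", UriTemplate = "jobs/{jobId}/files/{fileName}")]
    void AddFile(string jobId, string fileName, Stream fileContent);

    // GET /jobs/{jobId}/status : the client polls this until the job completes,
    // or the request could instead carry a callback URL to be pinged on completion.
    [OperationContract]
    [WebGet(UriTemplate = "jobs/{jobId}/status",
            ResponseFormat = WebMessageFormat.Json)]
    JobStatus GetStatus(string jobId);
}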
I have looked at nearly every single WCF REST PUT/POST issue on SO and have still been unable to determine why I am unable to PUT or POST to my web service, while I am able to call a test GetTime method via GET. This particular service exchanges custom credential information (username, password and some other information) for a token. This token information is encrypted and added to the header of subsequent requests to other web services so that usernames/passwords don't have to be passed around everywhere. All web service calls are still treated as stateless, but they require this auth header information rather than usernames/passwords to access secured service operations.
[Code]....
I have also checked the handler mappings for *.svc in IIS 7 and verified that 'all verbs' are enabled.
I need to run a method before 90% of the views in my project run. I thought about adding the code to the _ViewStart.cshtml file, but this rule does not apply to all views. If I could subclass the view class, then I could base my views on that class.
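One option, sketched below on the assumption that the views are Razor-based: define a custom base page and point only the relevant views at it (via @inherits or a pageBaseType override in that folder's Views\web.config). The class and method names are illustrative, and whether ExecutePageHierarchy is the most convenient hook may depend on the MVC version in use.

public abstract class InstrumentedWebViewPage<TModel> : System.Web.Mvc.WebViewPage<TModel>
{
    public override void ExecutePageHierarchy()
    {
        RunBeforeView();             // the shared method that should run before the view
        base.ExecutePageHierarchy(); // then render the view as usual
    }

    protected virtual void RunBeforeView()
    {
        // Pre-render logic shared by ~90% of the views goes here.
    }
}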
Is it possible to somehow output some content based upon some conditional check in Razor? If not, I hope this possibility will be added in the future. What I want to do is the following:
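Razor does support this directly with ordinary C# control flow in the view; the snippet below is only a generic illustration (the original example was not included, and Model.IsActive is a made-up property).

@if (Model.IsActive)
{
    <span class="status">Active</span>
}
else
{
    <span class="status">Inactive</span>
}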
We have installed a web site written by others, which was compiled with Visual Studio 2008 and is hosted on Windows Server 2008 R2.
The IIS connection timeout is set to 120 seconds. But for some pages, the first page load fails with an HTTP 404 error, while a subsequent refresh brings the page up. The same problem happens for some images, which fail to load in web pages. We are not very sure whether it is a network-related issue or a hosting issue.
I am making a web application and this application will run only in a LAN environment. What I want to do is this:
There is a textbox and a Start button. In the textbox I want to enter an IP address, and on button click I want to run an exe on that particular system (the given IP); on Stop click I want to stop the exe.
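One way to sketch this (assuming the web application's identity has administrative rights on the target machine and remote WMI/DCOM is allowed on the LAN) is to drive Win32_Process over WMI with System.Management; the class name, exe path and process name below are illustrative.

using System.Management;

public static class RemoteProcess
{
    public static void Start(string ip, string exePath)
    {
        // Connect to the WMI namespace on the remote machine.
        var scope = new ManagementScope(@"\\" + ip + @"\root\cimv2");
        scope.Connect();

        using (var processClass = new ManagementClass(
                   scope, new ManagementPath("Win32_Process"), null))
        {
            // Win32_Process.Create(CommandLine, CurrentDirectory, StartupInfo, out ProcessId)
            object[] args = { exePath, null, null, 0 };
            processClass.InvokeMethod("Create", args);
        }
    }

    public static void Stop(string ip, string exeName)
    {
        var scope = new ManagementScope(@"\\" + ip + @"\root\cimv2");
        scope.Connect();

        var query = new ObjectQuery(
            "SELECT * FROM Win32_Process WHERE Name = '" + exeName + "'");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject proc in searcher.Get())
            {
                proc.InvokeMethod("Terminate", null);  // stop every matching instance
            }
        }
    }
}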
I've got a form with a button on it. You have to be logged in to see this form. I use this code to check if the user is logged in and act appropriately.
Dim isLoggedIn As Boolean = CType(Session("LoggedIn"), Boolean)
If isLoggedIn = False Then
    Response.Redirect("Login.aspx")
End If
The button runs a report that takes anywhere from 0.5 to 20 seconds. For testing it just sleeps for 10 seconds.
Threading.Thread.Sleep(10000)
The problem is this. Say User1 comes to the form, logs in and runs the report. If User2 comes to the form, the page will not load until User1's report is done.
Now, the strange part is that if I comment out the Response.Redirect line, everything works fine. But if that line is in there, I get this problem.
I have a website with MasterPages; on load, the master page makes many database round-trips and runs a lot of logic. Now, if I move from one page to another, the MasterPage gets reloaded and all the above-mentioned procedures are processed again, which takes a lot of time, and as a result my website's response is very slow. For example, once the default page is loaded, all the child pages could load within the content area (AJAX style).
Can you tell me some workaround to optimize the site so that it doesn't run these procedures repetitively?
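One workaround, sketched below, is to keep the results of the expensive lookups in the ASP.NET Cache so the master page only pays for them once per cache window; NavigationData, LoadNavigationData and BindMenu are illustrative placeholders for the real lookups and binding code.

protected void Page_Load(object sender, EventArgs e)
{
    var navData = Cache["MasterNavData"] as NavigationData;
    if (navData == null)
    {
        navData = LoadNavigationData();  // the slow database round-trips run here only
        Cache.Insert("MasterNavData", navData, null,
                     DateTime.UtcNow.AddMinutes(10),
                     System.Web.Caching.Cache.NoSlidingExpiration);
    }
    BindMenu(navData);  // every child page reuses the cached result
}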
My current question is tightly related to this one, but is far more specific. We have to plan a design strategy for the objective described in that question. We want to do this by rewriting HTML on ASP.NET Web Forms. My question is: which strategy is best according to the parameters of feasibility, performance impact and implementation effort on legacy applications?
What I have to do is basically get the HTML output of a Web Form, parse it, and replace certain URLs according to user-defined rules. In that example, I would rewrite all static content to CDN URLs, but it can easily be extended to other URL rewriting techniques. I found lots (and I really mean lots) of articles about URL rewriting from the perspective of having URLs like [URL] interpreted as [URL], but I found none showing me how to smartly reformat old-style URLs to a shorter format right from inside the HTML (so the page renders the short-form URL directly) [edit] without deep code intervention.
Strategy 1
As suggested in an answer to the above question, write an HTTP Module that intercepts the HTML and rewrites it. Actually, I looked around and saw that I can set a Response.Filter stream object that performs the HTML filtering.
Pros: I can inject the HTTP Module into a legacy application, configure rewriting rules via XML, and have the oldest CRM/ecommerce application load static content from a CDN without touching its code at all.
Cons: I suspected (and a comment here confirms my suspicion) that having to reimplement a Stream's Write method, which in the general case operates on a partial buffer, can result in bad replacements. Suppose the Write method is first called with a chunk like [URL] (where I assume <img src="h was already written before) and later with ge.png" /> (so guess the final URL :-P); with a rewrite rule that regexes [URL][^"]* into [URL], the substitution is not done. To work around that, I could use a MemoryStream or something like that to buffer the complete set of data and then perform the substitutions, but that could cause trouble on highly loaded servers.
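For illustration, here is roughly what that buffering variant looks like: the filter only accumulates bytes in Write and applies the rules once over the whole document when the response is flushed, so a URL split across two chunks can no longer escape the regex. The rewrite rule shown is a made-up example, and a production filter would also need to handle intermediate flushes and non-UTF-8 encodings.

using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

public class UrlRewriteFilter : Stream
{
    private readonly Stream _inner;
    private readonly MemoryStream _buffer = new MemoryStream();

    public UrlRewriteFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        _buffer.Write(buffer, offset, count);  // never rewrite partial chunks
    }

    public override void Flush()
    {
        string html = Encoding.UTF8.GetString(_buffer.ToArray());

        // Illustrative rule: point static content at a CDN host.
        html = Regex.Replace(html, @"http://legacy\.example\.com/static/",
                             "http://cdn.example.com/");

        byte[] output = Encoding.UTF8.GetBytes(html);
        _inner.Write(output, 0, output.Length);
        _inner.Flush();
    }

    // Minimal Stream plumbing required by the abstract base class.
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position { get { return 0; } set { } }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { }
}

The HTTP Module would then plug it in with Response.Filter = new UrlRewriteFilter(Response.Filter); for the pages it applies to.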
Strategy 2
Overriding Page's Render method in a way such as described here
Pros: doesn't suffer the chunking problem
Cons: requires defining a base class for all pages. Feasible for new applications, but I'm not sure about maintaining legacy applications. It also seems to have a problem, as you cannot instantiate HttpTextWriter directly. Obviously, for the new web apps we'll have to develop, I would adopt strategy 2, but I really like using dynamic components, as they can be plugged in with ease when the application requires them (so if our new app is installed without a CDN, the feature is turned off).
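For completeness, the core of Strategy 2 usually looks something like the sketch below (RewriteUrls stands in for whatever rule engine is used); it buffers the output through a StringWriter/HtmlTextWriter pair rather than trying to instantiate an HttpTextWriter.

// In the shared base page class (System.IO and System.Web.UI assumed imported).
protected override void Render(HtmlTextWriter writer)
{
    using (var buffer = new StringWriter())
    using (var bufferedWriter = new HtmlTextWriter(buffer))
    {
        base.Render(bufferedWriter);      // let the page render into memory
        string html = buffer.ToString();
        writer.Write(RewriteUrls(html));  // then emit the rewritten HTML
    }
}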
Briefly, my questions are: how would you fix both strategies' cons (particularly the first)? And, of course, do you have other strategies to suggest for achieving this objective?