Let's say a security tester uses a proxy, say Fiddler, and records an HTTPS request made with the administrator's credentials. On replaying the entire request (including the session and auth cookies), the security tester is able to successfully re-execute the transaction. The claim is that this is a sign of a CSRF vulnerability.
What would a malicious user have to do to intercept the HTTPS request and replay it? Is this a task for script kiddies, well-funded military hacking teams, or time-traveling alien technology? Is it really so easy to record the SSL sessions of users and replay them before the tickets expire?
No code in the application currently does anything interesting on HTTP GET, so AFAIK, tricking the admin into clicking a link or loading an image with a malicious URL isn't an issue.
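For context on what addressing that finding usually looks like: a state-changing action protected by an anti-forgery token only succeeds when the posted token matches the token cookie issued to that user, which a third-party site cannot read. A minimal sketch, assuming the application is ASP.NET MVC and using a hypothetical TransferFunds action:

```csharp
using System.Web.Mvc;

public class AccountController : Controller
{
    // The matching form must emit the hidden token with
    // <%= Html.AntiForgeryToken() %> inside the <form>.
    [AcceptVerbs(HttpVerbs.Post)]
    [ValidateAntiForgeryToken]
    public ActionResult TransferFunds(decimal amount)   // hypothetical state-changing action
    {
        // Only runs when the posted token matches the anti-forgery cookie.
        return RedirectToAction("Index");
    }
}
```

Note that the Fiddler replay described above would still succeed, since it carries the victim's own cookies and token; what the token defeats is a request forged from another site, which is the scenario CSRF actually describes.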
This involves using the Membership provider to add a note to each user's server-side record when they log in and out, and then confirming, whenever a cookie is used to authenticate, that the user hasn't logged out. This makes perfect sense to me. Where it starts to fall apart is that we do not currently use a Membership provider, so it seems like I face reimplementing all our authentication code around one. We currently check authentication in a controller and call FormsAuthentication.SetAuthCookie() once we know the user exists. It would be a lot of work to force a Membership provider in.
Is all this work really necessary? Can I roll my own key-value store mapping cookie values to logged-in users and just make sure I clear it when a user hits the logout button? If that seems unsafe, is there a way of implementing a minimal Membership provider in order to make these checks without handing all authentication code over to it?
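The check itself doesn't depend on the Membership provider; what matters is a server-side record written at login, removed at logout, and consulted on every authenticated request. A minimal sketch, assuming a single web server (an in-memory set will not survive an app-domain recycle or a web farm, and the class names are placeholders):

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Security;

// Purely in-memory record of who is currently logged in. A database table
// would be the durable choice; this only shows the shape of the check.
public static class ActiveLogins
{
    private static readonly HashSet<string> _users =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase);
    private static readonly object _gate = new object();

    public static void Add(string userName)      { lock (_gate) { _users.Add(userName); } }
    public static void Remove(string userName)   { lock (_gate) { _users.Remove(userName); } }
    public static bool Contains(string userName) { lock (_gate) { return _users.Contains(userName); } }
}

public class Global : HttpApplication
{
    // After forms authentication has decrypted the cookie and set the identity,
    // make sure the user hasn't logged out since the cookie was issued.
    protected void Application_PostAuthenticateRequest(object sender, EventArgs e)
    {
        if (Context.Request.IsAuthenticated &&
            !ActiveLogins.Contains(Context.User.Identity.Name))
        {
            FormsAuthentication.SignOut();
            Response.Redirect(FormsAuthentication.LoginUrl, true);
        }
    }
}
```

The existing controller would call ActiveLogins.Add right after FormsAuthentication.SetAuthCookie and ActiveLogins.Remove in the logout action; swapping the static set for a database table gives the same behaviour durably.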
I guess my main problem here is that we decided a long time ago that the Membership provider model doesn't fit the model we use for locking and unlocking accounts, and chose not to use it. Now we find that the MS recommendations specifically mention a Membership provider, and since this is security, I need to be sure that not using it as they recommend isn't going to cause trouble.
I have an ASP.NET XML web service (asmx) running on .NET 3.5. I am trying to figure out how best to prevent replay attacks. Is there any inherent protection in .NET 3.5 that should mitigate this issue, or do I need some kind of SOAP header token value?
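There is nothing built into plain asmx on .NET 3.5 that rejects replays; the usual pattern is exactly the SOAP header token mentioned, carrying a per-request nonce plus a timestamp that the service refuses to accept twice. A minimal sketch, with made-up service, namespace, and expiry values, and assuming the transport is already HTTPS (otherwise the header itself would need signing):

```csharp
using System;
using System.Web;
using System.Web.Services;
using System.Web.Services.Protocols;

public class RequestToken : SoapHeader
{
    public string Nonce;         // the client generates e.g. a new GUID per call
    public DateTime IssuedUtc;   // lets the server reject stale requests
}

[WebService(Namespace = "http://example.com/")]   // placeholder namespace
public class OrdersService : WebService
{
    public RequestToken Token;

    [WebMethod]
    [SoapHeader("Token", Direction = SoapHeaderDirection.In)]
    public string PlaceOrder(string orderXml)
    {
        if (Token == null || string.IsNullOrEmpty(Token.Nonce))
            throw new SoapException("Missing request token.", SoapException.ClientFaultCode);

        // Reject anything older than 5 minutes, then remember the nonce for
        // that window so an identical replay is refused.
        if (Token.IssuedUtc < DateTime.UtcNow.AddMinutes(-5))
            throw new SoapException("Request token expired.", SoapException.ClientFaultCode);

        string cacheKey = "nonce:" + Token.Nonce;
        if (HttpRuntime.Cache[cacheKey] != null)
            throw new SoapException("Duplicate request.", SoapException.ClientFaultCode);
        HttpRuntime.Cache.Insert(cacheKey, true, null,
            DateTime.UtcNow.AddMinutes(5), System.Web.Caching.Cache.NoSlidingExpiration);

        return "ok";   // placeholder for the real work
    }
}
```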
What's the simplest and most effective way to selectively redirect HTTP requests for your ASP.NET pages to their HTTPS equivalents? For example, if my site URL is [URL], I want to redirect some (or all) page requests to [URL]. What's the easiest way to do that?
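One lightweight way to keep the redirect selective is a base page class that only the pages needing SSL inherit from; a minimal sketch (the SecurePage name is just a placeholder):

```csharp
using System;
using System.Web;
using System.Web.UI;

public class SecurePage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        // Anything arriving over plain HTTP (except local dev requests)
        // gets bounced to the same URL on https.
        if (!Request.IsSecureConnection && !Request.IsLocal)
        {
            UriBuilder secure = new UriBuilder(Request.Url)
            {
                Scheme = Uri.UriSchemeHttps,
                Port = 443
            };
            Response.Redirect(secure.Uri.AbsoluteUri, true);
        }
    }
}
```

Pages that should stay on plain HTTP keep deriving from Page; if the rule ever becomes "everything", the same check can move into Application_BeginRequest or an IIS rewrite rule instead.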
I have web services built with ASP.NET and ASP.NET clients consuming them. When consuming the web services, how would I force the clients to use HTTPS?
I don't want to force the whole site to use https by turning on require SSL in IIS.
Can I use the IIS7 URL rewrite module to re-route http requests to https?
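The IIS7 URL Rewrite module can redirect HTTP to HTTPS, but a redirect is easy for a misbehaving client to ignore, especially on POSTs; a complementary option is for the service itself to refuse plain HTTP, which forces clients onto the https address without turning on Require SSL for the whole site. A minimal sketch for an asmx method (service and method names are made up):

```csharp
using System;
using System.Web.Services;

[WebService(Namespace = "http://example.com/")]   // placeholder namespace
public class PricingService : WebService
{
    [WebMethod]
    public decimal GetPrice(string sku)
    {
        // Reject callers that did not arrive over SSL, so clients must use
        // the https address even though Require SSL is off site-wide.
        if (!Context.Request.IsSecureConnection)
            throw new InvalidOperationException("This service must be called over HTTPS.");

        return 0m;   // placeholder for the real lookup
    }
}
```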
I would now like to check whether my code is still open to SQL injection after this work. I believe the code is now working as it should, but if you spot any glaring errors I'd love to hear about those too. My code now looks like:
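The code sample itself didn't come through, so purely as a point of comparison, here is a minimal sketch of what injection-safe data access generally looks like; the table, column, and connection-string names are invented:

```csharp
using System.Configuration;
using System.Data.SqlClient;

public static class CustomerLookup
{
    public static int CountByName(string name)
    {
        string connectionString =
            ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;   // hypothetical name

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT COUNT(*) FROM Customers WHERE Name = @name", connection))
        {
            // The value travels as a parameter, never as part of the SQL text,
            // so input like "x'; DROP TABLE Customers;--" is treated as data.
            command.Parameters.AddWithValue("@name", name);

            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```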
When a user presses Button1 on the web page, I would like to copy a slightly modified string from txt1 (Text) into txt2 (Text). The problem is that sometimes I get the error "A potentially dangerous Request.Form value was detected from the client". I get this error when special symbols like "<" or ">" are in txt1.Text. I've read about that problem: the error exists to protect against attackers who could inject scripts via txt1. All I did is:
1) Put validateRequest="false" into the @ Page directive of Default.aspx: <%@ Page Language="VB" validateRequest="false" ... %>.
Now it works and lets me take any data from txt1, slightly modify it, and put it into txt2. So my question is: was the level of security reduced by setting validateRequest="false"? Should any code be added to keep the level of security up? Or would I be better off using another way to copy txt1 to txt2?
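Switching request validation off on that one page is a common trade-off; the protection then rests on encoding the value anywhere it is later written out as markup. A minimal sketch (shown in C# for illustration; the "slight modification" and the lblPreview label are invented stand-ins):

```csharp
using System;
using System.Web;
using System.Web.UI;

public partial class _Default : Page
{
    protected void Button1_Click(object sender, EventArgs e)
    {
        // Hypothetical "slight modification" of the input.
        string modified = txt1.Text.Replace("foo", "bar");

        // A TextBox HTML-encodes its value when it renders, so copying the
        // raw string into txt2 is fine.
        txt2.Text = modified;

        // But anywhere the value is emitted as markup (a Label, a Literal,
        // an inline <%= %>), encode it so a typed <script> tag stays inert.
        lblPreview.Text = HttpUtility.HtmlEncode(modified);
    }
}
```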
I'm using a LinqDataSource control. In the Selecting event I have code like the following for a simple search feature:
[Code]....
In general, would dynamically building the Where property of a LinqDataSource be vulnerable to SQL injection? Or does the control protect against this internally?
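As far as I understand it, LINQ to SQL turns the values in the resulting query into SQL parameters, so classic SQL injection is unlikely even with a concatenated Where string, though hand-built expressions can still be broken or abused by crafted input. The safer pattern is to reference the user input through a named parameter; a minimal sketch, with an invented field and search box:

```csharp
using System.Web.UI.WebControls;

public partial class SearchPage : System.Web.UI.Page
{
    protected void LinqDataSource1_Selecting(object sender, LinqDataSourceSelectEventArgs e)
    {
        // Reference the search text through a named parameter instead of
        // concatenating it into the Where expression.
        LinqDataSource1.Where = "ProductName.Contains(@SearchText)";
        LinqDataSource1.WhereParameters.Clear();
        LinqDataSource1.WhereParameters.Add("SearchText", txtSearch.Text);
    }
}
```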
Where does the following HTTP error message come from?
Due to the presence of characters known to be used in Cross Site Scripting attacks, access is forbidden. This web site does not allow Urls which might include embedded HTML tags.
We're using dynamically generated URLs, and in this specific case the URL contains the characters '<' or '>'. We do URL-encode the generated URL (so '%3C' appears instead of '<'), but it doesn't help. Our setup is ASP.NET MVC / IIS 7.5 / IE8. Strangely, the error appears only on some machines, so it could be that the IE Internet zone settings are playing a role.
I am using Microsoft Visual Web Developer 2010 to build and publish my website, and I am facing a security problem. My website has an authentication service for my clients; each one has his own user name and password. After I added a new member, my database was destroyed; maybe this last member is a hacker. Is there a way to close security vulnerabilities and prevent future attacks? Maybe through web.config, which could perhaps be encrypted.
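Encrypting web.config only protects the secrets stored in the file (connection strings, credentials); it does not stop SQL injection or password guessing, which are more likely ways a member account could damage the database. That said, protecting the connection strings section is cheap to do; a minimal sketch using the built-in DPAPI provider:

```csharp
using System.Configuration;
using System.Web.Configuration;

public static class ConfigProtector
{
    public static void ProtectConnectionStrings()
    {
        // Open the running application's web.config and encrypt the
        // <connectionStrings> section in place (requires write permission).
        Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
        ConfigurationSection section = config.GetSection("connectionStrings");

        if (section != null && !section.SectionInformation.IsProtected)
        {
            // DPAPI provider: ties the encryption to the machine the site runs on.
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save();
        }
    }
}
```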
We have an application built on ASP.NET MVC 1.0 which, once deployed, should be accessed over HTTPS. I have tried a few approaches for HTTPS, but I still have some questions. My home page does not need to be secured (HTTPS), but the rest of the hyperlinks following it will be. I read about the [RequireHttps] action method attribute, but I want to understand what happens with that attribute during development on a local machine. In a development environment, how do I install a certificate on a dev machine/virtual directory so I can code and test my changes? This application is complex in nature: we have around 13 controllers and 50 action methods, and it will handle information like credit card numbers since we accept payments through this website.
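One way to sidestep the local-certificate problem during development is a filter that behaves like [RequireHttps] in production but is a no-op for local requests. A minimal sketch (the attribute is ours, not something shipped with MVC 1.0, and the hard-coded port is an assumption):

```csharp
using System;
using System.Web;
using System.Web.Mvc;

public class RequireHttpsUnlessLocalAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        HttpRequestBase request = filterContext.HttpContext.Request;

        // Skip the redirect while developing on localhost / the built-in web server.
        if (request.IsLocal || request.IsSecureConnection)
            return;

        // Only GETs can be redirected safely; anything else is rejected.
        if (!string.Equals(request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase))
            throw new InvalidOperationException("This action must be requested over HTTPS.");

        UriBuilder secureUrl = new UriBuilder(request.Url) { Scheme = "https", Port = 443 };
        filterContext.Result = new RedirectResult(secureUrl.Uri.AbsoluteUri);
    }
}
```

For end-to-end testing with a real certificate, IIS 7 can generate a self-signed certificate, which is enough to exercise the HTTPS paths even though browsers will warn about it.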
I have experienced some trouble when using the IIS server on my workstation with Windows 7. This is a development machine and I don't need to use it as a production server or anything, but for some tests it's quite useful to see what happens when a lot of requests come in concurrently (in this case even in the same session).
I have learned that with my edition of Windows 7 the limit is 10 requests, but I thought that only meant the number of requests that can be served at any point in time. What I am experiencing instead is that after firing 10 requests one by one, if the first one didn't complete before the last one was fired, it never completes. The whole of IIS is dead: no further requests are put into the worker process queue (there are already 10 requests hanging there, so that kind of makes sense), and the only way to recover is to restart.
Is this standard behavior that cannot be changed on Windows 7, and does firing 10 requests really have to kill IIS (or at least the current worker process)? Is there some way to change the configuration to fix it (without compromising the setup by creating a bunch of worker processes, etc.)?
Let's imagine there are two pages on the web site: quick and slow. Requests to the slow page take one minute to execute; requests to the quick page take 5 seconds. For my whole development career I have thought that if the first request to start is the slow one, it will make a (synchronous) call to the DB and wait for the answer, and if a request to the quick page comes in during this time, that request will be processed while the system is waiting for the response from the DB. But the documentation ([URL]) says: "One instance of the HttpApplication class is used to process many requests in its lifetime. However, it can process only one request at a time. Thus, member variables can be used to store per-request data." Does that mean my original thoughts are wrong? Could you please clarify what they mean? I am pretty sure things are as I expect...
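My reading is that the quoted sentence is about a single HttpApplication instance, not the application as a whole: ASP.NET keeps a pool of these instances and hands concurrent requests to different ones, so the quick page is not blocked by the slow one (unless something else, such as the session-state lock for the same user, serializes them). A small sketch that makes the pooling visible in the debug output:

```csharp
using System;
using System.Diagnostics;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Hitting the slow and quick pages at the same time (from different
        // sessions) should log different instance numbers: each HttpApplication
        // handles one request at a time, but there are many of them.
        Debug.WriteLine(string.Format("{0} -> HttpApplication instance #{1}",
            Request.RawUrl, GetHashCode()));
    }
}
```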
When I open my page in Chrome and use the Resource Tracker, at the bottom of the list of requests, there are two GET requests to the aspx file. They take about 2 seconds each. Each request also causes a warning:
Resource interpreted as image but transferred with MIME type text/html.
Why would a page be requesting itself, and why is the response being treated as an image?
We are using a WCF service hosted in a console application, and we are now hitting it concurrently with LoadRunner.
When firing 50 hits at a time, some requests (26) complete successfully and the rest fail with: TCP error code 10061: No connection could be made because the target machine actively refused it.
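Error 10061 means the connection was refused outright, so the limits on the listening side are worth checking before the service code; for a self-hosted WCF service on .NET 3.5 the default throttling values are quite low (roughly 10-26 concurrent sessions/calls/instances), which lines up suspiciously with only 26 requests getting through. A minimal self-hosting sketch that raises them (contract, address, and the chosen limits are all placeholders, and raising them is something to test, not a guaranteed fix):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Ping(string text);
}

public class EchoService : IEcho
{
    public string Ping(string text) { return text; }
}

class Program
{
    static void Main()
    {
        using (ServiceHost host = new ServiceHost(typeof(EchoService)))
        {
            // Address and binding are illustrative only.
            host.AddServiceEndpoint(typeof(IEcho), new BasicHttpBinding(),
                "http://localhost:8080/echo");

            // Raise the default throttling limits before opening the host.
            ServiceThrottlingBehavior throttle =
                host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }
            throttle.MaxConcurrentCalls = 100;
            throttle.MaxConcurrentSessions = 100;
            throttle.MaxConcurrentInstances = 100;

            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```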
I have developed some ASP.NET server controls which include their own javascript and css files. A lot of these controls use jQuery extensions which, as you know, often include their own css files.
I'm using Telerik's RadScriptManager, which combines the JavaScript like a boss. However, I'm using the AjaxToolkit's ClientCssResource attribute to include the CSS files in my server controls, and I have noticed that the CSS files are not getting combined at all. My pages have 10-15 WebResource.axd requests for the CSS files of my server controls.
Everything I can find is about combining JavaScript, and nothing tells me how I can combine the CSS files. Does anyone know if there is a way to combine the CSS dynamically? (I don't want to combine them manually, as each page might use a different subset of the server controls.)
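I'm not aware of anything built in that combines WebResource.axd stylesheets; one workaround, if the styles can also live as physical .css files, is a small combining handler that each page points at with the list of files it needs (for example CombineCss.ashx?files=grid.css,dialog.css). A minimal sketch (handler name, folder, and caching policy are assumptions, and it deliberately does not attempt to read CSS embedded in assemblies):

```csharp
using System;
using System.IO;
using System.Web;

public class CombineCssHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/css";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(7));

        string files = context.Request.QueryString["files"] ?? string.Empty;
        foreach (string name in files.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
        {
            // Only serve .css files from the application's own Styles folder.
            string safeName = Path.GetFileName(name.Trim());
            if (!safeName.EndsWith(".css", StringComparison.OrdinalIgnoreCase))
                continue;

            string path = context.Server.MapPath("~/Styles/" + safeName);
            if (File.Exists(path))
            {
                context.Response.Write(File.ReadAllText(path));
                context.Response.Write("\n");
            }
        }
    }
}
```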
I am planning to serve all my images from another domain which is cookieless, but I don't want to alter all my image locations. Can I do this without changing the image URLs in my pages?
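One way to leave the markup alone is to rewrite the image paths as the HTML streams out, using a response filter. A minimal sketch (the static host name and the "/images/" prefix are placeholders, and the string replace is deliberately naive; it assumes an src attribute is never split across two Write calls):

```csharp
using System;
using System.IO;
using System.Text;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Only page responses need rewriting; let static files stream untouched.
        if (Request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
            Response.Filter = new ImageHostFilter(Response.Filter);
    }
}

// Rewrites "/images/..." references to the cookieless host as the page is written.
public class ImageHostFilter : Stream
{
    private readonly Stream _inner;
    public ImageHostFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        string html = Encoding.UTF8.GetString(buffer, offset, count);
        html = html.Replace("src=\"/images/", "src=\"http://static.example.com/images/");
        byte[] rewritten = Encoding.UTF8.GetBytes(html);
        _inner.Write(rewritten, 0, rewritten.Length);
    }

    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override void Flush() { _inner.Flush(); }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}
```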
And immediately after, you hit any other page that this session ID would be passed to; it doesn't matter whether it does anything, so let's call it Test.aspx. The loading sequence is as follows.
I guess my question is how to disable this feature. I understand it's useful to have so that session state is more predictable, but in my case the long-running load of a reports page is killing users' ability to multitask.
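The blocking comes from the exclusive per-session lock that read-write session requests take, so the usual fix is to make the long-running report request not take that lock: for an .aspx page that means EnableSessionState="ReadOnly" (or "False" if it never touches session) in its @ Page directive. For illustration, a minimal sketch of the same idea for a handler (names are invented):

```csharp
using System.Web;
using System.Web.SessionState;

// Because this handler implements IReadOnlySessionState, it can read session
// data but never takes the exclusive per-session lock, so other requests from
// the same user are not queued behind it while the report runs.
public class LongReportHandler : IHttpHandler, IReadOnlySessionState
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        object userName = context.Session["UserName"];   // hypothetical session key

        // ... long-running report generation would go here ...

        context.Response.ContentType = "text/plain";
        context.Response.Write("Report for: " + userName);
    }
}
```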
I am calling a web service from my aspx page. The web service (written in Java) acts as middleware between my system and another system (Siebel), to which I send requests and from which I get responses. Some requests are synchronous. Sometimes when invoking a method the response takes a long time, so a timeout exception is thrown.
The problem is that the web service is receiving the same request many times even though I call it only once. My log file and database entries make it clear that the request is issued only once, but on the middleware and Siebel side they are receiving four or five copies of the same request.
Is this a bug in ASP.NET? Is it possible that the server where my application is deployed is resending the request when it doesn't get a response? Note: I am using Visual Studio 2005, and the application is deployed to Windows Server 2003. I am not discussing the timeout problem; I am asking about the duplicate requests.
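It is hard to say from here which hop is resending, but the proxy generated for an asmx-style client does give a place to rule the .NET side out: overriding GetWebRequest lets you raise the timeout and switch off keep-alive, which removes one known source of automatic resends (a request silently retried when a pooled connection has gone stale). A minimal sketch (the class name stands in for the Visual Studio-generated proxy; in a real project the override would go in a partial class with the same name as the generated one):

```csharp
using System;
using System.Net;
using System.Web.Services.Protocols;

public class SiebelMiddlewareClient : SoapHttpClientProtocol
{
    protected override WebRequest GetWebRequest(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
        request.Timeout = 5 * 60 * 1000;   // 5 minutes, in milliseconds
        // Turning keep-alive off avoids the automatic resend that can happen
        // when a pooled connection is dropped between calls; whether that is
        // the source of the duplicates here is an open question.
        request.KeepAlive = false;
        return request;
    }
}
```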
I am trying to create a 404 handling page, but I am stuck with the page only working for .aspx files, which isn't really what I need. I am running on IIS 6. The site has a wildcard mapping for extensionless URLs. All requests go through Application_BeginRequest in Global.asax, but not all errors go through Application_Error. Is there a way I can get Application_Error to be raised for non-.aspx files?
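Since the wildcard mapping already routes every request through Application_BeginRequest, one option is to raise the 404 yourself there for paths that resolve to nothing; an HttpException thrown at that point flows into Application_Error like any other error. A minimal sketch (the "does this URL resolve to anything" test below only checks for physical files and directories, which is an assumption; the application's real extensionless routing rules would need to be consulted instead):

```csharp
using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string physicalPath = Request.PhysicalPath;
        bool isAspx = Request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase);

        // Non-.aspx requests that map to nothing on disk get a 404 that
        // Application_Error (and the custom error page) can then handle.
        if (!isAspx && !File.Exists(physicalPath) && !Directory.Exists(physicalPath))
        {
            throw new HttpException(404, "Not found: " + Request.RawUrl);
        }
    }
}
```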