
Debugging with HTTP Authentication Modules

I’ve gotten a bunch of e-mail recently, asking how to use the debugger when using my Basic or Digest authentication HTTP modules.  The problem is, if you disable Integrated Windows authentication on a virtual directory, the VS.NET debugger will be unable to start debugging within that vdir.

Here’s what I do. Say I have a virtual directory FooService; I create a subdirectory (not a vdir) under it called ‘basiconly’, for example.

   /FooService
      /basiconly

Inside basiconly, put the pages that you want to be secured.

The FooService vdir has Anonymous and Integrated Windows auth turned on. The basiconly directory has only Anonymous enabled.

The debugger will start successfully because it can access the root of the vdir using Windows auth. And when you browse to the secured pages within basiconly, access will be authenticated and you can watch the whole thing in the debugger.
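Incidentally, if you would rather express the same split in configuration instead of the IIS snap-in, ASP.NET’s <location> element can deny anonymous users for just the subdirectory. A minimal sketch, assuming one of my authentication modules is already registered in <httpModules> (as in the remoting example below):

<configuration>
  <!-- Only requests into /basiconly must authenticate; the vdir root
       stays open so the debugger can attach using Windows auth. -->
  <location path="basiconly">
    <system.web>
      <authorization>
        <deny users="?" />
      </authorization>
    </system.web>
  </location>
</configuration>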

Authentication and .NET Remoting

Prompted by a few questions, I thought I’d post about using my Basic and Digest authentication modules with .NET remoting. As you’re probably aware, .NET remoting does not include any built-in authentication mechanism out of the box. However, you can leverage IIS to authenticate requests if you wish; Ingo discusses using the IIS implementations of Basic and Integrated Windows authentication in his book. As he points out, though, Basic requires SSL, and IWA will not work with most typical firewall configurations. Well, as I’ve said in the past, Digest is a good alternative to these two.

I will describe here the necessary code to use the Basic or Digest authentication modules with .NET remoting in IIS. Some code has been omitted for brevity, but all the important parts are here. :-)

Imagine a remote object implementation in RemoteServer.dll, containing the following:

using System;
using System.Threading;

namespace RemoteServer
{
    public class Scoop : MarshalByRefObject
    {
        public Scoop()
        {
        }

        public string GetTheScoop(string name)
        {
            // Thread.CurrentPrincipal is populated by the authentication
            // module once the request has been authenticated.
            return "Here's the scoop, "
                + Thread.CurrentPrincipal.Identity.Name + "!";
        }
    }
}

To host this in IIS, your web.config file might look something like:

<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <wellknown mode="SingleCall"
                   type="RemoteServer.Scoop,RemoteServer"
                   objectUri="Scoop.soap" />
      </service>
    </application>
  </system.runtime.remoting>
</configuration>

If you want to use the Digest authentication module, all you need to do is add the required items to this file:

<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <wellknown mode="SingleCall"
                   type="RemoteServer.Scoop,RemoteServer"
                   objectUri="Scoop.soap" />
      </service>
    </application>
  </system.runtime.remoting>
  <system.web>
    <authentication mode="None" />
    <authorization>
      <deny users="?" />
    </authorization>
    <httpModules>
      <add name="DigestAuthenticationModule"
           type="Rassoc.Samples.DigestAuthenticationModule,DigestAuthMod" />
    </httpModules>
  </system.web>
  <appSettings>
    <add key="Rassoc.Samples.DigestAuthenticationModule_Realm"
         value="RassocDigestSample" />
    <add key="Rassoc.Samples.DigestAuthenticationModule_UserFileVpath"
         value="~/users.xml" />
  </appSettings>
</configuration>

And finally, modify the client code which accesses the service to provide credentials:

using System;
using System.Collections;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;
using RemoteServer;

static void Main(string[] args)
{
    ChannelServices.RegisterChannel(new HttpClientChannel());
    RemotingConfiguration.RegisterWellKnownClientType(
        typeof(RemoteServer.Scoop),
        "http://localhost/ScoopRemoting/Scoop.soap");

    Scoop srv = new Scoop();

    // Attach the credentials to the proxy's channel sink; the HTTP channel
    // will use them to answer the server's authentication challenge.
    IDictionary props = ChannelServices.GetChannelSinkProperties(srv);
    props["username"] = "test";
    props["password"] = "test";

    string s = srv.GetTheScoop("Greg");
    Console.WriteLine(s);
}

In general, the complete ASP.NET HTTP pipeline is available for your use when hosting remote objects in IIS.
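For example, HttpContext.Current is available inside a remoted method, just as it would be in a page. A minimal sketch (the WhoAmI method is mine, not part of the sample above):

using System.Web;

public string WhoAmI()
{
    // The context is populated because the request traveled through the
    // ASP.NET pipeline (including any HTTP modules) before reaching the
    // remoting handler.
    return HttpContext.Current.User.Identity.Name;
}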

Web Services and Schema Validation

There was a lot of talk at the DevCon about writing your web service interface first in WSDL, and then implementing it in code.  This is in contrast to the method that the current tools tend to encourage; for example, with VS.NET, you can write your code, decorate the methods with [WebMethod], and automatically generate the WSDL.  I wholeheartedly agree with the first way – define your interface first, and implement second.  There are things you can do this way that the tools just won’t do for you.

In the old days of COM, we might design an interface method in IDL like the following:

HRESULT IsOldGeezer([in] int age, [out] VARIANT_BOOL* pIsGeezer);

All we can tell about the method is it takes an age as input, which must be an integer value, and returns an HRESULT and a boolean value.  Valuable information – but what are the valid values for age?  Hmm…we have no way to tell, without reading the documentation (if there even is any).

When you design your interface in WSDL, you define the parameters for your service and their types.  The types are usually defined with XML-Schema.  So the input age parameter might be defined as something like:

<s:simpleType name="AgeType">
  <s:restriction base="s:int">
    <s:minInclusive value="1" />
    <s:maxInclusive value="120" />
  </s:restriction>
</s:simpleType>

Which gives us a lot more information – namely, the age must be an integer between 1 and 120.  Wow, this is great stuff – lots of information for our clients.  We had to write the above schema by hand (the .NET WSDL-generation tools will not generate such a restriction in the schema), but it’s well worth it just for the additional information we give to the client.  [It’s worth noting that most auto-generated client proxies will also ignore the schema restrictions, and just type the age as an int value.]

Now we’ve got a nice descriptive schema, and we implement the web service around it.  If you implement this service with .NET, you’re going to end up with a function signature that looks something like (a few attributes omitted for brevity):

[WebMethod]
public bool IsOldGeezer(int age) { ... }

And yep, we could certainly get called with an invalid age of, say, 421.  This is because the ASP.NET plumbing, by default, does not validate the incoming message against the schema.

With .NET, a good way around this problem is to write a SoapExtension to validate against the schema.  I wrote such an extension, and I was going to post it with this note, when I noticed that Aaron Skonnard wrote such an extension for an MSDN Magazine article.  I hate it when people beat me to it!  At any rate, though, you can just add an attribute to the web method to force a validation of the incoming message against the schema.
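To give a feel for the approach, here is a minimal sketch of such a validating extension. Everything here is hypothetical except the SoapExtension plumbing itself: the type names and the AgeService.xsd path are mine, and Aaron’s article has a complete, production-quality implementation.

// A schema-validating SoapExtension sketch. The companion
// SoapExtensionAttribute subclass that attaches it to a [WebMethod]
// is omitted for brevity.
using System;
using System.IO;
using System.Web.Services.Protocols;
using System.Xml;
using System.Xml.Schema;

public class ValidationExtension : SoapExtension
{
    private Stream oldStream;
    private Stream newStream;

    public override object GetInitializer(Type serviceType) { return null; }
    public override object GetInitializer(LogicalMethodInfo methodInfo,
        SoapExtensionAttribute attribute) { return null; }
    public override void Initialize(object initializer) { }

    // Interpose a buffer so we can read the request before ASP.NET does.
    public override Stream ChainStream(Stream stream)
    {
        oldStream = stream;
        newStream = new MemoryStream();
        return newStream;
    }

    public override void ProcessMessage(SoapMessage message)
    {
        if (message.Stage == SoapMessageStage.BeforeDeserialize)
        {
            // Buffer the raw request, validate it, then rewind it so
            // normal deserialization can proceed.
            Copy(oldStream, newStream);
            newStream.Position = 0;

            XmlValidatingReader reader =
                new XmlValidatingReader(new XmlTextReader(newStream));
            reader.ValidationType = ValidationType.Schema;
            reader.Schemas.Add(null, "AgeService.xsd"); // hypothetical path
            reader.ValidationEventHandler +=
                new ValidationEventHandler(OnValidationError);
            while (reader.Read()) { }

            newStream.Position = 0;
        }
    }

    private void OnValidationError(object sender, ValidationEventArgs e)
    {
        // Reject the request with a client fault instead of processing it.
        throw new SoapException("Request failed schema validation: "
            + e.Message, SoapException.ClientFaultCode);
    }

    private void Copy(Stream from, Stream to)
    {
        TextReader reader = new StreamReader(from);
        TextWriter writer = new StreamWriter(to);
        writer.Write(reader.ReadToEnd());
        writer.Flush();
    }
}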

This brings me to my main point of this post (took me long enough!).  I’ve said in the past to be extensible with your service interfaces, but to draw the line in the sand with your schema.  Use it to your advantage!  Use the great features of XML-Schema (like parameter value ranges, optional elements, substitution, etc.) which aren’t readily available when defining interfaces in other ways.  And as long as you have the CPU cycles to spare, validate the requests against the schema.

 

Groove Experiments – First Public Report

A dozen or so of us have been tossing around a lot of great ideas in the Groove Experiments shared space. One of our concerns, of course, is how to seamlessly share our findings with the wider public. Tonight, we decided to re-focus completely in a single new direction. We felt that instead of continuing to be somewhat abstract, it would be better to take one of our ideas, discuss it, form requirements, and start writing code! We have decided to focus on a Groove-to-weblog interface. We do realize that there have been two previous partial implementations, which we will be looking at: Tim Knipp’s Blogger Tool and the Agora Groovelog. One of the members is looking into those two. We realize that this kind of dump from me here now is not optimal. Ideally we would like to have things available publicly, in real time, as they happen. Maybe this Tool or Solution will go a long way toward that. [reposted from Sam Gentile’s Radio Weblog]

Web Services in the Travel Industry

Similarly, Sabre Inc. (www.sabre.com) provides customers access to its Web services by supplying a module that runs on its partners’ Web sites. The XML Travel Toolkit module is designed to let partner companies build Web sites to allow their customers to book travel. For instance, a brick-and-mortar travel agent would build a Web site, and then, to allow its customers to make last-minute changes to an itinerary, the travel agent would connect its Web site to the Sabre system using the XML Travel Toolkit.

“The travel agent can extend loyalty for a customer who might otherwise sneak over to a travel dot-com,” said Cheryl Hutchinson, product manager for the XML Travel Toolkit. The reason Sabre chose to distribute software, rather than run it all locally at Sabre, is that Sabre communicates with its module using its own lightweight, structured communications language, rather than XML, which requires more bandwidth.  [SD Times] via [.netWire]

Hmm.  So Sabre has services available, which require proprietary software on the client end, use a proprietary transport protocol, and a proprietary data format (the XML mentioned is only used between the client and the proprietary client library).  Does this have anything to do with web services?  Galileo has had a similar product available for several years, called XML Select.  This is nothing new!

However, Galileo also has “real” web services, using industry standard protocols and data formats (you know, SOAP and HTTP).  They even have huge customers (like AAA) using them.  Good stuff.

Disclaimer – I worked on both XML Select and the web services products at Galileo…so I’m prone to shamelessly plug them whenever possible.  However, if you want to keep track of who’s really doing it with web services, Google will tell you (for Sabre and Galileo).  The top hit for Sabre is about the XML Toolkit described above…

WS-Security

Microsoft Corp. and IBM, which, along with VeriSign Inc., published the original Web Services-Security specification, are now in two camps that have contrasting views over what should be done with the specification, also known as WS-Security.  [eWeek]

Microsoft and IBM disagreeing?  Ah, the world is coming back to normal.  Maybe the stock market will bounce back now, too.

Seriously, though, there is something missing from the WS-Security spec, as I see it.  When I want to access a web service secured via WS-Security, I have no way of knowing that it uses WS-Security, and even if I do, I don’t necessarily know what type of credentials I need to supply (username/password, certificate, etc.), and I don’t know if I am required to encrypt and/or sign the message.  Perhaps this information should be published as WSDL extensions.  The article referenced mentions IBM wants encryption-related WSDL extensions, but doesn’t go into any more details.

Web Services Threat Detection

Justin writes:

Greg is asking about something that is near to my heart – Web Services Threat Detection. Or as I’ve heard marketing people call it (and this is long, folks) – how the hell do we keep people from accessing the web services who are delinquent on payment and aren’t delinquent because they are dissatisfied with the service?

That is one of the problems I tried to solve at my ex-gig, although it was always unofficial because no one wanted to sanction that project.

Unfortunately the best answer I ever came up with is this – I couldn’t do it in 100% code. It requires people. I could monitor everything that comes in (who, what, when, and from where). I could check whether or not they were supposed to be sending the particular type of message at that particular moment in time. I could shut that message down or route it to what Greg calls a honey-pot (never heard that before). But when it comes to intelligently doing it – I could never figure out how to tie the above logic into the CRM system. Or if I would really want to. Should my system really try to be smart enough to detect AND take action other than notification?

So what did I end up with? Nothing tangible. Just a bunch of ideas. Maybe I’ll get a chance to implement them at the next gig…

But I do agree with Greg. A system that can do all that he lists is going to be a specialized system. You might be able to find something like what I built on the market. But the intelligence you will have to integrate into your CRM somehow.  [News from the Forest]

Well, actually the specific problem you mention there in marketing-ese :-) seems like the simplest subset of the problem.  If a customer is delinquent on their payment, but they’re not dissatisfied, then presumably your automated billing system has already sent them warnings; and at some point the web services infrastructure should just shut them off.  Disable their account.  Unless I’m misinterpreting the described scenario, this is the trivial case.  This would presumably be driven from the CRM system into the web services infrastructure, so there isn’t a difficult CRM tie-in required.

The more difficult scenarios, in my mind, are as follows:

1. You have a customer who has accidentally coded up a bug which calls your service in a very fast loop.  Instead of getting 1 request per second from this customer, you’re now getting 25 requests per second.  We need to, at some level, slow down or disable this customer’s access until they get it fixed.  Note that discerning this case from the case where the customer is having an unusually busy day is non-trivial, and will probably be business-specific.  As Justin says, this might require input from the CRM system, to look at typical usage patterns for this specific customer, and see if something appears to be amiss.  (A simple throttling sketch follows this list.)

2. You have a customer who, potentially through no fault of their own, is repeatedly making valid requests which trigger a bug in your system, possibly crashing it, or causing abnormally high resource usage.  If the customer keeps retrying this request until he gets a valid response, for example, this might consume huge amounts of resources.

3. You have a malicious customer, with a valid account, who decides that now is a good time to see if he can take down your system by swamping it with requests.  Now, this will probably be punishable legally, but punishment after the fact is not sufficient to keep our system running today.  We need to stop this attack as it is happening, to protect our system.

4. The malicious customer in (3) above becomes a bit more clever; instead of swamping you with requests, he repeatedly decrements your inventory, for example.

5. A malicious attacker who has somehow gained access to your system, performing any of the above attacks.
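Here is the throttling sketch promised in (1): a minimal per-customer token bucket, refilled at the customer’s expected rate, so a runaway loop gets slowed down rather than served. All names and thresholds are hypothetical; real limits would presumably come from the CRM or usage history.

using System;

public class CustomerThrottle
{
    private double tokens;           // requests currently allowed
    private double refillPerSecond;  // customer's expected steady-state rate
    private double burstCapacity;    // slack for a legitimately busy day
    private DateTime lastRefill = DateTime.UtcNow;

    public CustomerThrottle(double expectedRatePerSecond, double burst)
    {
        refillPerSecond = expectedRatePerSecond;
        burstCapacity = burst;
        tokens = burst;
    }

    // Returns false once the customer exhausts their allowance; the caller
    // can then queue, reject, or flag the request for a human to review.
    public bool TryAcquire()
    {
        lock (this)
        {
            DateTime now = DateTime.UtcNow;
            tokens = Math.Min(burstCapacity,
                tokens + (now - lastRefill).TotalSeconds * refillPerSecond);
            lastRefill = now;

            if (tokens < 1.0)
                return false;

            tokens -= 1.0;
            return true;
        }
    }
}

A runaway client sending 25 requests per second against an expected rate of 1 per second drains the bucket within seconds and is refused thereafter, while an occasional legitimate burst fits within the configured slack.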

There are also lots of business-specific threats.  For example, if I’m selling cookies with my service, and you don’t have to pay until you take delivery, then an obvious attack is someone falsely reserving one box of cookies per minute for a week.  Difficult to detect in real time, but arguably necessary.

Oh and the honeypot thing – the idea here is if you detect an unauthorized intruder, either attempting to gain access or browsing around in your system after gaining access – you redirect him to a “honeypot”.  This would be a dedicated cluster of systems, which look just like your production systems, but are not actually production systems.  The attacker can do all the damage he wants, and you can even entice him with seemingly valuable data.  While he’s attacking these honeypot systems, you and/or the authorities can begin tracing his location and identity.

Web Services Threat Detection

A while back in another life, we considered the idea of a real-time threat detection system for our web services. The idea was we could build (or buy) an infrastructure component that could analyze the incoming bit stream, detect anomalies, and react appropriately. The “anomalies” detected would be along the lines of:

1. Unauthorized customer repeatedly attempting to gain access
2. Repeated requests causing errors
3. Unusually heavy volume of requests coming from a specific customer (have to be careful here; can’t shut down amazon.com at Christmas time)
4. Repeated malformed requests
5. Lots of other items…

Reactions would be along the lines of:

A. Cut off a customer’s access (realtime)
B. Re-route requests to a “honeypot” for analysis
C. Notifications (event log, email) for certain detected events
D. Other options (throttle requests, notify firewall to block IP, etc.)

This system would have to be blazingly fast; large added latency to the overall request/response could not be tolerated for many applications.  Building a system like this is highly complex and application-dependent; for example, the very existence of a credible threat might depend on the cost to service a request. If it costs more to turn away an evil (but authorized) request than it does to just process it, then you have to make a call.

It also ties in with things like guaranteed levels of service. If you have to guarantee some subset of your customers a certain response time, but you only have so many cycles to spare, then you’ve got to prioritize your traffic; but your threat detection system must, at the same time, analyze the traffic and help do this prioritization.

Everything I have mentioned here is outside the scope of a “typical” off-the-shelf intrusion detection system, I believe, as it must incorporate logic that knows about my services, the resources they use, and the cost of certain requests. The IDS is still required to detect “normal” attacks; but for service-dependent threats, you need a new system.
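To make the shape of such a component concrete, here is a hedged sketch of how it might plug into the ASP.NET pipeline as an HTTP module. Every type name here is hypothetical, and the detection logic itself (the hard, application-specific part) is stubbed out:

using System;
using System.Web;

public enum Verdict { Allow, Throttle, Honeypot, Block }

public interface IThreatDetector
{
    // Must be blazingly fast; this sits on every request's critical path.
    Verdict Inspect(HttpRequest request, string customerId);
}

// Placeholder detector; a real one holds the application-specific logic.
public class AllowAllDetector : IThreatDetector
{
    public Verdict Inspect(HttpRequest request, string customerId)
    {
        return Verdict.Allow;
    }
}

public class ThreatDetectionModule : IHttpModule
{
    private IThreatDetector detector = new AllowAllDetector();

    public void Init(HttpApplication app)
    {
        app.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        // How you identify the customer is application-specific; a header
        // is used here purely for illustration.
        string customer = app.Request.Headers["X-Customer-Id"];

        switch (detector.Inspect(app.Request, customer))
        {
            case Verdict.Block:
                // Reaction A: cut off the customer's access in real time.
                app.Response.StatusCode = 403;
                app.CompleteRequest();
                break;
            case Verdict.Honeypot:
                // Reaction B: re-route to an isolated look-alike system.
                app.Context.RewritePath("/honeypot" + app.Request.Path);
                break;
            default:
                // Throttling and notifications are left out of this sketch.
                break;
        }
    }

    public void Dispose() { }
}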

Any thoughts about this?

Announcing Groove Experiments shared space and work


Greg Reinacker and Sam Gentile would like to announce our creation of “Groove Experiments”. We have been working and Groovin’ to flesh out some ideas. This dynamic shared Groove space has three main purposes:

  1.  Groove and Blogs. There are certain synergies between Groove spaces and weblogs. Let’s enumerate and expand a bit on the possibilities. In particular, one great synergy, especially in the Web Services and .NET communities, is the initial posting of some technical idea or topic on a blog, and then wanting to get into a more “direct” and detailed discussion or interaction about it. Groove is excellent for that immediate phase of fleshing out and discussing the idea(s). So the first main area of discussion is around providing the ability to have detailed discussions about blog topics, and exploring the synergy further.
  2. Groove and Web Services – We believe that there are some great ideas to be explored in this area. Let’s get them on the table. Maybe have a Groove-space aggregator that lets you see new entries for multiple Groove spaces, so you can keep up with all the activity for topics on your blog. There are also plenty of areas of discussion on Web Services in general and where they should go.
  3. Groove Platform Applications – We believe that the value of Groove is not in the existing tools but in the underlying platform and the decentralized peer-to-peer communications mechanism, which can enable some really interesting applications. Let’s explore.

This is something Sam and I have been working on. I want to emphasize that neither of us is an employee of Groove Networks, and that this shared space is not speaking for Groove or sanctioned by Groove. We believe that there are some fine areas of interesting research to be done here. If you are interested in participating, please email Greg or Sam to be sent an invitation. Ideally, we would like to keep it to under 12 participants.