
UDDI – content itself, or directory?

[…] The problem is that we’ve placed UDDI at the wrong point in the protocol stack. It’s not the equivalent of a Google; it’s the equivalent of the Web. UDDI is the “place” where anyone can post structured references to anything. This is as opposed to unstructured information on the Web. As with the Web, there’s another layer of value-added services that will organize, rank and otherwise make sense of this data. It’s Google on the Web. It will be services like SalCentral for UDDI.  [Doug Kaye: Web Services Strategies]

Interesting idea, but I don’t think I agree.  The web services make up the content in this scenario; UDDI provides a directory of information about the services, including interfaces, category membership, service contract locations, etc.  UDDI should be able to provide the services to organize the data; we shouldn’t need a Google for UDDI.  Doug continues:

SalCentral has been testing a new in-house tool that crawls UDDI the same way Google crawls the Web. Mike says they’ve found ~250 valid web services in UDDI, or 42% of those that claim to be there. Many more cool things to come from Lucin and SalCentral. Stay tuned.

Why do we want separate applications to “wash” the data we get from the UDDI directory?  It seems to me that the UDDI hosts should define a mechanism by which content is periodically scanned, and “dead” information is purged.  This would probably require some changes, but I think it would be well worth it.  In my opinion, a directory full of crap is no directory at all.
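
For what it's worth, here's a rough sketch (in C#) of the kind of scrub pass I have in mind. The RegistryEntry type and the purge call are invented purely for illustration – they aren't part of any real UDDI API – but the idea is simply that the registry itself could periodically probe each entry's published access point and drop the ones that stop answering.

using System.Net;

// Hypothetical scrub pass a UDDI host could run on a schedule.
// RegistryEntry and PurgeEntry are invented for illustration only.
public class RegistryEntry
{
    public string AccessPointUrl;
    public int FailureCount;
}

public class RegistryScrubber
{
    public void ScrubDeadEntries(RegistryEntry[] entries)
    {
        foreach (RegistryEntry entry in entries)
        {
            try
            {
                // Probe the published access point (or WSDL location).
                WebRequest request = WebRequest.Create(entry.AccessPointUrl);
                request.Timeout = 10000;   // don't hang on dead hosts
                WebResponse response = request.GetResponse();
                response.Close();
                entry.FailureCount = 0;    // still alive
            }
            catch (WebException)
            {
                // Only purge after several consecutive failures, so a
                // transient outage doesn't get a live service dropped.
                entry.FailureCount++;
                if (entry.FailureCount >= 3)
                    PurgeEntry(entry);
            }
        }
    }

    private void PurgeEntry(RegistryEntry entry)
    {
        // Remove the entry from the registry's backing store.
    }
}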

WS-Security

Putting credentials in a SOAP header without an encrypted channel is a waste of time, but if you have an encrypted channel, you might as well use the channel’s authentication support. WS-Security only starts to make sense [much the same as SOAP] when you have intermediaries. [Simon Fell]

But part of the potential of WS-Security is to be able to have an authenticated (and possibly signed) transmission without having to encrypt the entire message.  Important, I think, for both the point-to-point case and with intermediaries.

REST vs. SOAP

REST vs. SOAP… Each of these battles has the sides well dug in, though the resistance from the non-adopters seems to be stronger than the push from the early adopters.  [Gordon Weakliem]


Careful… make sure that you count for the large contingent of people who are simply focused on getting their job done.  [Sam Ruby]


I first heard of REST when Gordon sent me a couple of links (here and here) about it. I wrote some thoughts down then, so I thought I’d post them here and get comments. “He” refers to the author of the linked article.

1. I’m not sure URI-addressability of services is really that important in real life. In the “old” web model, where documents are served up over HTTP and hyperlinked to other documents, sure – everything needs to be addressable. But in the “new” model, where services are available on the net, does it really matter for real-life applications whether I need to write a little SOAP code, or whether I code a URL into my XML document?

2. He downplays the protocol-independence thing by basically saying, “why would we ever need anything more than HTTP? Look how well it’s worked up until now!” Well, in the “new” web, there are a lot of things we need to be able to do that HTTP in its current form just isn’t good at, and web services really do need to be able to run over multiple protocols, in my opinion.

A good example of this is transactional services; real-time transaction coordination typically needs to have at least some support built into the protocol.

Another good example is delivery guarantees. If my message absolutely, positively needs to be there, HTTP isn’t making any promises for me. But another protocol such as MQ (is that a protocol? Well, close enough for my argument) can make these guarantees for me.

And one last good example is delayed or non-real-time messaging. If I need to get a message out, but I don’t necessarily need it to be delivered right this second, HTTP (as a connection-oriented protocol) isn’t really going to help me. However, a protocol such as SMTP or MQ could help me out here.

3. I didn’t see anything about complex input documents. If I’m sending a “plan a trip” request, I sure don’t want to have to build a query string with all the parameters in it; in fact, it might not even be possible to do so (see the sketch after point 4 below). Now, in fairness, they do mention a POST method for REST, so maybe it’s possible; however, all of the benefits they discuss for GET-based services no longer apply once you go to POST, and at that point there is essentially no difference between REST and SOAP.

4. The envelope-based nature of SOAP allows you to add infrastructure capabilities such as WS-Routing to your system, allowing a message to be routed through multiple points on the network, with each of them potentially operating on the message and passing it on. With a strict URI-based addressing mechanism such as described in these REST articles, such a capability becomes difficult if not impossible.
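
To put a concrete shape on the “plan a trip” example from point 3, here's a purely made-up request type in C#. None of these names come from a real API; the point is just that with a SOAP proxy the nested structure serializes straight into the request body, while flattening it into a GET query string means inventing your own encoding for the repeating legs and travelers.

using System;

// Illustrative only -- not from any real trip-planning service.
public class TripLeg
{
    public string FromAirport;
    public string ToAirport;
    public DateTime DepartOn;
}

public class TripRequest
{
    public TripLeg[] Legs;            // repeating, nested structure
    public string[] TravelerNames;
    public string SeatingPreference;
}

// With a SOAP proxy, the whole object graph just goes in the POST body:
//
//   Itinerary result = tripService.PlanTrip(request);
//
// With a GET-style interface you end up inventing an ad hoc flattening:
//
//   /planTrip?leg1From=DEN&leg1To=SEA&leg1Depart=2002-07-01&leg2From=...
//
// which gets ugly fast, and falls apart entirely once the structure is
// deep or the repetition is unbounded.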

Comments?

Will Web Services aid DoS attacks?

This widely published article basically claims that Web Services will be subject to DoS attacks, and claims “UDDI will help your attacker to find any services that have been published within the networked environment and then WSDL will provide the details required to make it work.” Some fairly serious comments. But do they have grounding?  [WebServices.org]


I think this isn’t nearly as “new” an issue as they make it sound. There are two classes of public web service publishers. First, there are those who have already been publishing their information on the internet and are moving to web services for easier programmatic interoperability; these companies already have to deal with DoS attacks on their exposed systems. The other class of service publishers are those who have been publishing their information in some other way (not on a public network) and are moving to web services and the internet. These companies simply have to learn about and deal with the same issues that everyone else on the web has been dealing with for years. A production web services security infrastructure should at least protect your internal business systems from such a DoS attack; as for the edge servers, the same issues apply as for a typical web server exposing business functions.


As for UDDI helping the attackers find the services, I think that’s kind of like publishing your number in the phone book. It might make it easier to find, but a determined attacker is going to find your service anyway.

Computer Science Education

I had an email exchange recently with a fairly popular CTO and author. His position is that the university computer science department should be a training ground for the current software trend. So the decision CS departments should be making now is dotNET or J2EE.

Of course I disagree. But I am more than apparently in the minority.  [Patrick Logan]

I’ll join that minority.  I sometimes say, only half joking, that the best training for a wannabe programmer is to go work on a construction site.  [Gordon Weakliem]

Hmm…that’s an interesting idea!  As far as the CS curriculum goes, however, I think every CS student should have to learn some flavor of assembly language, and should also have to learn C.  After that, sure, teach them J2EE or .NET or the next great thing; but if you don’t learn the lower level stuff, in my opinion, then you’re really never going to completely get it.

Web Services Security

Ok, I couldn’t stand it any more.  Why does every article I read on this subject lately advocate using custom SOAP headers to transmit user credentials for web services?  I just don’t get it.  I started writing a whole rant on it, but it got a bit long; so see my story here.

Comments?

Fire in Colorado – yikes

Fire forces 4,000 to evacuate [USA Today : Front Page]

That fire is here in Colorado. They’ve closed down a 50+ mile stretch of I-70, which is undoubtedly one of the most major east/west expressways to span the United States. The fire season here has already been bad, and a mega-drought just promises an entire summer of this.

[The .NET Guy]

Even after all weekend, this is still pretty bad.  Here’s a picture from my way home at about 7:00pm tonight.  It usually doesn’t get dark until about 9:00pm.

Web Services Security

Why does every article I read on this subject lately (here’s the most recent) advocate using custom SOAP headers to transmit user credentials for web services? I just don’t get it. With the current state of affairs with web service security, you have two choices: you can use transport authentication (i.e. HTTP authentication mechanisms), or you can roll your own. Until WS-Security or SAML becomes a widely adopted reality (read: supported by multiple toolkits), this is pretty much it. Let’s analyze this choice in a bit more detail.

HTTP Security

For public web services, we can use Basic, Digest, or a certificate-based scheme. The first thing we should get straight here is that you don’t necessarily have to let IIS authenticate against your Windows Active Directory for these mechanisms to work; you can implement the authentication mechanism yourself, and use any credential store you like. I’ve written this code both in an unmanaged ISAPI filter, and in managed .NET HTTP Module code; it’s certainly possible.

Basic is easy to implement, requires only one round trip, but carries with it the requirement to encrypt every call since the credentials are sent in essentially clear text (actually base-64 encoded). The big win here is that just about every web service client can work with Basic.
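
To make the “implement the authentication mechanism yourself” point concrete, here's roughly what a managed HTTP Module doing Basic looks like. This is a sketch, not production code; CheckCredentials stands in for whatever store (database, LDAP, etc.) you actually validate against, and “MyService” is just a placeholder realm.

using System;
using System.Text;
using System.Web;

// Sketch of a custom Basic-auth HttpModule with your own credential store.
public class BasicAuthModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.AuthenticateRequest += new EventHandler(OnAuthenticate);
    }

    private void OnAuthenticate(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        string header = app.Request.Headers["Authorization"];

        if (header == null || !header.StartsWith("Basic "))
        {
            Challenge(app);
            return;
        }

        // The credentials are just base64("user:password"), which is why
        // Basic really wants an encrypted channel underneath it.
        string decoded = Encoding.ASCII.GetString(
            Convert.FromBase64String(header.Substring(6)));
        int colon = decoded.IndexOf(':');
        if (colon < 0)
        {
            Challenge(app);
            return;
        }
        string user = decoded.Substring(0, colon);
        string password = decoded.Substring(colon + 1);

        if (!CheckCredentials(user, password))
            Challenge(app);
    }

    private void Challenge(HttpApplication app)
    {
        app.Response.StatusCode = 401;
        app.Response.AppendHeader("WWW-Authenticate", "Basic realm=\"MyService\"");
        app.CompleteRequest();
    }

    private bool CheckCredentials(string user, string password)
    {
        // Replace with a real lookup against your own credential store.
        return user.Length > 0 && password.Length > 0;
    }

    public void Dispose() { }
}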

Digest is a little tougher to implement, but can still be done in a single round trip after the initial authentication. Passwords are no longer sent clear text (they are used to generate a hash), so this mechanism does not necessarily require transport encryption. Not as many clients understand this, but the client side is pretty easy to implement. Interestingly, Microsoft chose Digest authentication for their MapPoint.NET service.
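
For the curious, the arithmetic behind the Digest response is straightforward; this is a sketch of the basic RFC 2617 computation (ignoring the qop/cnonce refinements). A nice side effect is that the server only ever needs HA1, not the clear-text password.

using System.Security.Cryptography;
using System.Text;

// Sketch of the basic RFC 2617 digest computation (no qop/cnonce handling).
public class DigestCalculator
{
    public static string ComputeResponse(string user, string realm, string password,
                                         string nonce, string method, string uri)
    {
        string ha1 = Md5Hex(user + ":" + realm + ":" + password);  // server can store this
        string ha2 = Md5Hex(method + ":" + uri);
        return Md5Hex(ha1 + ":" + nonce + ":" + ha2);              // sent on the wire
    }

    private static string Md5Hex(string input)
    {
        MD5 md5 = new MD5CryptoServiceProvider();
        byte[] hash = md5.ComputeHash(Encoding.ASCII.GetBytes(input));
        StringBuilder sb = new StringBuilder();
        foreach (byte b in hash)
            sb.Append(b.ToString("x2"));
        return sb.ToString();
    }
}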

The big win to using one of these methods is that there is a good chance that a client will understand how to authenticate with your web service without having to study your API and write special code to authenticate with your service. This is what standards are for, no?

Custom Security

I’ve seen 2 mechanisms for this proposed in many places, and I don’t like either one of them. In particular:

SOAP header carries credentials

Well, a good idea in principle. After all, this is essentially what WS-Security is doing (more or less). It allows you transport independence and it allows you to encrypt or hash credentials without necessarily encrypting your whole message. The problem is that no one understands your API without reading the documentation, and then they have to carefully construct your custom SOAP headers by hand. Why not use Digest authentication until WS-Security is implemented by a critical mass of clients? Simpler, well-documented, and several SOAP toolkits already implement it out of the box.
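
Just to show what “carefully construct your custom SOAP headers” means in practice, here's roughly what a credentials header looks like in ASMX terms. The AuthHeader shape here is invented for the example – which is really the point, since every service ends up inventing its own.

using System.Web.Services;
using System.Web.Services.Protocols;

// An invented credentials header -- every service defines its own shape.
public class AuthHeader : SoapHeader
{
    public string User;
    public string Password;
}

public class QuoteService : WebService
{
    public AuthHeader Credentials;

    [WebMethod]
    [SoapHeader("Credentials")]
    public decimal GetQuote(string symbol)
    {
        // Validate Credentials.User / Credentials.Password here,
        // on every single method that needs protection.
        return 0m;
    }
}

// And each client has to know to populate it by hand (the generated
// proxy typically exposes a field named something like AuthHeaderValue):
//
//   proxy.AuthHeaderValue = new AuthHeader();
//   proxy.AuthHeaderValue.User = "greg";
//   proxy.AuthHeaderValue.Password = "password";

Compare that with the one-line NetworkCredential assignment shown further down for the transport-level case.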

Login method

The idea here is that you call a login method on the web service with your credentials, and you’re passed back a token which you send with every call, perhaps in a custom SOAP header. Yuck. I can think of lots of reasons not to like this one: a) it’s not self-documenting; b) it requires a minimum of 2 round trips for any single web service call (3 if a logout method exists to clean up the session); c) it implies a sessioned model on services that in many cases map better to a sessionless model. And what really gets me is that this mechanism is usually justified with an argument like “this way you don’t have to authenticate every call” – well, in the real world, validating that a security session token is valid and extracting the information from it typically costs about as much as just re-authenticating. Unless your authentication is incredibly expensive, I just don’t buy this argument.

Thoughts?

[related: Basic authentication sample | Digest authentication sample]


Since writing the above, I got some interesting comments from Justin Rudd. Below are excerpts from his comments (indented in blue), and my observations.

HTTP Security – I agree with your statements in this section. I’ve implemented all 3 types as well, and I don’t want to ever implement them again :-) Especially since Microsoft now has the Internet Connector license which allows for unlimited CALs for internet users.

True, but the internet connector only gets you by for authenticating against Active Directory. If you want to authenticate against, say, another LDAP directory, or a database, then you’re back to writing the code.

Soap Headers – here is where we are going to disagree. I guess I’m not convinced by your argument that “no one understands your API without reading the documentation”. Granted it is yet another aspect to learn, but unless you have a different security mechanism for every API I don’t see the big deal.

Good point. However, from the client’s perspective, if I’m trying to glue together several web services, and they all use their own headers, I’m going to have to implement different code to authenticate with each of them. Different to the point of (in .NET):

(service1)
servProxy1.AuthHeader.User = "greg";
servProxy1.AuthHeader.Password = "password";

(service2)
servProxy2.CommonHeader.Credentials.Username = "greg";
servProxy2.CommonHeader.Credentials.Pass = "password";

Somewhat unfortunate to my eye.

You aren’t going to know I’m using HTTP Digest Authentication if you don’t look at the documentation. So I’m not sold on the documentation argument.

Well, actually I will. If I make a call to your service without any credentials, I’m going to see from the challenge that you support Digest. In that sense, it’s self-documenting. Not to mention my code can now look like:

servProxy3.Credentials = new NetworkCredential("greg", "password");

And I could have written this code without knowing anything about your authentication mechanisms, and if you’re using a transport mechanism that my toolkit understands, then I don’t have to worry about it.

Carefully crafting the SOAP headers is another false argument. It isn’t like people are going to have to work with streams directly to do this. Most of the major toolkits out there have very easy ways to deal with SOAP headers.

Agreed; I guess my point (not very carefully stated) was that the headers would all have to look different (see example above), and I must understand your service to craft the header.

The more complicated part is going to be what algorithm you chose. For example, RSA signatures are very common right now so they have a lot of support in Java, .NET, etc. But if I were to choose PGP signatures, that would cause more problems than crufting up some SOAP headers.

Login – I agree that it isn’t the best thing in the world, but you are describing how WS-Security and WS-License will work with Kerberos. You’ll logon somehow (MS Passport or Project Liberty), and then take that binary ticket and transport it with each request. Maybe you don’t like this aspect of WS-Security :-)

Interesting point; I hadn’t thought about that when I wrote this section. In this case, though, you can log in with some other external mechanism (Passport/Liberty), then pass along the ticket with your calls. I think this is subtly different from the web service itself exposing a “login” method, in that the “login” cost can be amortized across all of the web services you intend to communicate with.

I must confess I haven’t done a lot of research into looking at how we will authenticate web service calls with Passport/Liberty; so I might have more/different comments once I see that in more detail. ;-)

I still don’t agree with your self-documenting comment. But I described my reasons for that above. Your 2 round trips argument sounds to me like you logon and get a token before EVERY web service call. If that is the case, then yes this will be very, very wasteful.

Well, I was assuming there would be 1 login call, and multiple service calls using that one token. But here’s the thing – I think something like this makes the client side much more complex. A typical client will log in, save the session token (so he can save the potential round trips later), and call the service he wants to use. Sometime in the future, he’s going to make another service call, so he’s going to pass the token along again. The problem is, the token (by its nature, in a secure system) is going to time out eventually. So now the client must have logic to catch this case, call “login” again, and start working again. It sounds trivial, but it’s a pretty big hassle.

Compare this with the case where credentials are sent on every call. The client must still catch the access denied case, but at least he doesn’t have to retry at this point – he knows it’s just not going to work.
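
Here's roughly the client-side shape I'm describing, sketched in C#. MyServiceProxy, Login, GetData and the “token expired” fault text are all hypothetical stand-ins, but the catch / re-login / retry dance is the extra machinery every client of a login-method service ends up carrying.

using System.Web.Services.Protocols;

// Stand-in for a wsdl.exe-generated proxy; Login/GetData are invented here.
public class MyServiceProxy
{
    public string Login(string user, string password) { return "opaque-token"; }
    public string GetData(string token, string query) { return "result"; }
}

public class TokenClient
{
    private string token;
    private MyServiceProxy proxy = new MyServiceProxy();

    public string GetDataWithRetry(string query)
    {
        if (token == null)
            token = proxy.Login("greg", "password");    // extra round trip up front

        try
        {
            return proxy.GetData(token, query);
        }
        catch (SoapException ex)
        {
            // The token eventually times out between calls; every client has
            // to carry this catch / re-login / retry logic. (The fault text
            // check is invented for the sketch.)
            if (ex.Message.IndexOf("token expired") < 0)
                throw;
            token = proxy.Login("greg", "password");
            return proxy.GetData(token, query);
        }
    }
}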

I also agree with your sessioned model on the services argument. I’ve actually seen security systems where it was MORE expensive to validate the token than just to reauthenticate :-)

All in all I would love to see WS-Security being implemented in more of the major toolkits. But I also fear the day that this happens, because people will start black-boxing it. And I believe we’ll be back to the days of DCOM where, when everything works, it works great. But when something breaks…boy does it break :-)

I totally agree. And you couldn’t be more right about DCOM!