SAML Metadata Resolver and Provider Discussion Document

The info on the proposed v3 design below is out of date.  The two-component design was abandoned in favor of a single component based on the MetadataResolver interface.

This document is a mutable input to developer design discussions and should not be considered a final design.

Interfaces

v2 Java interface

v2 C++ interface

 

Chad's proposed new v3 design.  There are no impls or proofs of concept of these yet.

Notes on v2 Design

  • The fact that the v2 interfaces' getMetadata() returns an XMLObject makes it difficult to implement a dynamic provider
    • The data source is not known in advance
    • Dynamic provider represents many entities, not just a single document
    • Question: are these not resolvable by just having the dynamic provider create and manage a synthetic EntitiesDescriptor into which dynamically resolved EntityDescriptors are inserted (sketched just below)?  Caching and reloading individual entities might be tricky, but seems doable. Reloading (or loading the initial) individual EntityDescriptor means managing the in-memory document, possibly also with some per-entity "metadata" about the metadata, e.g. when to next reload in the background thread.
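
As a rough illustration of the synthetic-EntitiesDescriptor idea in the question above, the following Java sketch shows a manager that inserts or replaces dynamically resolved entities in a single in-memory EntitiesDescriptor. The class name, the pre-built wrapper instance, and the v3-era package names are assumptions for illustration only; per-entity reload scheduling and the "metadata about the metadata" bookkeeping are omitted.

    // Rough sketch only: the wrapping EntitiesDescriptor is assumed to have been
    // built already (e.g. via the XMLObject builder factory).
    import java.util.List;

    import org.opensaml.saml.saml2.metadata.EntitiesDescriptor;
    import org.opensaml.saml.saml2.metadata.EntityDescriptor;

    public class SyntheticEntitiesDescriptorManager {

        /** The synthetic in-memory document a dynamic provider would hand back from getMetadata(). */
        private final EntitiesDescriptor entitiesDescriptor;

        public SyntheticEntitiesDescriptorManager(final EntitiesDescriptor wrapper) {
            entitiesDescriptor = wrapper;
        }

        /** Insert a freshly resolved entity, replacing any previously cached copy. */
        public synchronized void addOrReplace(final EntityDescriptor entity) {
            final List<EntityDescriptor> children = entitiesDescriptor.getEntityDescriptors();
            EntityDescriptor existing = null;
            for (final EntityDescriptor e : children) {
                if (e.getEntityID().equals(entity.getEntityID())) {
                    existing = e;
                    break;
                }
            }
            if (existing != null) {
                children.remove(existing);
            }
            children.add(entity);
        }
    }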

Notes On Proposed v3 Design

  • MetadataProvider simply provides an iteration of EntityDescriptors.
    • It represents a concrete, specific metadata source that is known in advance and configured statically, e.g. file, URL, DOM
    • Its getMetadata() method takes no inputs/params; it just iterates over its source, returning a stream of EntityDescriptor instances (see the interface sketch after this list)
    • At first glance, this is the place to do many of the things our old MetadataProvider did, e.g. filtering, enforcing notions of validity, etc.  (Or is it?  Resolver might have to duplicate this sort of thing)
      • So impls (possibly defined by subinterfaces) would specify settings, composed components like a MetadataFilter, etc.
    • Possible impls
      • (Perhaps even the primary impl?) Base on the MetadataAggregator, or at least on common or shared code structured on the notion of a processing pipeline.
      • Base on a Resource, similar to the old Resource-based metadata provider
      • Base on a specific specialization (file, HTTP, etc.), like the old types
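
A minimal Java sketch of the MetadataProvider contract described above. The interface body and the v3-era package name are assumptions for illustration, not a finished API.

    import org.opensaml.saml.saml2.metadata.EntityDescriptor;

    /**
     * A concrete metadata source that is known in advance and configured
     * statically (file, URL, DOM, ...). Filtering, validity enforcement, etc.
     * would live in impls or subinterfaces, e.g. via an injected MetadataFilter.
     */
    public interface MetadataProvider {

        /** No inputs/params: iterate the source and return its entities. */
        Iterable<EntityDescriptor> getMetadata();
    }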

  • MetadataResolver is a Resolver specialization that resolves and returns EntityDescriptors based on an input CriteriaSet
    • One primary impl would take a collaborating/injected MetadataProvider and search against that.  In this sense it is primarily a filtering engine: it merely filters the provider's Iterable<EntityDescriptor> stream against the criteria indicated in the CriteriaSet (see the first sketch after this list)
    • Another primary impl would NOT take a MetadataProvider, but would resolve metadata dynamically based on the CriteriaSet alone
      • Example: attempt to resolve by assuming the entityId passed in the EntityIdCriterion is a URL endpoint at which the entity's metadata is published and fetch it
        • This impl would presumably cache the resolved metadata internally for efficiency, based on configured policy + metadata document properties (validUntil, cacheDuration); see the second sketch after this list
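
The first sketch below illustrates the provider-backed impl described above: a resolver that simply filters the injected provider's entity stream against the CriteriaSet. It reuses the MetadataProvider interface sketched earlier; the class name is hypothetical, and the CriteriaSet/EntityIdCriterion types shown are the v3-era java-support/OpenSAML ones, used here as an assumption.

    import java.util.ArrayList;
    import java.util.List;

    import net.shibboleth.utilities.java.support.resolver.CriteriaSet;
    import net.shibboleth.utilities.java.support.resolver.ResolverException;

    import org.opensaml.core.criterion.EntityIdCriterion;
    import org.opensaml.saml.saml2.metadata.EntityDescriptor;

    public class ProviderBackedMetadataResolver {

        /** The statically configured source being searched. */
        private final MetadataProvider provider;

        public ProviderBackedMetadataResolver(final MetadataProvider metadataProvider) {
            provider = metadataProvider;
        }

        /** Filter the provider's Iterable<EntityDescriptor> against the supplied criteria. */
        public Iterable<EntityDescriptor> resolve(final CriteriaSet criteria) throws ResolverException {
            final EntityIdCriterion idCriterion = criteria.get(EntityIdCriterion.class);
            final List<EntityDescriptor> matches = new ArrayList<>();
            for (final EntityDescriptor entity : provider.getMetadata()) {
                // Only the entityId criterion is handled here; a real impl would
                // evaluate the full CriteriaSet.
                if (idCriterion == null || idCriterion.getEntityId().equals(entity.getEntityID())) {
                    matches.add(entity);
                }
            }
            return matches;
        }
    }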
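
The second sketch illustrates the dynamic, non-provider impl and its internal cache: the entityId carried in the EntityIdCriterion is treated as a URL at which the metadata is published. The fetchAndUnmarshall helper is a hypothetical stand-in for real HTTP retrieval plus unmarshalling, and expiry based on validUntil/cacheDuration is omitted.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import net.shibboleth.utilities.java.support.resolver.CriteriaSet;
    import net.shibboleth.utilities.java.support.resolver.ResolverException;

    import org.opensaml.core.criterion.EntityIdCriterion;
    import org.opensaml.saml.saml2.metadata.EntityDescriptor;

    public class DynamicMetadataResolver {

        /** Cache keyed by entityId; expiry policy (validUntil, cacheDuration) not shown. */
        private final Map<String, EntityDescriptor> cache = new ConcurrentHashMap<>();

        public EntityDescriptor resolveSingle(final CriteriaSet criteria) throws ResolverException {
            final EntityIdCriterion idCriterion = criteria.get(EntityIdCriterion.class);
            if (idCriterion == null) {
                throw new ResolverException("An EntityIdCriterion is required for dynamic resolution");
            }
            // Check the cache first; fetch and cache on a miss, atomically per key.
            return cache.computeIfAbsent(idCriterion.getEntityId(), this::fetchAndUnmarshall);
        }

        /** Hypothetical placeholder: fetch the document published at the entityId URL and unmarshall it. */
        private EntityDescriptor fetchAndUnmarshall(final String entityId) {
            throw new UnsupportedOperationException("HTTP fetch + unmarshalling not shown in this sketch");
        }
    }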

 

Questions

  • Fundamentally
    1. Precisely what problem(s) was the new v3 2-part resolver-based design trying to solve?
    2. Does the proposed design actually solve them?
    3. What new problems does it introduce?


  • A dynamic resolver would presumably cache based on policy. MetadataProviders would also do caching based on policy (and re-fetch in the background).  Do we need a caching policy Strategy that is common to both?  Or is this an indication that the design is wrong?
  • Similar to the point above, there are many things that a provider would do (and that a provider-taking resolver would not), BUT that a non-provider-taking resolver (e.g. a dynamic fetching one) would have to do after it fetches, e.g. signature validation and validity checking. Basically, a dynamic fetching resolver is similar to a fetching provider, except that it doesn't have the metadata source configured in advance.  Is this an indication that the design is wrong? Or is there just another plugin abstraction that is common to, and injected into, specific impls of both types that need it?  Maybe both just take a MetadataFilter to implement processing of fetched metadata?
  • Would a remote-fetching dynamic resolver re-fetch in the background once it has resolved something and knows about it, or only on demand?
  • The old MetadataProvider could resolve RoleDescriptors with an input of entityId + role QName + protocol.  Where/what replaces this functionality?
  • Do we not expose or care about EntitiesDescriptors any longer?  MetadataAggregator (I think) flattens out everything to a collection of EntityDescriptors. At this fundamental interface level, do we want to bake in the assumption that EntitiesDescriptors would never be used or need to be seen?
    • Ian mentions the idea of tagging EntityDescriptors during processing so we can answer questions like "is entityId in entity group X".  We could probably reconstruct (in a lossy fashion) the EntitiesDescriptors based on such tags
  • (Significant)  Similar to the point above, with the proposed API we seem to lose the original structure of metadata.
    • The provider/resolver output seems to be potentially lossy (see the next sub-bullet) with respect to the original hierarchical structure. What if we need it?  Can we just assume it away at such a fundamental level?
      • Although: there's nothing that says that a MetadataProvider must destroy or lose the hierarchical structure of the original document.  The most obvious and simple provider impl would in fact probably preserve it internally, returning per the API a flattened Iterable<EntityDescriptor> based on the results of a tree traversal (probably cached).  However, there isn't currently anything that guarantees this behavior. Preservation of the structure was implicit in the v2 code.
    • Metadata PKIX validation info resolution depends on walking up the hierarchical tree looking for shib:KeyAuthority extensions.  The effective set of PKIXValidationInfo for an entity is scoped by what is in the ancestor nodes in the tree.  Can we even get at the shib:KeyAuthority extensions via this API?
      • (Answer: yes, if the provider doesn't actually molest the original metadata tree.  Right now it just walks up from the resolved EntityDescriptor, and can still do that as long as the reference to the parent EntitiesDescriptor is preserved; see the sketch after this list.)
    • In general, what happens to EntitiesDescriptor extensions?  Is there any way to get at them?  What does MetadataAggregator do?  As one potential solution: is it equivalent (and realistic from a time and/or space resource perspective) to just copy an EntitiesDescriptor's Extensions to all its descendant EntityDescriptors as part of the "flattening out" process?
    • Does this problem go away (in an acceptable way) if we just: 1) change MetadataProvider's getMetadata() to return XMLObject like it used to, and 2) allow for other Resolver specializations that return EntitiesDescriptors as the output type (and possibly RoleDescriptors, per the previous point)?  Would doing this negate the whole point of the new design?
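
As a sketch of the tree walk mentioned in the shib:KeyAuthority sub-bullet above, the following collects KeyAuthority extension elements from the resolved EntityDescriptor and each ancestor EntitiesDescriptor, relying on the parent references being preserved. The class name and the spelled-out QName constant are assumptions for illustration, and the packages shown are the v3-era ones.

    import java.util.ArrayList;
    import java.util.List;

    import javax.xml.namespace.QName;

    import org.opensaml.core.xml.XMLObject;
    import org.opensaml.saml.saml2.metadata.EntitiesDescriptor;
    import org.opensaml.saml.saml2.metadata.EntityDescriptor;
    import org.opensaml.saml.saml2.metadata.Extensions;

    public final class KeyAuthorityWalker {

        /** shibmd:KeyAuthority element name, spelled out here rather than taken from a constants class. */
        private static final QName KEY_AUTHORITY_NAME =
                new QName("urn:mace:shibboleth:metadata:1.0", "KeyAuthority", "shibmd");

        /** Walk up from the entity, gathering the KeyAuthority extension elements in scope. */
        public static List<XMLObject> collectKeyAuthorities(final EntityDescriptor entity) {
            final List<XMLObject> authorities = new ArrayList<>();
            addFromExtensions(entity.getExtensions(), authorities);

            XMLObject current = entity.getParent();
            while (current instanceof EntitiesDescriptor) {
                addFromExtensions(((EntitiesDescriptor) current).getExtensions(), authorities);
                current = current.getParent();
            }
            return authorities;
        }

        private static void addFromExtensions(final Extensions extensions, final List<XMLObject> target) {
            if (extensions != null) {
                target.addAll(extensions.getUnknownXMLObjects(KEY_AUTHORITY_NAME));
            }
        }
    }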

Related Issues

Credential and PKIX info caching

  • One major issue with the v2 metadata approach has been the way caching of resolved info is handled by the MetadataCredentialResolver and PKIXValidationInformationResolver.
  • They cache resolved data internally.
  • An Observable/Observer pattern is used with the wrapped metadata provider to know when to clear the cache
    • This is complex and hard to understand and debug.
    • Significantly: when using a chaining metadata provider, a reload event on any of the child providers fires the event to the resolver, which dumps its whole cache, not just the entries for the provider that was reloaded.  This is very inefficient.
  • Possible Solutions
    • Chad has argued in the past for eliminating the caching in the resolvers entirely and having the XMLObjects which represent the various crypto objects (public keys, certs, CRLs) decode and cache the corresponding Java crypto object during unmarshalling.
    • Brent (and maybe Scott?) doesn't like this because it seems to couple the unmarshalling with something it shouldn't be doing, i.e. it is not a good separation of concerns.  It also negates the plugin approach to KeyInfo processing we have.
      • How would an unmarshaller handle a decode failure (malformed cert, non-understood key type, etc)?
    • Brent's counterproposal: a middle ground which seems to accomplish the same thing without violating separation of concerns (as much) would be to have the resolvers perform the decode/resolution into Java crypto and our domain objects (e.g. Credential and PKIXValidationInfo) as they do today, as needed (i.e. effectively only on the first resolve after a reload), BUT then cache them directly on the relevant corresponding instance in the metadata tree.
      • ds:KeyInfo interface gets a new Collection<Credential> property (and probably also a Collection<String> or Set<String> for the trusted names for PKIX)
      • shib:KeyAuthority gets a Collection<PKIXValidationInformation>.
      • Possibly use specialized sub-interfaces for these, with checking and casting down in the resolver, to avoid polluting the main interfaces. Our impls would of course support them.
        • The main concern is ds:KeyInfo; shib:KeyAuthority is ours, so there is less concern about its interface, but it might be nice to make the approach symmetrical.
      • This localizes the cached object on the source object that it represents, so there's no explicit cache maintenance.  The resolver just always checks to see if the cached data is there: if it is, use it; if not, resolve and cache it, with appropriate locking/synchronization of course on the test-construct-then-set operation (see the sketch at the end of this list).
      • Has the virtue of simplicity, albeit at the expense of (possibly) polluting a couple of XMLObject interfaces with stuff unrelated to the XML data model they represent (which rubs Brent slightly the wrong way, but he can probably survive it...)
      • This is also really easy to implement, probably just a couple of hours of work.
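
Below is a rough sketch of the test-construct-then-set step described in the counterproposal, using a hypothetical CachingKeyInfo sub-interface that carries the proposed Collection<Credential> property. The sub-interface, its accessor names, and the resolveCredentials placeholder are assumptions for illustration; only the per-instance caching and synchronization pattern is the point.

    import java.util.Collection;

    import org.opensaml.security.credential.Credential;
    import org.opensaml.xmlsec.signature.KeyInfo;

    public class KeyInfoCredentialCache {

        /** Hypothetical sub-interface adding the proposed cached-credentials property. */
        public interface CachingKeyInfo extends KeyInfo {
            Collection<Credential> getCachedCredentials();
            void setCachedCredentials(Collection<Credential> credentials);
        }

        /** Return cached credentials if present; otherwise resolve, cache, and return them. */
        public Collection<Credential> resolveWithCache(final CachingKeyInfo keyInfo) {
            // Synchronize the test-construct-then-set on the KeyInfo instance itself so two
            // threads don't race to resolve and overwrite one another after a reload.
            synchronized (keyInfo) {
                Collection<Credential> cached = keyInfo.getCachedCredentials();
                if (cached == null) {
                    cached = resolveCredentials(keyInfo);
                    keyInfo.setCachedCredentials(cached);
                }
                return cached;
            }
        }

        /** Placeholder for the real KeyInfo-to-Credential resolution done by the resolver today. */
        protected Collection<Credential> resolveCredentials(final KeyInfo keyInfo) {
            throw new UnsupportedOperationException("KeyInfo processing not shown in this sketch");
        }
    }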