The W3C Credentials Community Group

Verifiable Claims and Digital Verification


Credentials CG Telecon

Minutes for 2015-01-20

Mark Leuba is scribing.
Manu Sporny: Any updates/changes to the Agenda before we get started?
No changes.

Topic: Bedrock / P3 Release

Manu Sporny: Quick update on Bedrock, P3, Web Payments, etc. (see agenda). Anything else we should discuss? None.
Manu Sporny: Here's the link to the announcement:
Manu Sporny: Announcement re: release of Payment processor
Manu Sporny: Digital Bazaar public release of the "Bedrock" project - a commercial organization that has committed to the standards from this group. It's better if the implementation is public so other companies can see the implementations. Source code is on GitHub; hoping to drive it through the standards process.
Manu Sporny: Questions?

Topic: Credentials / Web Payments IG Agenda

Manu Sporny: Next topic: the Credentials item on the Web Payments IG agenda. I think we have the subject on the IG agenda; we will go through all the documents we have created. There is a non-trivial number of people working on it, and we expect a Credentials Working Group. At Rabobank in the Netherlands, many interesting large companies are involved - Bloomberg, NACS, Rabobank, the US Federal Reserve, etc. - a number of good people. Questions?

Topic: Signatures Update

Manu Sporny: Next is signatures update - Dave Longley pls take us through.
Dave Longley: We have drafted prose to describe normalization; it will be broken into smaller steps for implementation.
Dave Longley: What is left is clarifying minor sub-algorithms for implementors. The spec is coming along.
Manu Sporny: Here's the one from 2011 [scribe assist by Manu Sporny]
Manu Sporny: Looking at table of contents [scribe assist by Manu Sporny]
Manu Sporny: I have a question re: the "old spec" - there were a lot of sub-algorithms. For example, in the 2011 spec (showing the table of contents) there are a lot of sub-algorithms. Has there been a big simplification of the algorithms?
Dave Longley: Yes, it has been simplified. All implementations are running the new algorithm, which is much simpler. It involves ways of using hashes; most of what's needed is in the algorithm described.
Dave Longley: What will pop out is an abstract notation - nodes without labels. This spec should describe how to output a string, exactly the same string, every time.
elf Pavlik: BTW we use it comparing graphs in ActivityStreams 2.0 examples :) for now JSON-LD and Turtle + RDFa coming soon!
Dave Longley: This is the canonical, normalized dataset that will pop out through this spec.
Manu Sporny: Nate, the hashing process is a part of this spec.
Dave Longley: Other features, which can be consumed to create a hash, are described in another spec. This spec ensures a canonical dataset output.
Nate Otto: That is what I was asking, thanks.
Dave Longley: The spec uses hashing internally; the output can use whatever hash you like.
Manu Sporny: Summarizing - take some RDF input (N-Quads, JSON-LD, ...) and create a canonical representation.
Dave Longley: It will be an abstract dataset, fully canonicalized. It can also be used to get a string out in N-Quads.
Manu Sporny: An abstract dataset does not have a concrete form yet - compare the abstract concept of the color red.
Dave Longley: Something to talk about, not yet written down.
Nate Otto: The purpose of making a canonical string representation is clear to me.
Mark Leuba: It would be good to have an example of how all of this fits together. [scribe assist by Manu Sporny]
Manu Sporny: Does that make sense?
Mark Leuba: Having a brief explanatory document would help.
Manu Sporny: We have a video that can be released:
Nate Otto: Is the normalized representation also intended to be a translation step between different linked data syntaxes?
elf Pavlik: Having examples will be helpful for testing.
Dave Longley: Testing the same graph with different syntaxes?
elf Pavlik: Yes
Manu Sporny: I can't see how the Social Web group will create something long-term without graph normalization and signatures.
Manu Sporny: Where has the digital signatures / trusted messaging discussion gone, elf?
elf Pavlik: We are not there yet. (Described group activities.) There is no strong commitment to the RDF model, but it makes sense. Still needs clarification.
Manu Sporny: The data model is confusing re: RDF.
elf Pavlik: I have same impression
Nate Otto: Is normalized representation a translation step or would people more directly translate?
Dave Longley: I would not expect normalization to translate into another syntax. Unnecessary. Only reason to use normalization is to canonicalize. Otherwise skip it.
Dave Longley: Canonicalize to compare two things.
Nate Otto: The purpose of normalization is generally comparison (are these two things the same?)
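[Scribe note: the comparison use case described above can be sketched with a toy example. This is a simplification for illustration only, not the actual normalization algorithm from the spec, which must also handle deterministic blank node labeling.]

```python
# Toy sketch of why graph canonicalization enables comparison.
# Idea: serialize each statement the same way, then sort, so the
# same graph always yields byte-identical output regardless of
# the order (or syntax) it arrived in.

def canonicalize(triples):
    """Render (subject, predicate, object) triples as a sorted,
    N-Triples-like string."""
    lines = sorted(f'<{s}> <{p}> "{o}" .' for s, p, o in triples)
    return "\n".join(lines)

# The same graph, expressed in two different orderings:
graph_a = [
    ("ex:alice", "ex:knows", "Bob"),
    ("ex:alice", "ex:name", "Alice"),
]
graph_b = [
    ("ex:alice", "ex:name", "Alice"),
    ("ex:alice", "ex:knows", "Bob"),
]

# Canonical forms are identical, so hashes/signatures over them
# would match too.
assert canonicalize(graph_a) == canonicalize(graph_b)
```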

Topic: Badge Alliance Vocabulary/Context Update

Nate Otto: Hashing, signatures, etc.
Manu Sporny: Context discussion next. Nate, can you give us an update?
Nate Otto: (Provided link to document.) Thanks, Manu, for the read-through and responses. Highlighting: the Credentials Vocabulary has mixed purposes, as it was built for Open Badges. It's good to have a Credentials Vocabulary, but where should it live?
Nate Otto: Should it be in open badges vocabulary context file?
Nate Otto: One main purpose of this spec was to not break existing implementations
Manu Sporny: Mindframe two, is "where do we want to go in 5-10 years if we only have one shot of getting things right?" [scribe assist by Nate Otto]
Manu Sporny: Apologies for sounding negative. First, OBI has a lot of badges and needs backward compatibility - priority one is don't break implementations. This spec serves that purpose well. Second, where do we go in the next 5-10 years if we have one shot to get it right? Scalable, solid security, reasonable to implement, etc. That's the approach I used when reviewing the spec; I know that's unfair to the existing work.
Manu Sporny: How do we achieve both goals?
Nate Otto: No offense taken, of course. I agree the document aligns with the goal of backward compatibility, but the discussions were all looking forward to the long-term goals.
Dave Longley: Comment re: the Credentials Vocabulary - I agree. It should be minimal and use OBI as a base. The badges context could include it; one context in a badge would be dependent on the badges context.
Dave Longley: Also, simple terms in security vocabulary that may get included in credentials vocabulary.
Dave Longley: I think a lot of credentials can be done with a few terms already in the security context in Web Payments. Looking there may be helpful. It should be a tiny document; people can layer other stuff they bring on top.
Nate Otto: Is App ID the term you used there, elf?
Manu Sporny: At-id - @id - the JSON-LD thing, Nate
Manu Sporny: Not App ID :)
Dave Longley: "@id" == "at id"
elf Pavlik: Question: how do you work with @id? Did you take a different approach? You cannot use the natural way, where @id is the identifier; the URL is the ID of the document.
Nate Otto: Legacy open badges standard was not developed for JSON-LD.
elf Pavlik: Described the issues related to @id being absent.
Dave Longley: @id aliasing may solve this.
Nate Otto: One of the very final changes remaining to the Open Badges spec to finalize the 1.1 version is to add @id as a property to be used in all hosted badge objects (Assertions, Badge Classes, Issuer Orgs)
Nate Otto: Currently in the assertion it's in verify.url
Manu Sporny: I understand. There are several approaches in the Linked Data world to specify the ID. JSON-LD uses @id, which gives the object an identifier. Open Badges decided not to use @id but a blank node identifier - the thing has no identifier. Not a very cohesive Linked Data story, but it is easy for some developers to understand.
elf Pavlik: "Url": "@id"
Dave Longley: ^JSON-LD aliasing would map "url" to "@id"
Manu Sporny: Some developers prefer that. There's @id in JSON-LD, "url" in Open Badges, and other terms in other orgs - three different ways to specify a URL for the thing. If we use the JSON-LD aliasing feature, we can get the specs on the same level. The problem is that the group decided not to, while JSON-LD decided to do that and map the two worlds. Which path will the groups take?
Nate Otto: We are including @id for internally hosted properties; it will be required for OB 1.1. We need to include url in the verify method. As we talk about the 5-10 year plan, the method for validation is up for review.
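[Scribe note: the aliasing discussed above would look something like the following. This is a hypothetical snippet for illustration - the identifier value and the eventual OBI context contents are assumptions, not the published file.]

```json
{
  "@context": {
    "url": "@id"
  },
  "url": "https://example.org/assertions/123"
}
```

With such a context, a JSON-LD processor treats the value of "url" as the node's @id, so legacy badge documents keyed on "url" and JSON-LD-native documents keyed on "@id" identify the node the same way.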
Manu Sporny: Questions? <none>
Kerri Lemoie: Did we come to a decision as to whether we should have a separate credentials vocabulary doc?
Manu Sporny: Back to review of Nate's vocabulary document.
Nate Otto: I will respond in writing. Anything specific? <Kerri's question>
Manu Sporny: It's always easier to have one JSON-LD context, with 5-6 vocabulary documents pulled together. My preference is one JSON-LD context that pulls in the various OBI vocabularies, identity credentials, security vocab, etc. That's the best implementation path. Nate?
Nate Otto: I agree one JSON-LD context is the way to go.
Nate Otto: As the context changes from version 1.1 to 1.2, the validation will change but the vocabulary will stay constant. An OBI context file design is under consideration.
Manu Sporny: We have considered this, and we will probably do a Context v1 and Context v2, to force us not to break things.
Manu Sporny: Breaking changes (v1 - v2) will become painful. We are wondering if using the context for versioning is OK; we think yes for major versions but not for minor versions.
Manu Sporny: Just a point of info if it is helpful.
Nate Otto: Thanks for the history on that.
Nate Otto: Any other feedback on whether/how to separate credentials vocabulary from OBI vocab?
Dave Longley: To clarify, the consensus is multiple different vocabularies, with a single context that pulls from those vocabularies.
Manu Sporny: +1 That
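[Scribe note: a sketch of what "one context pulling from multiple vocabularies" could look like. All IRIs and term names here are illustrative placeholders, not the real published vocabularies.]

```json
{
  "@context": {
    "obi": "https://example.org/openbadges#",
    "cred": "https://example.org/credentials#",
    "sec": "https://example.org/security#",

    "BadgeClass": "obi:BadgeClass",
    "claim": "cred:claim",
    "signature": "sec:signature"
  }
}
```

Each term resolves into its own vocabulary, but implementers only ever reference the one context document; publishing it at a versioned URL would match the major-version strategy discussed above.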
Nate Otto: ToDo: look for items in this OBI vocab draft for the very most general vocab items to generalize into a credentials vocabulary
Kerri Lemoie: +1 To migration path & legacy.
Manu Sporny: How do we deal with changes least disruptively, so all "legacy" data has a migration path without changes?
Dave Longley: The fallback, if we can't meet that, is to have a clear set of translation API calls.
Manu Sporny: Good idea.
Kerri Lemoie: That sounds reasonable
Nate Otto: +1 Coherent
Manu Sporny: Looking at 5-10 years, split the vocabulary from the document.
elf Pavlik: +1
Manu Sporny: Kerri, Nate, and elf said it sounds coherent.

Topic: Roadmap

Mark Leuba: The Google doc has been moved to public ownership - [scribe assist by Manu Sporny]
Mark Leuba: If everyone has access to the credentials spec, let me know - you should. [scribe assist by Manu Sporny]
Manu Sporny: Only people with the link can get access, it won't show up in a public search. [scribe assist by Dave Longley]
Mark Leuba: If you need access, let me know. [scribe assist by Manu Sporny]
Mark Leuba: We've incorporated most of the feedback that has been received - welcome the opportunity to review. [scribe assist by Manu Sporny]
Nate Otto: As part of this work factoring out a general credentials vocabulary from the OBI implementation, I'll contribute the credentials vocab as a draft to the Google Doc that Mark linked.
Mark Leuba: It's open to review at this stage. [scribe assist by Manu Sporny]
Mark Leuba: One large area - timeline that we want to associate with major milestones - there is a structure there, can people suggest targets for that? [scribe assist by Manu Sporny]
Mark Leuba: Developing the spec, completing activities, resolving open issues, etc. [scribe assist by Manu Sporny]
Mark Leuba: The last open area is one that Nate and Sunny are going to help with - reliant on the OBI work that Nate and Sunny are working on now - holding that in advance until previous work is done. That work is imminent, Nate? [scribe assist by Manu Sporny]
Nate Otto: Yes, the outcome of this call is to fix that - we have some direction now, I'll be doing that over the next few weeks. [scribe assist by Manu Sporny]
Mark Leuba: Any comments or questions? [scribe assist by Manu Sporny]
Dave Longley: See "credential" property in there for an example of a really basic credential
Dave Longley: The claim specifics would be from some domain-specific vocabulary
Nate Otto: The idea for the future being that a legacy badge could be a type of claim used in an identity credential?
Manu Sporny: I think we want the credentials vocabulary to be a separate document that is referenced by the roadmap. [scribe assist by Dave Longley]
Nate Otto: (In any case, feel free to reach out to me off this call so we don't pollute the agenda/chat with a deep side channel dive)
Manu Sporny: At some point we should point off to the use cases document and follow the same basic approach for use cases, something we may end up calling the "Credential Agent", and the vocabulary. "If you want to learn more about the vocabulary, go here" etc. [scribe assist by Dave Longley]
Manu Sporny: The roadmap document should be really high-level and link to more detailed documents. [scribe assist by Dave Longley]
Manu Sporny: We may want a goals section, but all details should be in the other specifications. [scribe assist by Dave Longley]
Manu Sporny: Thoughts on that approach, high-level roadmap + details in other docs? [scribe assist by Dave Longley]
elf Pavlik: +1
Dave Longley: +1
elf Pavlik: I quite like or
Manu Sporny: It took us a while to come to that approach in the Web Payments group and it seems like a good idea and we can save time here. [scribe assist by Dave Longley]
Mark Leuba: I suggest we think about it. I've got placeholders in there for comments. [scribe assist by Dave Longley]
Manu Sporny: Anything else we feel is important to discuss before next week? [scribe assist by Dave Longley]
Dave Longley: Nothing
Sunny Lee: Thanks everyone
Nate Otto: Great, thanks all!