Lightweight DITA/DITA 2.0 discussions from early 2019

E-mail threads

  1. Re: [dita] Summary: Status of review of the DITA2.0/LwDITA intersection topics

  2. Interoperability of DITA and LwDITA

  3. Misalignment example: LwDITA and DITA shortdesc topics

  4. Lightweight DITA spec: development strategy

  5. Supporting LwDITA Implementors Quickly

  1. Re: [dita] Groups - DITA TC Meeting Minutes 5 February 2019 uploaded

  2. Reworked element reference topics

  3. Proposed review of DITA 2.0 elements to LwDITA components

  4. Reworked intersection topics; "Rendering expectations" and appendix topic for "Formatting expectations"

DITA TC minutes

29 January 2019:

10. Need for alignment between DITA 2.0 and LwDITA specifications
- Kris; should we defer this till 2.0 spec and LwD spec editors have discussed this?
- Carlos; there should be a set of regular calls between 2.0 and LwD spec editors.
- Kris; yes. Where the TC comes into it is that the TC needs to set gen'l guidelines, e.g., what material needs to be the same? Just shortdescs? Or do we need to make the same statements about what is normative, with the only differences being examples and LwD's info about syntax for MDITA and HDITA? The LwD editors don't want to have an XML-first focus...
- Carlos; that sounds fair. we need to look at it more. you provided a draft of something that might work, though we have to look at it carefully. 
- Kris; examples are non-normative, so there's no reason to share examples. I'd like to hear from implementors; processing expectations have to be the same for both or it won't be interoperable.
- Robert; I think you can have more expectations from 2.0 than from LwD, but where they line up, they should be the same; e.g. if you have to process a particular element a specific way in LwD, it has to be done the same way as 2.0; otherwise they're not interoperable.
- Tom; we're working on adding LwD to next XMetal; but I'm not sure how it's being done; seems intuitive that we should be able to share as much processing as possible, but I don't know how much we've actually been able to do that.
- Kris; and I'm not saying we've got to single-source everything, but there are great advantages, and it helps make it more likely that what's in LwD will be in agreement with 2.0, as a true subset. But I'd like to hear from other TC members about this area.
- Tom; can we hope that the LwD spec itself will be small and manageable enough that we can expect everything in the LwD spec to be compatible? That is, is LwD small enough that this task is manageable?
- Alan; I'm hopeful; LwD has about 40 elements; we didn't take stuff from DITA that is rocket science; LwD is mostly basic elements. I have a maybe naive sense that it will be reasonable to make this work.
- Carlos; I agree; I think we can get it done.
***ActionItem: Kris will set up at least a monthly recurring call between 2.0 and LwD spec editors.

05 February 2019:

- Tom; Kris, can you give a quick overview?
- Alan; we think the most critical part is #12, schedule strategy.
- Nancy; I think the most critical is interoperability between LwD and full DITA.
- Kris; wrt #9, I had done a bunch of cleanup on the spec based on the web review of the LwD CN in October, of elements that will be in both DITA and LwD. Alan, in his response, brought up the difference between formatting and processing expectations, i.e., how they overlap, and mentioned particularly how the ph element in DITA and LwD are different, and the issues with single-sourcing the topics.
wrt #10, I changed the topic of the mail to interoperability, seeing misalignment based on current drafts of LwD descriptions. 
- Alan; that's an accurate summary for those 2 items.
- Alan; wrt #12, the big picture issue is that we're seeing more interest in developing LwD implementations, e.g., from Adobe and IBM, so we want to get it out asap. We're seeing conflicting goals and constraints between our sense of when LwD needs to be available, and when DITA 2.0 is expected to be available. OTOH, LwD must be compatible with DITA, and 2.0 is on a different time frame from what we'd like. So we're looking for the best way to get it out for public review in 2019.
- Kris; before we can talk about fast-tracking, we need to see an actual LwD spec. The reality is that even once the TC approves a final version of something, it takes 6 months for OASIS to release it. That period is extended if any changes need to be made to the spec.
- Michael; maybe 'fast-track' is the wrong word; we're not asking for fast tracking, but the directive to single-source from 2.0 spec is 'slow-tracking' from our POV. If we take on the task of editing the full element info in every full DITA element so it's no longer XML-specific, and add stuff, that job is huge. The DITA topics really aren't appropriate for what LwD is trying to do, and who it's trying to appeal to. We can't take on that task.
- Kris; this is back to item #10, interoperability. We've discussed what's needed to be consistent between 2.0 and LwD topics, wrt what info other than shortdesc needs to be the same. Specifically, topics describing common elements need to give the elements the same meaning, usage information, and processing/formatting expectations.
- Michael; reuse by conref would be one way to get there, but formatting is currently phrased as full DITA, assuming XML source. Our users don't want to see that.
- Kris; I want to hear from other TC members about interoperability and alignment between the two specs. 
- Eliot; I agree that whatever language is used in LwD has to be identical to regular DITA; i.e., there can't be anything that appears to be a semantic, normative difference.
- Alan; but you would need to qualify that modulo subsetting.
- Eliot; shouldn't change that.
- Michael; so there shouldn't be processing expectations in LwD elements for @s that are constrained out of those elements in LwD.
- Eliot; processing expectations aren't normative, nor are formatting.
- Michael; so processing expectations can be changed by subsetting, if @s in LwD are omitted, and processing expectations mention those @s.
- Kris; for element topics common to both, we don't have processing expectations written that way.
- Tom; not clear what you mean by that.
- Kris; in 2.0, processing expectations are written so they stand alone in standalone block, not cast in the way that Michael was concerned about. 
- Michael; yes, it could be handled with an exclusion, but it is a difference.
- Tom; it's obvious there will be differences related to omissions. 
- Eliot; LwD can't change processing expectations; if it's irrelevant, that's different from saying it will be different.
- Michael; I agree that's normatively the same. 
- Eliot; for any @ LwD omits, its default behaviour will still apply.
- Michael; e.g., if we added shortdesc to map, but don't have the copy-to @, it should be as if the copy-to @ were not specified?
- Kris; but that is work spec editors are happy to do.
- Michael; so can we supply our differences and you'll incorporate them into 2.0?
- Kris; no. The LwD editors should be working with what's already there and modifying it. It's inappropriate for DITA to match LwD; the LwD editors can modify the topics to adjust these.
- Michael; so the term 'element' is a challenge; Markdown users don't like it, because they don't like XML and it's identified with XML. Are you comfortable removing 'element'?
- Kris; No, I don't think so. DITA and its spec are about an XML vocabulary. 
- Michael; so should we have 3 sets of processing expectations so we can distinguish XDITA, MDITA, and HDITA?
- Eliot; not necessarily, but a syntactic representation; defining a mapping from MDITA to its XML equivalent.
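As a rough sketch of the MDITA-to-XML mapping Eliot describes (illustrative only; the topic content, the generated id, and the shortdesc treatment are assumptions, not normative LwDITA syntax):

```xml
<!-- MDITA source (Markdown):

       # Network setup

       Configure the device before first use.

       Connect the power cable.

     A plausible XDITA equivalent, if a processor treats the first
     paragraph after the title as the short description: -->
<topic id="network-setup">
  <title>Network setup</title>
  <shortdesc>Configure the device before first use.</shortdesc>
  <body>
    <p>Connect the power cable.</p>
  </body>
</topic>
```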
- Michael; a Markdown guy won't understand 'element.'
- Nancy; I disagree; a Markdown user won't understand it, but a Markdown implementor will; they may not prefer it, but will almost certainly understand it.
- Michael; a lot of Markdown folks dislike 'elements'; they don't like XML.
- Tom; what is their language? What do they use?
- Carlos; we're going for component
- Tom; is that what they use when they're designing things?
- Michael; a lot of times they avoid using anything; they talk about blocks, or inlines, or Markdown codes; they have no equivalent to an element. But we saw 'component' in some of their discussions, so it seemed like something they would be comfortable with. 
- Kris; the term 'component' was added while LwD was getting responses during its public review, trying to figure out a non-XML-centric term. I'm hearing a clear consensus that LwD must be a compatible subset of either 2.0 or 1.3 + the multimedia domain. Any objections?
- Tom; is there any risk if it's both?
- Carlos; my concern is that the alignment won't work. If we align with the current state of 2.0, and it gets changed down the line, e.g., during its public review, would LwD have to change too?
- Kris; yes, it will, that's part of the standards process.
- Alan; maybe the only thing to do is declare that it's a subset of 1.3+; we can't say it's 2.0. Either that, or we can't put it out as a spec till later. Instead, LwD goes out with some stuff, as a preview but not as a spec, and it exposes some parts of DITA 2.0.
- Kris; I see 2 possibilities for that: 1. align with 1.3 (with all its 'XML-ness'), or
- Alan; that won't work.
- Kris; otherwise, LwD meets 2.0 where it currently is, and gets re-aligned with 2.0 after 2.0 is released. That takes us back to the original question; if it has to be semantically identical, we need to tackle the thorny question of how we do that, and the best way to do that is single-sourcing. We can take advantage of conref, conref push, and conref replace, but I don't see a way around that method.
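The single-sourcing mechanism Kris mentions can be sketched with a plain conref; the file names, ids, and wording here are hypothetical, not actual spec source:

```xml
<!-- In the DITA 2.0 spec source: the topic that owns the shared wording -->
<topic id="p-element">
  <title>p</title>
  <body>
    <p id="p-usage">A paragraph is a block of text containing a single
    main idea.</p>
  </body>
</topic>

<!-- In the LwD spec source: the same wording is pulled in by conref,
     so the two specs cannot drift apart -->
<topic id="p-component">
  <title>p (paragraph)</title>
  <body>
    <p conref="p-element.dita#p-element/p-usage"/>
  </body>
</topic>
```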
- Tom; I don't know if it's a problem, but we would have to lock down the topics created for 2.0 with respect to LwD, in order to accommodate both LwD and 2.0. So in the future, when putting out 2.0, we'd have to not change those topics.
- Michael; Alan, rather than them being locked down, if you change something that's a common component, you just need to be aware of it and track it so they can be re-aligned later. 
- Kris; I can't see us removing XML-centric language from the spec.
- Michael; so my plan A is that we have parallel sentences, so we don't change the logic in rewording, only the processing context.
- Kris; when I look at the draft LwD spec, there are extreme discrepancies between it and what's in the DITA spec.
- Alan; example?
- Kris; the current LwD spec makes a statement on bold text; it makes normative statements in sections where we never put normative text (i.e., the 'formatting expectations' section).
- Alan; Carlos and I are not expert spec editors; we need to know what to change.
[to be continued next week]

12 February 2019:

5. New item: Re: [dita] Groups - DITA TC Meeting Minutes 5 February 2019 uploaded (Kimber, 11 February 2019)
Renamed as Additional thoughts on normative vs non-normative content; formatting versus processing; RFC-2119 keywords (Eberlein, 11 February 2019)
- Kris; Eliot's comment about the minutes led to a discussion of normative vs non-normative material in the DITA and LwD specs.
- Eliot; I agree with your original correction on what's normative/non-normative. I make a distinction between processing and formatting; processing produces the effective set of source files that result from resolving conrefs, links, filtering, etc. After that is complete, what's left is formatting. So processing is both normative and mandatory (except occasionally when you do filtering); data processing has to be done according to normative requirements. The 'should' statements we make about formatting are about getting standard results, and simply reflect the considered agreement of the TC.
- Kris; I define formatting slightly differently; I don't know if it's important. If we have a section called 'formatting expectations' (FE), we need to have a common understanding of some presentation-layer issues. I may think of what Eliot calls 'processing' as pre-processing, and there's other processing that does deal with the presentation layer, e.g., 'shortdesc should be rendered as the first body paragraph'. For those things, we can only make 'should' or 'may' statements, not 'must' statements.
- Robert; I think it is important to make a distinction between formatting and processing. 
- Alan; this is a change in position; in the early LwD review we got feedback that formatting is never normative; is that not true?
- Robert; what can never be normative is what it looks like on the page; bold, italic etc. but some aspects of formatting (e.g. shortdesc and desc in fig and table) may indeed be normative. 
- Alan; for many topics, the distinction between PE and FE was hard to figure out; I started to favor PE as the default place to put things. For LwD, we're not assuming print rendering, though there could be many outputs, including audio and Braille. In our opinion, formatting either declares or implies print.
- Eliot; formatting implies a visual presentation; it could be either print or browser; print assumes pages.
- Alan; formatting assumes text; can we agree with that?
- Eliot; it assumes a formatted presentation of textual content, generally by sighted persons.
- Tom; do audio presentations use formatting?
- Alan; Amazon Echo supports text embellishments.
- Tom; that sounds like the same thing as saying text needs to be 18pt
- Alan; I try to use formatting as 'processors may embellish this in some way so as to make it more meaningful'.
- Robert; processors can always do that.
- Kris; in any case we need to use the normative words only for normative use.
- Robert; in fact, if those words are used in any non-normative way, the OASIS TAB will hold up the review process until we 'fix' things from their viewpoint.
- Kris; we need to have an understanding on the TC on the difference between FE and PE.
- Tom; I'm having trouble with my own personal opinion around that; it seems like a useful distinction; OTOH, it's also useful to apply 'processing' to anything a processor does, so the distinction is ambiguous for human readers. Kris used 'pre-processing' for data processing tasks; is that a reasonable compromise?
- Kris; that's a DITA-OT word, and we should use it as such, and with care...
- Tom; but we should identify the different types of 'processing'. 
- Chris; isn't there a 3rd class of things? I see processing as having 3 parts:
1. processing that operates on XML source, resolving conrefs, links, filtering, etc.
2. processing that does rendering e.g. bulleted lists, shortdesc
3. stylesheet processing that defines fonts, colors, text size, etc.
- Robert; is rendering a better word? when I talk about formatting, I talk about rendering when I mean something that has to be applied.
- Alan; I like that idea. In making these distinctions, where's the dividing line? Really, one bucket would be ideal, but if there are 2 buckets, I like rendering better than formatting.
- Tom; or we may just have an overloaded word, 'processing'.
- Kris; reminder; these 2 sections - PE and FE - are new for 2.0. Part of doing this structure for 2.0 was to bring increased clarity, and to make sure we had the right normative statements that need to be tied to conformance. This was our first pass at taking material that had been lumped together and separating it. Robert and I discussed formatting vs. rendering. We also wondered if we should just exclude anything about rendering from the spec. 
- Robert; as an example, the discussion on bullets might not need to be there. The question is, for an implementor, 'what do I need to know, regardless of how it's being rendered?', 'what are the rules I need to know to get it rendered correctly?' Current state is based on the content that existed; it doesn't imply that content was entirely correct or appropriate. One of my operating assumptions was that most of what ended up in FE will end up getting discarded, because it doesn't belong there anyway.
- Kris; OTOH, we do have some stuff, about generating multiple outputs from common source, that we want to consider keeping.
- Robert; I'm not saying we should get rid of all of it, just some.
- Chris; in working on 2.0 proposals, I've wrestled with these sections; my proposals mostly had nothing to do with rendering. But, OTOH, when I was considering rendering of mediaobject, while there were no formatting or processing expectations, there were rendering expectations. In general, I find rendering a more useful word than formatting.
- Robert; I agree, but we want to tread carefully; I thought differently a year ago.
- Kris; our original thought was to use rendering expectations, rather than formatting expectations. I can't come up with a good answer today, but we do need answers to a number of questions. What sections are in the spec, and what should those sections contain? What do we mean by those terms? I think we need a broader definition than Eliot's for processing in the spec; we'll need to state formally what we mean by processing, and what we mean by a processor.
- Eliot; I'd suggest one way to determine the difference between processing and rendering could be: if behaviour is mandatory, it's processing; if not, it's rendering.
- Kris; but in the spec, we have historically made statements about rendering shortdesc, and I continue to consider them necessary.
- Eliot; but those are still rendering, not processing
- Chris; I agree
- Kris; I think some of my concern is that, for the purpose of some element ref topics, it's useful to have all statements that include RFC-2119 keywords in a single section; currently, all those are in PE. In FE there are only statements like 'processors typically do XYZ'; we could just get rid of all of those.
- Robert; but shortdesc is weird; it often doesn't get rendered because users think it's metadata, so it gets taken out.
- Chris; depending on output, you might or might not show a shortdesc, and there's also how keyref works. 
- Eliot; the spec says 'shortdesc behaves as if it's the first p element of body', but people make bizarre choices all the time, so users need to complain to tool providers if their tools are doing things with shortdesc that are problematic.
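The shortdesc convention under discussion, sketched as a minimal topic (content invented for illustration); the 'should' is that a renderer presents the short description as if it were the first body paragraph:

```xml
<topic id="install-client">
  <title>Installing the client</title>
  <shortdesc>The client installs from a single signed package.</shortdesc>
  <body>
    <p>Download the package and run the installer.</p>
  </body>
</topic>

<!-- Typical rendering (a 'should', not a 'must'):

     Installing the client

     The client installs from a single signed package.
     Download the package and run the installer. -->
```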
- Kris; and we provide support for those users by putting a 'should' in the spec. So, how can we move forward on this item? We seem to have a consensus from Chris and Eliot to distinguish between PE and RE; PE should have 'must' statements, while RE should only have 'should' or 'may' statements. Or should we get rid of typographic stuff completely?
- Robert; one question; is this somewhat clearer in your mind now, Alan, even though it's not entirely clear?
- Alan; are multiple buckets necessary and useful to readers of the spec?
- Robert; shouldn't there be at least one for processing, one for rendering?
- Alan; I just need to know to continue with LwD.
- Kris; I think we need multiple buckets to deal with stuff that requires RFC-2119 statements, and stuff that doesn't. So the question is 'do we keep non-RFC-2119 content in the spec, or do we get rid of it?'
- Eliot; non-RFC-2119 stuff serves the same purpose as examples; it offers guidance even if not normative.
- Chris; I think there's an audience of implementors that needs RFC-2119 material; and another audience of stylesheet writers who needs the rendering guidance.
- Robert; I think they're better served by separate sections myself. I don't like the idea of one having normative language and one not, but I'm not sure how to define that. I would like to define the distinction.
- Chris; my 3 buckets would be:
1. normative processing
2. normative rendering
3. non-normative rendering
- Kris; just fyi, OASIS says everything is normative in the spec except for introduction, TOC, examples, notes, appendices, and stuff explicitly labeled 'non-normative'. If there's something in the spec that's non-normative, we need to identify it.
- Chris; maybe we should have a non-normative topic for each element that requires non-normative guidance, akin to the translation guidance.
- Kris; that sounds like a good idea. 
- Alan; I'm not embracing that for LwD, without further thought, but it might be appropriate.
- Kris; it could be a peer topic to recommendations for translators. I'd be happy to take an action item to move typographic material into those types of topics, so we have something to look at next week. There aren't that many topics that have this kind of content. Then we can revisit this next week with examples.
***ActionItem: Kris will move non-normative formatting guidance to separate topics to illustrate Chris's suggestion.

19 February 2019:

7. New item: Reworked element reference topics (Eberlein, 19 February 2019) 
- Kris; I've removed 'Formatting Expectations' (FE) material from element ref topics and moved it into a single appendix; also changed sections titled 'Processing Expectations' (PE) to 'Rendering Expectations' (RE), and sent new stuff to TC list so folks could 'see' it.
- Alan; is the big picture purpose of this proof of concept? what would you like us to assess?
- Kris; we made a decision last week that we would 1) move FE stuff to non-normative appendices and 2) take FE guideline type stuff from the PE section, and then put it in a section called RE. 
- Alan; how did we decide on 'rendering'?
- Kris; hard to know without minutes, but we made the decision that PE was RFC stuff, but RE was stuff that should occur because people have common experiences and should not be surprised, e.g., shortdesc should be rendered in text.
- Chris; PE has to do with anything a pre-processor has to do; RE has to do with broadest processing language about what processors should do about rendering content; FE has to do with style, fonts, subscript, etc.
- Alan; I like your description of how an element should be rendered in output content, but I don't want it to be text or exclusively text. OTOH, some REs, like 'desc', could mention text in a 'may' context.
- Chris; wrt desc element, for fig and table, it's a 'should'; for xref and linking, its a 'may'. 
- Alan; it's content-centric. 
- Kris; good to be reminded that there are many types of content that are not text-centric. Do you think we've adequately covered this item, or do we need to return to it?
- Alan; would like to continue to revisit this as we continue on LwD framework.
- Tom; will we have another review of these topics? I see, looking at 'desc', it doesn't mention 'object', which bothers me.
- Kris; wrt the review we had, only a few people participated, we should do it again, and have more folks participate, and have more definitions of what we think is appropriate content, and do we need more RFC statements?
- Tom; so another DITAWeb round?
- Alan; what are the boundaries of DITAWeb review? just the DITA elements that have LwD equivalents?
- Kris; yes
- Robert; if we manage to refactor other elements before the review starts, would be nice to include them, but otherwise what Kris said.
- Kris; if we add anything to this DITAWeb review, it should be @ content; that's where we'll need to do significant work.

8. New item: Proposed review of DITA 2.0 elements to LwDITA components (Evia, 09 February 2019) (Eberlein, 12 February 2019) 
[discussion below also has applicability to agenda items #9-#13 and #15 listed below]
- Kris; I'm assuming this is in relationship to LwD ???
- Carlos; this was based on old PDF element descriptions for 2.0. We took elements that were in both DITA and LwD, and extracted PE/FE to see if they would translate to LwD without changes, or if they needed changes. In yesterday's LwD mtg, we decided not to do that until the DITA TC is comfortable with it. We want to be as compatible as possible, or as much the same as possible. But I see a lot of stuff that will require conditional processing to remove XML terminology. So we want to see if the TC agrees; we have a wiki page where we pasted all the elements and PE/FE/RE to see if they fly with LwD, or can be easily conditionalized, or if they need serious rework. Does the TC think this is a worthwhile experiment, and is this perception accurate?
- Eliot; so this is to see if we can be semantically identical, which is what I see as necessary.
- Carlos; we want to see what can be reused with new language, and what needs to be conditionalized with a new @props value.
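The conditionalization Carlos describes might look like profiling shared sentences with @props and filtering per output; the 'dita'/'lwdita' values and the DITAVAL file here are hypothetical sketches, not agreed TC practice:

```xml
<!-- In a shared element-reference topic, spec-specific wording is profiled -->
<p>The <ph props="dita">element</ph><ph props="lwdita">component</ph>
encloses a phrase of text.</p>

<!-- lwdita.ditaval: the build of the LwD spec excludes the DITA-only wording -->
<val>
  <prop att="props" val="dita" action="exclude"/>
  <prop att="props" val="lwdita" action="include"/>
</val>
```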
- Kris; my assumption, like Eliot's, is that LwD must be compatible and interoperable with DITA. If so, we'll have to be very careful about having both specs be semantically, if not formally, identical, where they overlap. If we can't do well in conditional processing, it's not a good sign for us as guardians of DITA.
- Alan; I've listed issues for the 'data' component. We strongly don't want to use the term 'element' rather than 'component'. We also have a subset of behaviors in LwD.
- Kris; So you seem to be mentioning 2 different issues: 
1. you don't want to use term 'element'. 
2. you think data functions differently.
- Alan; in LwD, 'data' isn't for specialization; it's a container for name/value pairs. It technically supports specialization, but for reasons of adoption, we don't want to mention that.
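Alan's 'container for name/value pairs' usage might look like the following DITA-style sketch (names and values invented; not validated against the XDITA grammar):

```xml
<topic id="sensor-card">
  <title>Sensor card</title>
  <body>
    <p>Key specifications:</p>
    <data name="voltage" value="3.3V"/>
    <data name="interface" value="I2C"/>
  </body>
</topic>
```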
- Kris; so it's not that it actually operates differently, but because of the LwD audience, 'data' isn't intended to be used for specialization in LwD.
- Robert; in general, it works the same, but it's severely constrained, and has a content model that isn't the same as full DITA; so in the DITA spec we call out things that can't be mentioned for LwD. 
- Kris; and so it may need a very different description than the one used for full DITA.
- Alan; that's our premise in a nutshell; many of these need different descriptions or usage information.
- Kris; that's not surprising; we'll just have to be very careful that they're aligned; an implementation of LwD should work in full DITA. 
- Chris; to phrase it the other way; a full DITA implementation should be able to handle LwD without modification; otherwise it's not a subset.
- Alan; the LwD spec should be sufficient for guiding implementors; when XDITA is included, then the DITA 2.0 spec should define behaviors of DITA content. That helps me to understand how they don't need to be semantically identical, or at least not literally identical.
- Kris; you're teasing out areas where there needs to be very different usage info, like 'data', where the purpose of an LwD 'component' is very different from the corresponding element's use in DITA, where DITA is a superset of LwD.
- Alan; I'd say a significant superset.
- Kris; does it matter whether we say a superset or subset? 
- Robert; no
- Nancy; and is one a 'proper' subset/superset of the other? In other words, will there be anything in LwD that isn't in DITA? Seems to me there shouldn't be...
- Alan; there are likely to be full DITA implementations that don't support LwD, in particular HDITA or MDITA. Conformance to LwD and to DITA are going to be 2 different things; implementors might do one or both. We envision having different implementations. 
- Chris; the presence of HDITA and MDITA complicates the subset/superset question, but XDITA must be a proper subset of DITA. 
- Eliot; so if you have a conforming LwD processor that only takes MDITA, and doesn't ever instantiate XML, it would be a conforming LwD processor but not a conforming DITA processor.
- Alan; I agree; there's no source format exchange expected or required; all sources are equally viable; no required conversion to XDITA and validation of XDITA.
- Eliot; conformance rules for DITA are very light, somewhat useless for requiring anything useful; e.g., if you have 'MUST' statements, you must do those things, but 99% of DITA is 'should' statements, which means it's mostly meaningless.
- Kris; wrt that, we barely managed to get 1.3 thru the OASIS TAB with our conformance; for 2.0 we have to do much better to get it thru TAB. We'll have to have RFC statements linked to conformance targets.
- Robert; based on what we expected 2.0 spec to look like.
- Kris; and the LwD spec will need to have equally rigorous conformance statements with conformance targets. The only way we got thru 1.3 was by arguing that we couldn't break backwards compatibility, and we can't do that for 2.0 or LwD.
- Eliot; An unavoidable aspect of a standard like DITA is that conformance is tenuous; we can define it in terms of contents, but processors' conformance is much fuzzier; and LwD will be even more tenuous.
- Kris; I think we can make conformance statements around 'should' statements.
- Eliot; yes, but a processor can still conform even if it doesn't do the things the 'should' statements say it 'should', since they're not 'must'.
- Kris; it's still important to use the 'should' statements, to give processors better guidelines and make application conformance easier; even so, it's better than most OASIS specs.
[continued to next week. please read thru Carlos email. think we'll have to have an intensive side by side reworking of material. We still haven't addressed issue of 'element' vs 'component', or how LwD spec will handle @s.]

05 March 2019:

8. Continuing item: Reworked intersection topics; "Rendering expectations" and appendix topic for "Formatting expectations" (Eberlein, 19 February 2019) 
- Kris; I thought we'd decided to move formatting expectations to an appendix, and move much of processing expectations (PE) to rendering expectations (RE). But the meeting minutes don't make that clear. So I want to check that the TC likes this method and wants to go forward.
- Alan; and you've done this and it's in the spec source?
- Nancy; did anything end up still in PE?
- Kris; not in these topics, I think...
- Tom; only in navtitle element.
- Kris; and what about that entry?
- Tom; I think it should stay in PE
- Nancy; navtitle can be used for other things, not just rendering, so I agree with that.
- Eliot; I agree, this is processing, not rendering.
- Kris; so we do have a clear sense about when PE is still valid, as in navtitle; are we, as a TC, happy with these changes?
- Alan; putting very minor bits of info in a different place that a user has to go to and find doesn't make me happy.
- Kris; we have 3 choices;
1. we can remove the formatting info from spec entirely.
2. we can put them all together in an appendix, as I've done.
3. we can include FE in topics, but every time we do, we have to note that they're non-normative.
- Alan; can that be unobtrusive?
- Kris; no, by OASIS rules it has to be very obvious, in a parenthesis under the section title.
- Alan; are there other sections that are defined as non-normative?
- Kris; as I remember; appendices, examples, and notes are the only sections defined as non-normative.
- Alan; what kind of note is required?
- Kris; OASIS suggests a parenthetical statement in bold right at the beginning of the section. 
- Alan; my comfort would still be higher in that approach.
- Kris; I'd like to wait till we have Chris on the call.
- Robert; as an implementor, I prefer the appendix route; it makes it easier to reference. And it makes me more confident that an implementation will follow all the FEs. As an end-user, I can understand wanting them to be in the individual topics, but that's not our primary audience in the spec.
- Kris; I'd tend to agree, given audience for 2.0 spec. This is parallel to our topic on recommendation on DITA in translation.
- Alan; OK, this isn't a gatekeeper for me, but...
- Kris; we'll leave this open till next week, and hopefully Chris will be there.

10. Continuing item: Proposed review of DITA 2.0 elements to LwDITA components (Evia, 09 February 2019) (Eberlein, 12 February 2019) 
- Carlos; we looked at Kris's template, and started to review it wrt using it for LwD. Without a doubt, there will be many instances where we'll need to put conditional processing in 2.0 topics in order to use them for LwD. If you look at message attached to this item, it's a list of elements borrowed from 2.0 draft and the beginning of an analysis of what will work and what won't work, and how to propose a solution. To do this, we may need to work directly on the 2.0 topics and add stuff that will be hidden in 2.0, but show up in LwD spec.
- Kris; we need to figure out where to move forward. At end of day, LwD spec is published by the DITA TC, not by LwD SC. So the TC has to be very clear about what it will mean to have alignment between 2.0 and LwD. 
- Carlos; Alan and I have been discussing this; one way it could work is for me to work on the 2.0 topics, on the same source. At first, we were only going to make changes to shortdescs, but if we're going to have alignment, we need to be working on the same files. Maybe we need to have both specifications in the same Github repo.
- Robert; I agree you need to be looking at the topics; it doesn't make sense to be doing reuse without actual reuse. We talked about setting up a sub-repo for LwD, but it's a bit abstract. For the moment, whatever you want to change, you need to consult with Kris and me. I don't know what filtering will need to take place yet; we could use subjectScheme. So we need some basic prep work.
- Kris; I suggest 2 things:
1. set up a joint call with LwD and 2.0 spec editors, and look at a few gnarly LwD topics, with an eye to developing filtering ideas.
- Carlos; that sounds good. we're going for the same types of audience, but our readers will be looking for different things.
- Robert; I also note; all your changes should be put in using a pull request; that sounds standoffish, but that's the way github works.
- Kris; and I also do pull requests, which do validation. I've never changed spec files directly; I always use pull requests. 
***ActionItem Carlos & Alan will pick 3 topics to go over; next week we'll take up questions of creating a sub-module for LwD.
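The filtering approach discussed in this item might be sketched as follows, assuming an attribute-based profiling mechanism (the use of @audience, the attribute values, and the DITAVAL contents here are illustrative assumptions; the TC had not yet settled on a mechanism such as subjectScheme):

```xml
<!-- Hypothetical shared language-reference topic fragment:
     paragraphs profiled per deliverable via @audience -->
<section>
  <title>Usage information</title>
  <p>Content common to both specifications.</p>
  <p audience="dita20">Content that appears only in the DITA 2.0 spec.</p>
  <p audience="lwdita">Content that appears only in the LwDITA spec,
    such as MDITA and HDITA syntax notes.</p>
</section>

<!-- Hypothetical DITAVAL filter for building the LwDITA deliverable:
     exclude the DITA 2.0-only content -->
<val>
  <prop att="audience" val="dita20" action="exclude"/>
  <prop att="audience" val="lwdita" action="include"/>
</val>
```

Each deliverable's build would pass its own DITAVAL file to the processor (for example, via DITA-OT's --filter option), so a single set of source topics could produce both specifications.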

19 March 2019:

 1. Continuing item: Reworked intersection topics; "Rendering expectations" and appendix topic for "Formatting expectations"
 * (Eberlein, 19 February 2019) 
- Kris; at our mtg in Feb, Chris suggested we move formatting expectations (FE) out of language ref topics and into an appendix. Can anyone summarize what happened the last time we discussed this?
- Nancy; general sense was that an appendix is somewhat less useful to users, but more helpful to implementors.
- Kris; right; so we need to make a decision today. We have 3 choices about FE:
1. leave them out entirely.
2. leave FE material in the body of ref topics, but mark each FE as non-normative.
3. aggregate all FE material in an appendix.
- Chris; I like the appendix option; it concentrates non-normative info in one place, and keeps ref topics cleaner. I think the spec, including language ref, is for implementors first and users second. Most of the FEs we have will be known to authors and expected, so users wouldn't need to go there to find these things out.
- Robert; I like the appendix approach from implementation perspective. As an author, my tools will be better when my tools implementors have easy access to this stuff. 
- Bill; for the most part, users don't look at the spec; implementors always do.
- Kris; formatting is so implementation-dependent. having an appendix would be useful for company info architects to check out when they're deciding on their own formatting choices.
- Nancy; and the FEs we put in are the ones that are so standard that they're expected.
- Tom; right, 90%-99% of people do things the way we document in the FEs.
- Chris; that's almost an argument for not having anything, but we should have something.
- Tom; one real need for this appendix is if somewhere there's an info architect who needs to argue to their team about rendering expectations.
- Chris; so we need to look at our expectations wrt things like dlhead.
- Kris; all of this FE material has been in the spec since 1.0. The only changes made were in 1.3, when it was pulled out into its own section, and now it's in an appendix.
- Kris; so do we want to go ahead with this, or hold it for a TC mtg where Alan is present?
- Robert; we can go forward; 2 wks ago he wasn't so opposed as to want to stop it.
- Deb; I've been looking at the appendix to see if this is complete, and it looks as though it is, so I'd say go with it.
- Carlos; Alan says [via text] that he'd prefer it in topics, but "it's not a hill he'll die on".
- Scott; as long as there is a place where users can find it, that's ok with me.
- Kris; hearing consensus that we should go ahead on this.

LwDITA-thread (last edited 2019-04-29 18:21:46 by kjeberlein)