These are unedited transcripts and may contain errors.
Database Working Group session, 14 May 2014
2 p.m.
CHAIR: Welcome to this session of the Database Working Group. My name is Wilfried Woeber from Vienna University. Thanks to those who are on time to be here when we are going to start the Working Group meeting. We have got a one-and-a-half-hour slot today and we are having a pretty full agenda, so let us get into operations immediately. Usually we start with a couple of administrative matters.
The welcome: I have done that. The reminder for you to observe the microphone etiquette whenever you speak up and contribute, please state your name and speak as clearly and slowly as possible in the interests of our stenographers.
The next thing is a couple of logistic things. First of all, to start out with the announcement that our team of three co-chairs for this Working Group is going to be down to two, as Wolf is stepping down for personal reasons and because he is no longer able to be reliably present and to contribute. From my end, thank you for all the things you have done in the past and contributed to the Working Group. For the next round, Nigel and myself will still be there. We are going to arrange for an update on the leadership for this Working Group, but personally I'd like to do that in a proper way, with a proper amount of preparation, calling for interested parties to become co-chairs or, during the next time, maybe confirming that some of us should stay on board for a while.
So this was the formal announcement. Again, thanks Wolf. Next thing is to go through the proposed agenda. It is displayed here on the screen. I distributed version 3 yesterday as the most recent draft agenda. There is a minor editorial glitch here on item C... no, on item D, because the first paragraph is the up-to-date one and the second one is a leftover from version 2. My apologies, it was probably my fault.
So, is there anything regarding the agenda that you want to see changed or added or removed? No? Thank you. So this is going to be our agenda for today, for this meeting.
Next thing is review of the open action list. Nigel Titley, who is also going to take the notes for this meeting, thanks for that, is going to walk us through the open actions. This is going to be pretty embarrassing for me, but still, we have to go through this exercise, and so I'd ask Nigel to come to the stage.
NIGEL TITLEY: This is the action point roundup. We do this every meeting, basically looking at what actions were set up in the previous meetings and what the progress has been on them.
So, there is still one hangover from RIPE 66, which is on Peter and myself and Wilfried to raise the issue of internationalisation with the NCC Services Working Group Chairs and also on the Database Working Group action list.
WILFRIED WOEBER: I guess the current state of that is that the RIPE NCC has done the homework and has confirmed to us that the machinery itself is 8-bit clean and UTF-8 ready, but what we intended to do is to communicate that fact to the other Working Groups, and this has not happened yet. This is going to be done pretty soon now, because I don't want to be embarrassed another time.
NIGEL TITLEY: Myself as well, remember. So...
Right. Hopefully this won't be here next time.
RIPE 67. Okay, we have a number of action points from 67. One is on the RIPE NCC to report on known bugs in the RIPE database. I understand that will be done later on in the meeting.
67.2, implement changes, as already announced, to the release process. Again, that will be reported on later.
Now we start the embarrassing bit for Wilfried. 67.3, to ensure that the next RIPE NCC Services Working Group (that's this one) includes a session on the operational aspects of running the RIPE database. I think we already have that actually, don't we?
WILFRIED WOEBER: I think so, because we had the operational update.
NIGEL TITLEY: That's good.
67.4. Take the issue of geolocation to the Services Working Group, as it doesn't seem to be heavily used. Despite all the work we put into it, nobody seems to be using it much, so we need to find out whether people want it.
WILFRIED WOEBER: Two comments on this one. The first one: it is listed as one of the minor points on the agenda for today, to re-raise this issue, and we had a couple of private discussions earlier this week which are relevant to that. One of the things that relates to it is this:
In the interest of time, I'm going to do the shouting exercise.
Some of you may have read or listened to the ideas from within the measurement community, the MAT Working Group, about this crowd-sourced geolocation thing, and at the moment we do not understand whether there is a potential interaction or a potential benefit for this exercise, or for this idea if and when it gets implemented; whether it would have anything to do with the geolocation feature in the RIPE database. So, contrary to what I had proposed and planned for this meeting, which was to propose removal of the geolocation feature, we are not going to do that today. Instead of proposing to remove the capability in the database, we are just going to ask for use cases and for feedback. So this is still open. Let's keep this one open, but this is the current state.
NIGEL TITLEY: 67.5, again on Wilfried, to check that the Anti-Abuse Working Group has properly specified the abuse information that should be sent to the abuse contact, and possibly its format.
WILFRIED WOEBER: Not done, and I have not had the resources to follow up on any recent discussions or be present at the session, which is probably tomorrow. Brian, do you want to speak to that?
BRIAN NISBET: Hello. So, the agenda item for that piece tomorrow says "long pending". It is an action on myself and Tobias to work on that, which we haven't done, so there is essentially no update at this point in time. When it will happen... I would be loath to commit to a timeline right now. I'm kind of hoping that we'll get some traction on it over the next 6 to 12 months, you know, within that time frame, rather than start working within that time frame. So there is nothing there right now.
NIGEL TITLEY: As far as Wilfried is concerned we can consider this action point discharged.
BRIAN NISBET: I think that's entirely fair. It's on us now, either from the Anti-Abuse Working Group point of view or as private Internet citizens, to come up with a policy for that.
NIGEL TITLEY: Thank you. Basically this action is transferred to the Anti?Abuse Working Group.
BRIAN NISBET: Just to confirm by the way, the policy, if a policy does arrive out of that and arrives out of data verification, it will be brought to here. We don't intend to do that in Anti?Abuse, but right now, it's either on Anti?Abuse or just on me and Tobias to take the next steps on that.
NIGEL TITLEY: But the rock has been thrown over the wall and you are preparing to throw it back at some time.
BRIAN NISBET: Pretty much.
NIGEL TITLEY: Finally, to bring to the attention of the Routing Working Group the fact that the routing data as recorded in the database is generally very poor. That was actually brought up, I think, in the Address Policy Working Group this morning, which also agreed to prod the Routing Working Group, as far as I recall, so there should be a two-pronged approach on this one.
WILFRIED WOEBER: Yes, and as a follow-up on the discussion we had earlier today about the recycling of 16-bit AS numbers, this is also going to add to this whole problem space, and we should, together with Rob and some others from Routing, try to find out what should be done, what could be done, what needs to be done, what must or should be done.
ROB EVANS: Is there a background to this, as in which bits of data are recorded as generally very poor? Last meeting, or the meeting before, we had a presentation, I think it was by Alexander, and I think there was a difference of opinion in how the routing data was being interpreted, inasmuch as some people interpreted the RPSL as trying to model the Internet, whereas others used it as a far more conservative record of peering relationships.
WILFRIED WOEBER: That's true, but we also recognised during this discussion about 16-bit AS number recycling that we have different mechanisms, or no mechanisms at all, to do some sanity checking against the data which is put into the routing registry, and this may have something to do with this statement.
NIGEL TITLEY: It does depend on which paradigm we are actually trying to represent.
WILFRIED WOEBER: Correct.
NIGEL TITLEY: Because if it's a representation of peering relationships or possible peering relationships then there is no way of checking it.
WILFRIED WOEBER: Correct. And there are even two parts of the game: one is the sort of peering relationship which is in the documentation of the autonomous system number, and the other thing is actually the route object entries, which need the consent of the IP address holder, so there is a better sanity check with the latter part of the data than with the other one.
NIGEL TITLEY: Yeah... Alex?
AUDIENCE SPEAKER: Alex Band from the RIPE NCC. If we're purely talking about route objects and RPKI and the data quality, we could do something there in providing a single interface to manage both, something like that. We mentioned this at the last RIPE Meeting. We can talk about it. We could also actually try doing something. It's really up to the Database Working Group whether that is something you would like us to pursue. It's definitely something that we can look into.
WILFRIED WOEBER: That's going to come up in one of those items further down...
ALEX BAND: I think the most complicating factor here is that in order to create a route object, you need to authenticate against the aut-num object, which for RPKI is not the case. If you are the holder of an address space you can essentially authorise any AS to originate it. And I still don't understand, I would love for somebody to explain to me why that is an issue, because it isn't an issue for RPKI but apparently it is for route objects. Rudiger...
RUDIGER VOLK: The question is: how is the data being used, what is the intention of what is being documented? There are parties around that view the route object as the documentation by the AS of what it is originating. The interlock of the authorisation, of course, is beneficial, so that I'm not allowed to claim your address space and hijack it. But in my mode of operation, if you want to be my customer, I ask you: give me the authorisation to use that address space, and I will reflect and use that in creating the route object. In RPKI things are done differently, and my operational procedures have to deal with the fact that if you withdraw the ROA for me, for my AS, I have to cut down your service, which may be a surprise to you, in particular if you are a large organisation and have a couple of parts operating independently. While in the old RPSL world, in my view of things: I document, you authorised me, I am originating it, and my operational procedure is, if you give me a cancellation, I will withdraw the route and delete the route object, in that sequence, and you potentially will set up the alternate parallel route object for the new service provider to be available for a phase-over. Which, of course, can be done in RPKI too, but the operational question is: does the route object document something that the originating AS is doing, or do we have a situation where we just document the authorisation of the address holder towards ASs?
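The distinction being debated can be sketched in a few lines of code. This is a rough illustration only, not the actual RIPE database implementation; the object text, maintainer names and parser are all made up for the example.

```python
# Illustrative RPSL route object (all values invented) plus a minimal
# "key: value" parser, showing the two parties a route object ties
# together: the origin AS and the maintainer authorising the object.
ROUTE_OBJECT = """\
route:   192.0.2.0/24
descr:   Example customer route
origin:  AS64500
mnt-by:  EXAMPLE-MNT
source:  RIPE
"""

def parse_rpsl(text):
    """Parse 'key: value' lines into a dict of lists (keys can repeat)."""
    obj = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        obj.setdefault(key.strip(), []).append(value.strip())
    return obj

obj = parse_rpsl(ROUTE_OBJECT)
assert obj["origin"] == ["AS64500"]
assert obj["mnt-by"] == ["EXAMPLE-MNT"]
```

Under the traditional RPSL model both the address space holder and the origin AS are involved in authorising such an object; a ROA, by contrast, is issued by the address space holder alone, which is the asymmetry Rudiger describes.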
WILFRIED WOEBER: May I interfere here for a second? Just the last comment, Alex, please.
ALEX BAND: The only thing I really want to know is whether Rudiger regards a route object and a ROA as providing the exact same functionality?
WILFRIED WOEBER: I think this is a very interesting discussion and I am pretty sure that we should have it, at least together with Routing, and I think that you maybe want to take note of this issue as one of the topics that we want to tackle between now and the next meeting. Okay. Thank you. Sorry.
NIGEL TITLEY: I think that's probably the last action point. Yes, it is.
WILFRIED WOEBER: Thank you for giving us the opportunity to spend about a quarter of an hour on the discussion of the action items, and I'd like to ask the colleagues from the RIPE NCC to go ahead with the database update presentation, as we are used to.
DENIS WALKER: I am Denis Walker. The database update will this time be done by myself and Ed, so he will do the first stage.
ED SHRYANE: I am a software engineer in the database department and I'm just going to give the operational update. I have two action items from the last meeting, which are for the RIPE NCC: some statistics and also some changes. So I'll be summarising the changes in the software since the last meeting.
So, starting with statistics. On the software side we now have nearly 3,000 unit tests and nearly 3,000 end-to-end and integration tests. This covers every feature in the software and we're adding to it all the time as we add new features. All these tests are run on every single change, both in our internal continuous integration environment and also in an external public build environment using Travis CI. As for operational statistics, they are on our website, but we are serving about 200 queries per second, about 20 million queries per day, and a couple of thousand updates.
So, the RIPE NCC has two action points from the last meeting. The first one is on issue tracking. We now track all issues publicly in GitHub. They are in the same repository as the code. Issues are raised by users: either we open them ourselves or they can be opened independently by users themselves as well. Since the last RIPE Meeting we have closed 53 issues in GitHub and we have released those to production, and there will be 35 issues fixed in the upcoming release.
Users are also contributing to the code. We have received four patches since the last RIPE Meeting. We see that as a very useful way to extend the software, to extend the usability of the software. If you have an idea, we can help you technically with that as well.
So the second action point was on release management. The first point is that releases are made to a release candidate test environment for at least two weeks before being released to production. We have had three major releases to production since the last meeting, and they have all been released into this environment, along with release notes that describe the changes.
Users can test against this with a live copy of the data: a complete copy of the production data, but with all personal data dummified. Also, maintainer authorisation is simplified: the passwords are the same as the maintainer name, which makes it easy to test with maintainers in this environment.
This also includes all script changes. In addition, in this environment we also run any scripts that we are planning to run in production. We have already run these in the RC environment, so you can see the effect of these scripts. That's important for the upcoming 1.73 release, because we have some changes due to policies 2012-07 and 2012-08. So, please have a look at the RC environment for that.
We have a question: is this environment working for you, and how can we improve it? We'd like to hear your feedback. The RC is a new environment, but for this release, the 1.73 release, we have seen about 2,000 page views in the environment, about 2,000 queries, and about 20 updates. I'm sure we would like to see more activity in this environment, and if there is any way we can improve it, we'd like to hear from you.
Okay, so I have four slides on the changes in the software since the last RIPE Meeting. First of all, we have continued working on RDAP, that's the WEIRDS service, and I'll be describing more about that later. We have added "via" attributes to the aut-num object. That's the result of a patch submitted by Job Snijders, and there was a discussion about that at the last RIPE Meeting and on the Database and, I think, Routing mailing lists. So, that's now in production.
Member resources. Member resources are now covered by an abuse-c attribute. First of all, after the last RIPE Meeting we added the ability in the LIR Portal to add the attribute for member organisations, and last December we ran a script to cover any organisations that had not already updated themselves.
For PI space, we provided a wizard in web updates for PI space holders to add an abuse-c themselves. That's in progress at the moment: currently 33 percent of IPv4 and 44 percent of IPv6 PI space is covered by abuse-c. So users can continue to use that tool. The deadline for adding that is the end of Q3 this year, and after that point we will run a script to cover the rest of that space, but we will be setting the contact to the sponsoring LIR.
Another major change we have made is to introduce single sign-on. I think this is a major improvement, and it's also across the RIPE NCC services, not just for the database. When you are signed in to single sign-on, no passwords are then required for database updates, if you have added an SSO auth attribute to your maintainer. Currently there are nearly 1,000 SSO auth attributes in over 200 maintainer objects, and there have been over 2,000 updates authorised by SSO since that was released.
And we would like to add new features based on SSO, such as authenticated queries, so you would be able to query your own data, including personal data, without any limit once you're authenticated; and also certification of PI resources using RPKI.
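The SSO maintainer authorisation just described can be sketched as follows. This is an illustration under stated assumptions, not the RIPE NCC implementation: the attribute values, the e-mail address and the matching rule are made up for the example.

```python
# Hedged sketch: if the logged-in SSO user matches an "SSO" auth
# attribute on the maintainer, no password is needed for the update.
# The auth line formats below are illustrative, not authoritative.
MNTNER_AUTH = [
    "MD5-PW $1$salt$hashhashhashhash",   # classic hashed password
    "SSO user@example.net",              # single sign-on account
]

def sso_authorised(session_user, auth_lines):
    """True if the maintainer lists the session user as an SSO auth."""
    return any(
        line.split(None, 1) == ["SSO", session_user]
        for line in auth_lines
    )

assert sso_authorised("user@example.net", MNTNER_AUTH)
assert not sso_authorised("other@example.net", MNTNER_AUTH)
```

The point of the design is that the password stays usable alongside SSO: the auth attributes form a list of alternatives, any one of which can authorise an update.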
One other thing we have introduced is pending route authorisation. This is a two-step hierarchical authorisation process: you need to authenticate against the address space, the INETNUM or INET6NUM, and also against the AS, the aut-num maintainer. This provides an alternative way to do it: the address space holder can submit an update independently of the AS maintainer and authenticate the update independently. The update will pend on our servers for up to a week until both authorisations are received.
Another major thing is that we have continued to improve the documentation. This has been, I think, a long process, but now the manuals are linked to the software version, so we are publishing the query and update manuals along with each release of the software, and that includes draft updates which are linked to the release in the RC environment. So the manual updates for 1.74 are already on our website. We are working on a new documentation style, so our manual update process now includes Communications as a reviewer. We have focused on two options; we have done some research on it and we will choose one of these options after the meeting.
Also, we now have a hot node in Stockholm. This is a query node that we're using to handle queries; it's up and running and it is load-balanced with the other Amsterdam servers, so it is accepting queries right now. It is part of production and it will be able to handle queries if the Amsterdam servers go down for any reason.
We have also done a cleanup of references to returned ASNs. That is ongoing; there is a total of 17,000 references to returned AS numbers which we need to clean up incrementally. We are proceeding by attribute type. That's in progress, and we will be starting on that next week, I think.
And the last slide for changes.
We have added a mandatory status attribute to all aut-num objects. This is both a software change and a script to retrospectively change all of the existing aut-num objects, following on from policy 2012-07, legacy address space. So, aut-num objects in the next release will have a status attribute. This is a generated attribute, so users don't actually have to know themselves what type of object the aut-num is; we will automatically add it, so updates won't fail. There are three values: "assigned", "legacy" and "other". Assigned and legacy are within the RIPE region, and "other" is for any aut-num objects that originate outside the region.
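The generated status value could be derived along these lines. This is a sketch only: the real implementation consults a delegated-stats file, as Ed notes later, and the registry sets and AS numbers below are invented for illustration.

```python
# Illustrative derivation of a generated aut-num "status:" value.
# The membership sets are made up; the real source is a stats file.
RIPE_ASSIGNED = {64500}   # AS numbers assigned by the RIPE NCC
RIPE_LEGACY = {64501}     # pre-RIR ("legacy") resources in the region

def autnum_status(asn):
    """Return the generated status for an AS number."""
    if asn in RIPE_ASSIGNED:
        return "ASSIGNED"
    if asn in RIPE_LEGACY:
        return "LEGACY"
    return "OTHER"        # originates outside the RIPE region

assert autnum_status(64500) == "ASSIGNED"
assert autnum_status(64999) == "OTHER"
```

Because the value is generated, an update that supplies the wrong status (or none at all) would simply have it corrected rather than be rejected, which is why updates won't fail.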
Along with the aut-nums, we have also set all legacy INETNUMs to legacy status. This will be done in the next release in production; it has already been done in the RC environment. We have a list of 4,000 top-level objects and 35,000 more specific objects. There is a frequently asked questions page with more details.
The other major change in this release is adding the sponsoring-org attribute to end-user resources. This is based on the 2012-08 policy, and it includes INETNUM, INET6NUM and aut-num objects.
This new software and all the scripts have been run in RC, so it's available right now. I'll hand over to Denis.
WILFRIED WOEBER: May I ask a minor question about the status for the aut-num? Assigned is obvious, legacy is obvious, and "other", I suppose, is copies from other regional registries, to be able to use the AS number. Is that correct?
ED SHRYANE: It's possible to add AS numbers to the RIPE database that are assigned in other regions. We use a dedicated stats file to find out where they originated from.
DENIS WALKER: I am Denis Walker, the business analyst for the database team. I'll take you through the rest of our update.
The unresolved features. These are basically a list of items that have been brought up over the last couple of years, either on the mailing list or in meetings, and which have never actually reached any consensus. We haven't been told "yes, this is a problem, something needs to be done about it, please do it", or "no, everything is fine, just drop it". So we just want to run through these and say: these are the things that have been discussed, or mentioned even if they weren't discussed, and maybe not today, but from now on we would like to actually get some direction from you. Do you want us to do something, or shall we just drop it?
The first one is deprecating the referral-by attribute. This is basically a useless attribute. It had some meaning in RFC 2622, but it was never ever implemented, so now it's just a mandatory attribute that is constantly repeated, and no one can even remember why it was there.
A new flag to request personal data. This was a little bit contentious the last time I mentioned it, because it means reversing the default logic of a query. But look at the actual logic we apply now: you ask for something, we give you that and something else you didn't ask for, and when you have had too much of what you didn't ask for, we block your access to the RIPE database because you have had too much of it. This is kind of reverse logic, because really, if you want the personal information, you should say "give me this resource and the personal information", because you actually want it. But if you just say "give me the resource", we shouldn't be giving you all this extra information and then saying: oh, you have abused the system, you have seen too much of this, now we are going to block access. It is awkward because this is the fundamental default behaviour of the database, so it is a tricky change, but we would like to have some opinions on whether this is confusing the hell out of new users, because they don't understand that there is this "-r" flag which you can add to a query so that you don't get what you didn't ask for in the first place. So it needs a bit of thinking about.
When we actually do block you, it's an all-or-nothing process: when you reach this limit of stuff you didn't ask for, we now say, sorry, you can't access anything at all in the RIPE database. This again is rather crazy, old-fashioned logic, because what we should be saying is: you have seen too many personal objects, we are no longer going to show you any personal objects today, but the rest of the database, all the operational data, you should still be able to see. We'd like to change this blocking mechanism so that we only block access to the personal data if you have seen too much of it.
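The proposed behaviour could be sketched like this. The object types, limit value and function shape are illustrative assumptions, not the actual server logic.

```python
# Sketch of the proposed change: once a client has fetched too many
# person/role objects, withhold only the personal objects and keep
# serving operational data, instead of blocking the whole database.
PERSONAL_TYPES = {"person", "role"}
DAILY_LIMIT = 1000            # illustrative accounting limit

def filter_response(objects, seen_so_far):
    """Return (objects to serve, updated personal-object counter)."""
    served = []
    for obj_type, body in objects:
        if obj_type in PERSONAL_TYPES:
            if seen_so_far >= DAILY_LIMIT:
                continue      # drop personal data, keep serving the rest
            seen_so_far += 1
        served.append((obj_type, body))
    return served, seen_so_far

# Client already at the limit: operational data still comes through.
out, n = filter_response([("inetnum", "..."), ("person", "...")], DAILY_LIMIT)
assert [t for t, _ in out] == ["inetnum"]
```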
Internationalisation, which Wilfried briefly touched on. Technically we are almost ready to do it, but what's holding us back is the fact that we don't know what business rules or syntax rules to apply to the data that might be coming in. You can think of it as three kinds of categories. As a registry, should there be information which we only accept in English? If so, which bits of the information should we only accept in English? Are there bits which we would happily accept in both English and in some other language or encoding? Is there some that we wouldn't really care about: put it in however you like? In the absence of this kind of policy, we can't write any business rules or syntax checks to make sure that the database content is what you want it to be. Until we get that, from a technical point of view we are stuck; we can't move forward on this at all. So it's basically in your hands again.
There has been quite a lengthy discussion on the changed attribute recently. It was suggested that we should drop, or replace, the changed attribute with created and last-updated attributes. Some people were saying, yeah, it's fine to have these new attributes, but please leave the changed attributes alone, we want to hang on to these things. So I did a few stats on the changed attributes. We currently have 7.4 million objects in the database, which contain 8.6 million changed attributes. These changed attributes have 105,000 unique e-mail addresses sitting in them, and I don't think anybody really wants their e-mail addresses just lying around in this database. Then I looked at the first changed line date in each object and compared it with the actual date the object was created, because we have timestamps in the metadata, so we know when everything actually did happen in the database, and I looked at the last changed line date and compared it with the last time the object was modified, and basically 27.4 percent of them don't match. So almost a third of the data has changed attributes which incorrectly imply when things were created and modified. George?
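The consistency check Denis describes amounts to comparing the dates on an object's first and last "changed:" lines against the real timestamps in the metadata. A minimal sketch, with invented dates and a function name of my own choosing:

```python
# Sketch of the stats exercise: does an object's changed history agree
# with the database's own created / last-modified metadata timestamps?
from datetime import date

def changed_dates_match(changed_dates, created, last_modified):
    """changed_dates: dates parsed from 'changed:' lines, in order."""
    return changed_dates[0] == created and changed_dates[-1] == last_modified

good = [date(2010, 1, 1), date(2013, 6, 1)]
assert changed_dates_match(good, date(2010, 1, 1), date(2013, 6, 1))
# An object whose first changed: line predates its real creation fails:
assert not changed_dates_match(good, date(2009, 1, 1), date(2013, 6, 1))
```

Running exactly this comparison over all objects is how one arrives at a mismatch percentage like the 27.4 percent quoted above.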
AUDIENCE SPEAKER: George from APNIC. This has caused considerable discussion in our region. It always confused us that a field that should have been a read-only attribute under the control of the database logic was entirely editable by the person maintaining the object, and that they could erase change history; it made very little sense. And I like your second bullet about the list-versions query, but there is a problem, because if you delete objects and then recreate them, list versions doesn't recover past the deletion point.
DENIS WALKER: Just wait a few more slides and we'll get on to that.
So, basically, as George said, there is this list-versions query which shows you the history of when things were changed, so you can actually see how it differs from the changed attributes. To be honest, since we auto-generate these two attributes, created and last-updated, there is little point in the changed attributes lingering on forever and a day.
Improved dummification. This is another topic that was brought up well over a year ago. We dummify personal data in different locations: we do it in the split files on the FTP site, we do it on the NRTM streams, on the GRS, in RC, but we have different rules for different places, and we have been criticised a number of times for being too draconian in what we actually dummify. We had this proposal to be a little less severe; for example, we only get rid of the second half of a phone number, we only get rid of the first part of an e-mail address, we keep the last line of a postal address, which is usually the city. Again, we got very little discussion, very little feedback, so we'd like to know how you would like us to move. But we would like to have one version of dummifying which we use wherever data is dummified.
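The three "less severe" rules just listed can be sketched directly. This is an illustration of the proposal, not the RIPE NCC's dummification code; the masking characters and helper names are my own.

```python
# Sketch of the proposed dummification rules: keep the first half of a
# phone number, the domain of an e-mail address, and the last line
# (usually the city) of a postal address. "***" masking is illustrative.
def dummify_phone(phone):
    parts = phone.split()
    half = len(parts) // 2
    return " ".join(parts[:half] + ["..."] * (len(parts) - half))

def dummify_email(email):
    _, _, domain = email.partition("@")
    return "***@" + domain

def dummify_address(lines):
    return ["***"] * (len(lines) - 1) + [lines[-1]]

assert dummify_phone("+31 20 535 4444") == "+31 20 ... ..."
assert dummify_email("jan@example.net") == "***@example.net"
assert dummify_address(["1 Main St", "Amsterdam"]) == ["***", "Amsterdam"]
```

Having one set of functions like this, applied identically to the split files, NRTM, GRS and RC, is exactly the "one version of dummifying" being asked for.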
So now we have a number of upcoming features. These are things which either users have asked for or we have picked up from training courses, from feedback to our customer support, or from meetings. Lots of little things, and some big things, where people would actually like something to be changed. So I'll run through these fairly quickly.
One is from the member survey last year, about the user experience. Most of our web tools, like web query, web updates, sync updates, were all originally written round about 2001, as completely isolated services. We have since tried to merge them into one; for example, when you do a web query, there is now a button if you want to update the object. But it's a clumsy approach. What we'd like to do is take another look at this and see: how do you actually use the database in your daily work? What is your workflow? Do you look at something and then want to change it, so you go and edit it, and perhaps you go over to the LIR Portal and do something? We want to know what your workflow is and try to improve the experience, so you can seamlessly move through these tools from one bit to another without having to jump from one separate tool to another.
We'd like to review all the error messages. Again, a lot of these error messages were left over from the original version of the database in 2000, 2001. Yes, some of them were my fault, I admit. They were written by engineers, with in mind what we need to know about why something fails so that we can investigate it. They weren't actually written for you to easily understand and think: oh, I know exactly what that problem is now, because this error message is so clear. So we'd like to review them all and rewrite them with a more user-friendly mindset.
One issue we thought of, if we do this, is: do any of you actually parse the text string of an error message? We know it's a silly thing to do, but people have done sillier things, so if we start rewriting all the error messages, we hope we don't get people saying: oh, you have just broken my software because it no longer says what it used to say.
So, if anybody has a concern, let us know, but hopefully nobody will be talking to us on this issue. But then there is another point here, because a lot of people have used the database for 20 years. You know why things fail: you just forgot something or typed something wrong, and a very simple pointer is enough for you to know what's wrong. New users perhaps need a much more verbose explanation: what went wrong, what can I do to fix it? So maybe we should be looking at having two layers of error messages, where you can select a more verbose one, so that we have a very short, cryptic one for the experienced users and much more detail for newer users.
The history query, as George has said, at the moment doesn't show any deleted data. Now, it doesn't show personal data and it's not going to show a history of personal data; that's a legal issue. Once you delete personal data it's gone, and we don't show you old versions of it. But for everything else, it was an arbitrary decision when we implemented this that we only show existing objects and we only show them back to the last deletion point. So if you have an INETNUM and you delete it and recreate it and modify it a couple of times, the history will only show you those last couple of modifications. This doesn't help you if you accidentally delete the wrong object and think: what did that look like five minutes ago? I'm sorry, we don't show you deleted objects. As it was an arbitrary decision, and I have spoken to our legal team and they don't see any issues with us showing deleted objects, we suspect you would prefer us to actually show you the full history of operational data in the database.
The role objects. This is another thing going back many years, because people have long since forgotten how person and role objects should relate to each other. For many years they have been used interchangeably, and invariably people use a person object when it should have been a role object. Well, really, the original intention was: a person object contains personal information; a role object is purely business, it groups people who perform actions in the database. We would like to move back towards that concept, where a role object should not contain personal information. Before we started adding abuse-c, which obviously uses role objects, we only had about 10,000 role objects in the database. So what we'd like to do is propose a certain date; after that date we will consider role objects no longer to be personal, and we'll contact all those 10,000 role object holders and ask them to just check and make sure that it's business data and not personal data. Some of the advantages of this: the role object will be available in the history, it will be available in the split file, and you won't get blocked for looking at role objects. It will become a normal object, which it was always intended to be.
Converting IRT into role. We have been having a bit of a private conversation with the security guys in the background. IRT was a good idea but nobody understood it. You can't even pronounce it; nobody even knows what it means unless you are in the security world. And really, it is only a specialised version of a role object; it just has a couple of extra attributes which can easily be added as options into the role object. So what we'd like to do is go away and put a proposal to you on how we would easily convert IRT into role objects and just deprecate the IRT object, because there have never been more than a couple of hundred in the database anyway.
Changing the syntax of the role attribute to match the organisation attribute. At the moment, again because of this inter-relation between person and role, the role attribute, the actual name, was always constrained syntactically to be the same as a person name. That also helps perpetuate the view that role is personal. We have been asked if we would convert the syntax to be more like the organisation object, because at the moment if you are the 1, 2, 3, A, B, C company, you couldn't create a role object with that name because you are not allowed to start a name with a number, whereas in an organisation you can. I think we agree that the syntax of the role attribute should be more like an organisation.
Changing the actual person and role names. Historically we used to allow this, but now you have to contact RIPE DBM, the customer support, and say: I have changed my name, could you please do it. Again, I have checked with our legal team; they don't see any issues here because we don't do any validating at all on person and role names. And if you really wanted to change it and we didn't allow you to do it directly, you could just create another person object with the different name and the same data, change all the references to it and delete the old one. So in effect you can easily change the name; it's just a lot of hard work. There is no point in us not allowing you to simply go in and change the names.
Authenticated queries. This is an extension of RIPE NCC Access, where you can have single sign-on. Once you sign on, and if you have got your auth SSO in your maintainer and your maintainer links back to your org, now we know who you are, we know which organisation you are with and we know what resources you have. So we know what data you have got in the database. Now we can start doing much more for you in terms of how you manage and view and handle this data. And one thing is to have some kind of way in which you can view your own data in many different ways. These can be built up with metadata, as private views. For example, if you are an ISP in England and you have got customers in London and Birmingham and Manchester, they might all be more specifics of one allocation object, so doing a more specific query just shows you the whole lot. But maybe you want to only see your Manchester customers or your Birmingham customers, so you would be able to create some metadata within the database that would allow you to view your data in different ways. We don't know how we would do this yet, but it seems a few people like this idea that they can have these private views.
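One way such private views might look: the operator attaches metadata tags to their own more-specific objects and filters on them. Everything below (the tag names, the `city` key, the prefixes) is hypothetical; the talk explicitly says the mechanism is not designed yet:

```python
# Three made-up more-specifics of one allocation, each tagged with a
# city label as private metadata.
inetnums = [
    {"prefix": "192.0.2.0/26",   "tags": {"city": "London"}},
    {"prefix": "192.0.2.64/26",  "tags": {"city": "Manchester"}},
    {"prefix": "192.0.2.128/26", "tags": {"city": "Birmingham"}},
]

def view(objects, **criteria):
    """Return only the objects whose tags match all given criteria."""
    return [o for o in objects
            if all(o["tags"].get(k) == v for k, v in criteria.items())]

# A "Manchester customers" private view instead of the whole lot:
print([o["prefix"] for o in view(inetnums, city="Manchester")])
```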
Also, we can give you unlimited access to your own personal data. You put the data in the database; there is no reason why we should block you or limit you in looking at it and getting it back out of the database. Again, if we know who you are and we know where you are from and what is your data, we can just not have any limits at all on what you look at, if it's your data.
We want to integrate the RIPE database better into other NCC services, things like web updates and others. In the past we have had stand-alone tools and products within the database, within the NCC, but once you sign on with SSO, you should be able to seamlessly move from one service to another. You don't want to go over there and use the LIR Portal with one password, then go over there and use web updates with another password, and sign on to something else with yet another password. Once you have signed on with single sign-on you should just be able to go seamlessly through all these services and products and do all these actions, and they all simply work with the one password.
We have an automated cleanup service which at the moment only deletes person objects and person-maintainer pairs. But you can actually have quite a big cluster of objects with person, role, organisation, maintainer and key-cert where the whole cluster is completely self-contained; it doesn't relate to any operational data whatsoever. At the moment we can't detect these clusters, but we want to rewrite this software so that we look for all these groups of what we call secondary objects that have no link to any primary objects, any operational data, and just after 90 days delete the whole lot.
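Detecting such self-contained clusters is essentially a reachability problem over the reference graph. A sketch under the assumption that references can be modelled as a simple directed graph; the object names are invented:

```python
# object -> objects it references. One primary object (the inetnum)
# anchors MNT-A and PERSON-1; MNT-B/ROLE-B/PERSON-2 form an orphaned
# cluster that no primary object reaches.
refs = {
    "inetnum:192.0.2.0/24": ["MNT-A", "PERSON-1"],
    "MNT-A": ["PERSON-1"],
    "PERSON-1": [],
    "MNT-B": ["ROLE-B"],
    "ROLE-B": ["PERSON-2"],
    "PERSON-2": [],
}
primaries = {"inetnum:192.0.2.0/24"}

def reachable_from(starts, graph):
    """All nodes reachable from the start set (iterative DFS)."""
    seen, stack = set(), list(starts)
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

orphans = set(refs) - reachable_from(primaries, refs)
print(sorted(orphans))  # candidates for deletion after 90 days
```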
The object editors, again, are a throwback to early versions of our software when it was hard for us to build in business rules. Because of the allocation objects, where you can maintain only certain parts and the RIPE NCC maintains the other parts, we built these object editors into the LIR Portal so that you can go there and change your phone number or a contact in your organisation object or your allocation. But this is the wrong way around. Really, we should say: you go to the database and treat these as any normal object, and with the business rules we'll prevent you from changing the bits you are not allowed to change. So instead of going over there to change most of your objects but over here to change this special set of objects, everything becomes a normal object, and if you try to change a bit you are not allowed to, you just get an error message.
Server-side expansion of AS sets. We need to talk to you about what you want. I get the feeling that what you are doing is: you are looking at an AS set, you are expanding that into a list of AS numbers, and you are recursively expanding any AS sets in there. You are ending up with a long list of AS numbers and doing inverse queries to find what routes they originate. This is a lot of queries for you to do on your side. We can do this very quickly and very easily with some SQL queries. Once you identify exactly what it is you want these services to do, we can write code that will do it for you with one single call.
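The client-side workflow Denis describes can be sketched as follows, with an in-memory stand-in for what the server would otherwise answer query by query; the set names and prefixes are made up:

```python
# as-set membership and route origins, as a stand-in for whois data.
as_sets = {
    "AS-CUSTOMERS":  ["AS64500", "AS-DOWNSTREAM"],
    "AS-DOWNSTREAM": ["AS64501", "AS-CUSTOMERS"],  # deliberate loop
}
origins = {"AS64500": ["198.51.100.0/24"], "AS64501": ["203.0.113.0/24"]}

def expand(name, seen=None):
    """Recursively flatten an as-set into AS numbers, guarding
    against membership loops."""
    seen = seen if seen is not None else set()
    asns = set()
    if name in seen:
        return asns
    seen.add(name)
    for member in as_sets.get(name, []):
        if member in as_sets:
            asns |= expand(member, seen)
        else:
            asns.add(member)
    return asns

asns = expand("AS-CUSTOMERS")
routes = sorted(r for asn in asns for r in origins.get(asn, []))
print(routes)
```

Done client-side, every lookup above is a separate whois query; the proposal is to collapse the whole expansion into one server-side call.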
And that's the end of it. So, if you have any questions, if you can remember any of what I said...
ERIK BAIS: On the last part, the last slide. You said the APIs can do that in a single query; is that only going to be through the REST APIs, or also through the WHOIS interface?
DENIS WALKER: Define what you want and we can look at how we can give it to you.
WILFRIED WOEBER: First of all, thanks for the report. I don't know who was first. So...
AUDIENCE SPEAKER: I have just an offhand remark. You mentioned that there were four contributors to the database software. As of a few weeks ago, on GitHub, there is actually a Travis continuous integration setup which makes it very, very easy to contribute to the database, because you don't need to build the software locally any more; the software will be built in the integration system with unit testing. So the barrier to contribute is very low at this moment. And I encourage everybody to join the effort and contribute code.
WILFRIED WOEBER: Thank you. Rudiger?
RUDIGER VOLK: Okay. Let me begin with the question. There was a pointer that, well, okay, on the website there are some statistics. I kind of missed the report about how the operations actually worked. In particular, I would have liked to see some statistic showing how many service-impacting bugs have occurred over time, potentially compared to what has happened, say, over the last 24 months, because a year ago, I know, none of the operational bugs that actually happened had been reported.
DENIS WALKER: Have you had any service-impacting bugs?
RUDIGER VOLK: My question is what the overall number has been... yes, I have had three within 12 months.
DENIS WALKER: Were they all before the last RIPE meeting?
RUDIGER VOLK: There were two before and one after, if you remember. And, kind of, I wonder whether all of the operational bugs have a tendency to only hit me and never hit anybody else. I would be surprised if nothing else happened. But, well, okay, if the report is that it only hits me, that would be interesting data too.
WILFRIED WOEBER: Could I suggest, as the report is going to be available on the website anyway, that the folks try to come up with a couple of slides with that view and we just link to that and make it available on the website.
DENIS WALKER: We list all the bugs that are reported now in GitHub. When we fix them, we create release notes which refer to the bugs that have been fixed in that release. We put it onto the RC environment for anyone to test, and we would hope that people actually went to the RC environment, did queries, did updates, and found these bugs before we actually put it into production. We allow at least two weeks on RC to try things out.
WILFRIED WOEBER: Rudiger, in the interests of time, we are very far into our time allotment. Could you just limit yourself to one or two sentences, or take it to the mailing list, as I suggested. Let's ask the NCC folks to come up with the data you want to see.
RUDIGER VOLK: Okay. First of all, I do not think a referral to bug documentation in GitHub is a valid answer to providing operational information to the users. The users are not all developers that are used to a GitHub environment, and I'm not quite sure whether the bug documentation within the developers' environment will catch the information that is of operational interest. Further on, well, okay, the RC environment has only become available fairly recently, and you might recognise that actually using such an environment also takes effort and preparation. So each time a release candidate became available, I made a small step forward in making use of it; making full use of it is something that is actually in the future only. And it would...
WILFRIED WOEBER: Rudiger, that was the reason why I suggested that we ask the NCC to do a little bit of looking into the recent history and come up with one or two slides, present that, and then we can try to discuss whether this is good enough or not. I'm really sorry, but we are about an hour into the one-and-a-half-hour slot of this Working Group.
ELVIS: We had this chat, Denis, a few days ago. Can you go back to the slide about the history of the database? The question is: you are saying that now you want to provide complete access to the history, meaning you are no longer providing access only from the last creation date?
DENIS WALKER: What we do now is: if an object has been deleted, you can't see anything, it's gone. If an object has been deleted and recreated, you can only see the history back to the last point at which it was recreated. What we are proposing is that for any object, whether it exists in the database today or not, we will show you the full history, as much as we have got.
ELVIS: I'm not sure I agree with it. And let me tell you my reasoning. You will basically decide... well, the group will probably decide, but let's put it this way, I'll give you an example. LIR one has received an allocation from the RIPE NCC and starts making assignments, etc.. At some point those assignments get deleted. The allocation gets returned to the RIPE NCC, because maybe the LIR has been closed, and then gets reallocated to member B. Now, if you look for the history of the assignments made by member B, you will actually at some point go too far back into the past; you will see a history of objects that have actually been made by someone totally different.
DENIS WALKER: But if you are looking at this history, we would assume you actually have some knowledge of what it is you're looking at.
ELVIS: You are looking for the history of an IP block.
DENIS WALKER: It shows you the history of the assignments and the maintainer, the points where the control of that object changed, so to some extent some of this information is already available in the public domain.
AUDIENCE SPEAKER: What I'm basically saying is: right now, if I understand correctly, once the update is done, if I look for an assignment, let's say a /24 out of an allocation, I can browse back through creation and deletion and recreation, back in time, until 1990 or whatever you have history for, which will probably give us the history of that block as it has passed from one LIR to another LIR to another LIR, but it doesn't actually say that it has been... the management of that...
DENIS WALKER: Maybe somebody wants to know who was actually responsible for that assignment two years ago. And that may not be the people who currently manage it, so there is a use case for having access to that information.
AUDIENCE SPEAKER: That is exactly what APNIC was asking for, exactly. Because we have an in-region request for long-term history of resource disposition by a large number of WHOIS consumers, we need this feature.
DENIS WALKER: Basically, we'll leave it down to you guys; you tell us what you want. We can do it either way. We did it arbitrarily this way, we could do it arbitrarily another way. You tell us.
WILFRIED WOEBER: We are going to take this up, because we are having a long list accumulating already of what we want to tackle. There is a contribution from remote participation.
JABBER: Mico Donovan, from Ireland, has a question. If a company has several objects controlled by one maintainer object, and several others controlled by a second maintainer, can the database team help with a bulk migration of those objects to a single maintainer object?
DENIS WALKER: Yes, we can, but this is basically a standard update. You get a copy of your objects, you put them in an editor, you change the maintainer, and as long as you have got the authorisation, you drop it back into syncupdates and submit an update. Sometimes we do help people, but they need to contact customer support via RIPE DBM and we'll look at it case by case. We don't have a generalised process for doing bulk updates for people.
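The "standard update" Denis describes (edit the object text, swap the maintainer, resubmit) can be sketched as a small text transformation; the maintainer names and object below are made up, and a real submission would still go through syncupdates with proper authorisation:

```python
def swap_maintainer(rpsl_text, old_mnt, new_mnt):
    """Replace mnt-by lines referencing old_mnt with new_mnt,
    leaving all other attributes untouched."""
    out = []
    for line in rpsl_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "mnt-by" and value.strip() == old_mnt:
            out.append(f"mnt-by:         {new_mnt}")
        else:
            out.append(line)
    return "\n".join(out)

obj = ("inetnum:        192.0.2.0 - 192.0.2.255\n"
       "mnt-by:         OLD-MNT\n"
       "source:         RIPE")
print(swap_maintainer(obj, "OLD-MNT", "NEW-MNT"))
```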
WILFRIED WOEBER: Actually, for a new version of the documentation, this may be one of the examples of how to do stuff with this machinery.
DENIS WALKER: Yeah.
WILFRIED WOEBER: Thank you to both of you for the update and for walking us through all the activities. And I'd ask the media people to put up the agenda again, because what we actually did already is cover quite a few of the things that I had listed as individual items.
One question that I might want to ask is: is there anything on the radar we should be aware of with the implementation of the legacy services stuff and that sort of thing, other than just relabelling the individual objects with legacy? Is there anything that you think is relevant at the moment? But it's going to be on the release candidate environment anyway.
DENIS WALKER: In the RC environment we have added the mandatory status attribute to all aut-num objects, and they have all got the correctly generated status. We have added the sponsoring-org attribute as well in RC, but because we don't want to identify the actual sponsoring orgs at this stage, we have set them all to a dummy value; but you can see which objects will have one.
WILFRIED WOEBER: So the objects are dummified, but the attribute is still in my object?
DENIS WALKER: If you have an end-user resource, we have put the sponsoring-org attribute in there, but it points to a dummy organisation; we don't identify who the sponsoring org is in the RC environment. We have changed all the legacy objects, and that means all the top-level ones and all the more specifics: no matter what status they had before, whatever you thought the status of that object was, it is now all marked as legacy. Where possible we have tried to make the software tolerant to you submitting objects with mistakes, so if you don't include the mandatory status attribute, or you include the wrong value, we just correct it. If you miss off the sponsoring-org attribute, we retain what was there before, because only the RS can actually set this value. We have tried to make the software as tolerant as possible to the likely mistakes that you will make when you are submitting updates to these objects.
WILFRIED WOEBER: Thank you. Any other comment? Rudiger?
RUDIGER VOLK: When you do the RC for that kind of extension, keep in mind that documenting how the changes will show up in the RC needs preparation by people who want to test. So that information probably should be available very early.
DENIS WALKER: This information is in the updated query and update reference manuals that have been released in a draft version for the RC environment.
RUDIGER VOLK: Synchronised with the environment?
DENIS WALKER: We have released this into the RC environment and we have released draft documentation which explains the new version of the software and these new attributes.
RUDIGER VOLK: That means the testing time is just since publication of the whole thing.
DENIS WALKER: They were published a couple of weeks ago, a couple of weeks before it goes into production. So that's at least a month.
WILFRIED WOEBER: Okay. Thanks again. So, just to make sure that we follow some sort of formal procedure in documenting decisions.
As you said, you want to get rid of the "referral-by". We had a discussion on the mailing list, and on the mailing list there was consensus that there is no good reason to keep that in the database schema. So, just to have it properly noted in the minutes: there is consensus to remove that stuff, and we are formally asking the RIPE NCC to go ahead with the implementation, to rip that out and put it to rest. And the same holds true for the "changed" attribute. There is consensus that there is no longer any good reason for it... yeah, I think so... you are of a different opinion... and that's the reason why we are talking here. So, what's your issue?
RUDIGER VOLK: The impact... I don't think that the impact on existing provisioning chains and other tools and software interfaces is really clear for that.
WILFRIED WOEBER: Okay, so can we have that statement on the mailing list please, and I withdraw the proposal at this moment to ask the RIPE NCC to go ahead with ripping it out. But we need to make progress, so at least we are getting rid of one of those.
The next thing that I wanted to have dealt with is the...
AUDIENCE SPEAKER: Sorry, Wilfried, which of the two changes are you now withdrawing?
WILFRIED WOEBER: The removal of the "changed" attribute. So we're going to rip out the "referral-by", and we are having another round on the mailing list with regard to "changed". Okay?
So the next thing was 2012-07. We touched on that one; I don't think there are any more issues to be discussed. Any comments? No.
That gets us to the next one, the status of the IETF WEIRDS discussion and development, and I'd ask Ed to walk us briefly through that one.
ED SHRYANE: Thank you, Wilfried. So to start with, I'd like to go over the current WHOIS protocol. It's been around for a very long time, but there are certain issues with it. First of all, it's quite a limited data model. It's defined by RPSL, which is essentially free-form text. It's described by an RFC, and you can read through it, but it's quite complicated to parse.
It's not internationalised: originally it's US-ASCII, and the RIPE WHOIS database is Latin-1, 8-bit. There is no authentication in the protocol. There is no redirection; so, in the RIPE database, we have mirrors of other data sources, so that we know which resources are assigned to other RIRs, and we can mirror that.
There is no bootstrapping, so we can't tell the source for data. It's command-line orientated, on port 43, and the query syntax is not defined as far as I know, so each implementation is free to change the query syntax.
Error messages are not defined; they are returned as comments, so in fact all metadata is returned as comments along with objects. And finally, the channel is not secure: all communication on this port is in clear text.
So, WEIRDS is, I think, an effort to solve some of these problems. It is a Working Group at the IETF, formed in April 2012. There is a specification describing the data model, and there is a protocol called RDAP to actually query and respond to queries. There is a URL on the presentation to go to for more information on the Working Group and WEIRDS.
So, the benefits of the RDAP protocol: it uses existing standards that are used for an awful lot of other things. It uses HTTP as the transport protocol, HTTPS for the secure protocol, and JSON as the data format. HTTP allows us to use query parameters in the URL, and there are standard status codes in the response. It supports transport security, authentication and redirection. And the protocol uses a RESTful URI scheme.
Which object types can be queried? It is a bit more limited than the WHOIS protocol. It's focused on resources and on entities. So, in the RIPE database, we support aut-num, inetnum and inet6num as resources, and entities are organisation, person and role objects. In the protocol there is also support for name server and domain, but we don't implement name server, and we only have reverse domain objects in the RIPE database.
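The RESTful URI scheme puts the object type and key straight into the URL path. A sketch of how the RIPE object types listed above might map onto RDAP paths; the base URL and the exact mapping are illustrative, based on the draft RFCs' `/ip`, `/autnum` and `/entity` segments, not a guaranteed description of the beta service:

```python
BASE = "https://rdap.db.ripe.net"  # assumed endpoint, for illustration

def rdap_url(object_type, key):
    """Map a RIPE database object type to a draft-RDAP URL path."""
    paths = {"inetnum": "ip", "inet6num": "ip",
             "aut-num": "autnum",
             "organisation": "entity", "person": "entity", "role": "entity"}
    return f"{BASE}/{paths[object_type]}/{key}"

print(rdap_url("inetnum", "193.0.0.0/21"))
print(rdap_url("aut-num", "3333"))
```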
We have had a beta deployment of this protocol since July last year; it's available on this link. It uses live data, so it's completely up to date: it is based on up-to-date production data. And it's based on our existing WHOIS software, which is Java and open source.
We have been participating in the IETF Working Group interop testing, so we are testing our software for correctness against a reference test environment. And we have also been collaborating with APNIC on the implementation; they are using the same WHOIS software and we have been collaborating on implementing features.
Very quickly, I'll go through an example WHOIS response. So this is the WHOIS response that you are all familiar with: plain text. And to compare, the RDAP response uses JSON, so all attribute names and values are in there, and where you have multiple values you have arrays. It's slightly more complicated to read, but I think it's easily parsable by software, and there are a lot of libraries out there to parse JSON format data.
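The parsing point can be shown with the standard library alone. The response below is a cut-down, hand-written RDAP-style document, not real RIPE output:

```python
import json

# A hypothetical, abbreviated RDAP-style response for an IP network.
response = """{
  "objectClassName": "ip network",
  "handle": "192.0.2.0 - 192.0.2.255",
  "entities": [
    {"handle": "EXAMPLE-ROLE", "roles": ["administrative", "technical"]}
  ]
}"""

data = json.loads(response)            # one call, no hand-rolled parser
print(data["handle"])
print([e["handle"] for e in data["entities"]])
```

Compare this with port-43 output, where the same information has to be recovered by splitting free-form text line by line.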
So, the WEIRDS Working Group progress: there are seven draft RFC documents, and I believe two of these are already done. There is a status page on the Working Group page. Two are still having significant changes at the moment, since the last IETF meeting in March in London; the remaining issues are around bootstrapping and querying. But I think our service is mostly done, so I don't foresee any significant changes coming out of the draft RFCs.
Okay, so any questions on the RDAP implementation?
WILFRIED WOEBER: Thank you for the update. Any comments?
AUDIENCE SPEAKER: Job Snijders. What I am missing today in the RDAP implementation is that if you query for aut-nums, it doesn't contain anything useful. The actual routing policy that we have in RPSL, exposed through the WHOIS protocol: I would very much like the same data to also be available through other transport means. But today the data is missing and I don't know why.
ED SHRYANE: I think the Working Group chose... there is one draft RFC on object attributes, and I think the lowest common denominator of attributes between the RIRs was chosen to be part of the protocol. So...
JOB SNIJDERS: But every RIR supports import, export, mp-import and mp-export.
GEORGE: Unfortunately no, that is not true. To short-circuit this presentation: Andy Newton, who is one of the principal authors, is present at the meeting, and I think you would have a much more constructive conversation with him about how to get RPSL qualities into RDAP. It was discussed, and it was dropped as an expediency in the circumstances, but it makes sense to have a real discussion about the utility of putting them in. It should be done properly, and he is here, so you can talk to him.
WILFRIED WOEBER: So what's the process? Do we have to go back to the IETF? Is this protocol work or is this just implementation work?
AUDIENCE SPEAKER: It's a definition of a JSON profile. It would be just an addition of data.
WILFRIED WOEBER: Just agreeing on what it looks like. Thanks for bringing it up. Any other comments or stuff? No? Thank you very much for the second presentation. Sorry, my apologies.
AUDIENCE SPEAKER: Sandy Murphy, Parsons. I am curious as to what the issues are in the bootstrap protocol. That's kind of the part where you have to find out who the authoritative server is to answer a particular query.
ED SHRYANE: That is being discussed on the mailing list at the moment, and I think most of the discussion is around names and not numbers. In terms of numbers, we already know which RIR is authoritative for a resource; we can tell through the delegated stats, and that's how we have implemented it. So we already know who to redirect to. But for names, I think because there are so many TLDs out there, it needs to be done properly, because otherwise it is quite complex. That hasn't been resolved yet.
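The number-resource side of bootstrapping that Ed describes amounts to a longest-match lookup in a delegated-stats-like table. A minimal sketch; the two-entry table below is a tiny made-up excerpt, not real delegated stats, and a real implementation would prefer the most specific match:

```python
import ipaddress

# Stand-in for a delegated-stats table: address space -> authoritative RIR.
DELEGATED = {
    "193.0.0.0/8": "ripe",
    "8.0.0.0/8": "arin",
}

def authoritative_rir(address):
    """Return the RIR whose delegated block covers the address,
    or None if the table has no match."""
    addr = ipaddress.ip_address(address)
    for prefix, rir in DELEGATED.items():
        if addr in ipaddress.ip_network(prefix):
            return rir
    return None

print(authoritative_rir("193.0.6.139"))  # a server could redirect on this
```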
AUDIENCE SPEAKER: I'm not quite up on the WEIRDS discussion on this point, but the last I checked, the initial query was going to go to IANA. Is that still somewhat the case?
ED SHRYANE: I think it's still under discussion but specifically for us for the RIPE database, it's not really relevant for us, because we're more concerned with numbers and not names.
WILFRIED WOEBER: And I presume, even if the default, if you don't know better, would be sort of the root of the tree and IANA, I guess you could still throw your first query at the server of the relevant RIR if you happen to know what you are querying for. But I'm with you, I didn't have enough time to actually get up to speed on all of the discussions on the WEIRDS mailing list.
ED SHRYANE: But at the very least, for us, we do know about the number resources and we do redirect to the other RIRs.
WILFRIED WOEBER: Again, thank you. And next is Denis with the personalised authorisation and authentication.
DENIS WALKER: Basically, this is an idea for an improvement to the way we manage authorisation in the database.
One of the problems that we find, particularly with a lot of new users of the RIPE database, is that the concept of a maintainer object is definitely not intuitive. It's some abstract box that holds passwords of unidentifiable people. The maintainer has no obvious link to the people who maintain the data in the database.
What we see on the training courses, and when people are filling in request forms for Registration Services, is that there are literally hundreds of users every year who misunderstand this concept of a maintainer. The questions we get through RIPE DBM about authorisation haven't changed in the last ten years. People are still confused: what is a maintainer, how do I maintain this data, don't I maintain this data? And, you know, the natural thing for people to think is: yes, I maintain this data, so therefore it should be maintained by Denis. Or, if you are more enlightened, you realise you got a NIC handle, so you think it's maintained by RIPE. That is how, as we see on the training courses, people naturally assume these attributes work. So what we'd like to suggest is a process that works the way people naturally assume it already does.
So, the proposal is quite simple; it's the first three lines. We move the auth attribute to the person object, you group person objects into a role object, and you use the role object as a maintainer. This is the way people naturally assume it works anyway. As a transition, or as a proof of concept, we could do this in parallel with the current system with the maintainers, if we added an optional attribute to the person and allowed either a reference to a maintainer or to a role object, and then later we look at getting rid of the maintainer objects.
So, what would this actually mean? In engineering terms it's a very small, simple change, but in terms of understanding the data in this database and how it works, it is actually quite a massive improvement. If you are describing to somebody how your data is maintained, you might say: my colleagues and I take on the role within our organisation to maintain this data. Translating that into database speak: "my colleagues and I" is a set of person objects with auth attributes; "take on the role" means you are grouped into a role object; "within our organisation", so the role has an org attribute referencing the organisation; "to maintain this data", so in the "mnt-by:" you reference that role object. This is the natural, intuitive way of thinking about authorisation, and it doesn't include these abstract maintainers.
The benefits of doing something like this: it's intuitive, it's the way people naturally assume it works if they don't have 20 years' experience of working with the database, and you can directly translate your business setup into the database design. It fits well with personalised authorisation using SSO. The whole concept is that you sign in with your credentials, so we know who you are, we know what organisation you're from, we know your resources, so we know what permissions and privileges you can have in the database and we know what you actually maintain.
It's also a lot easier to explain and easier to understand. There would be fewer problems, fewer questions to our customer support, and fewer business delays. Now, I know for some of you who have been around for a long time and are really experienced with this database, you probably can't see what the problem is. But if you talk to our training team, you will know what the problem is, because they see it every single time they do a training course on the very basics of the RIPE database: people simply do not understand maintainer objects.
As a by-product, it also gets rid of the chicken-and-egg situation, because right now, if you are starting off from scratch in the database, you need to create a person-maintainer pair, so we have to have a separate script designed to allow you to create those first two objects. With this, the auth is in the person object; it's self-maintaining. So there is no chicken and egg.
The actual authorisation policy for an update doesn't change. You find the relevant maintainer attribute, "mnt-by:", "mnt-lower:", whatever, the same as you do now. You find the referenced role objects in those attributes, whereas now you find the referenced maintainer objects. There is an extra step for the software, which is to find the person objects referencing those roles. You get a list of credentials, the same as we do now, and if any of those credentials matches, it's authorised, the same as it is now.
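The resolution steps just described can be sketched in a few lines of Python. This is a toy illustration, not RIPE NCC code: the in-memory objects, handles and credential strings are all invented, and real credential matching (passwords, PGP, SSO) is reduced to string comparison:

```python
# Toy model of the proposed check: "mnt-by:" names role objects,
# person objects reference the roles they belong to and carry the
# credentials ("auth:") directly. All data here is invented.
ROLES = {
    "EX-ADM1": {"org": "ORG-EX1"},
}
PERSONS = {
    "JD1": {"roles": ["EX-ADM1"], "auth": ["SSO john@example.com"]},
    "MS2": {"roles": ["EX-ADM1"], "auth": ["SSO mary@example.com"]},
}

def collect_credentials(mnt_by_roles):
    """Extra step: find every person object referencing one of the
    roles named in the object's mnt-by attributes, and gather their
    auth credentials."""
    creds = set()
    for person in PERSONS.values():
        if any(role in mnt_by_roles for role in person["roles"]):
            creds.update(person["auth"])
    return creds

def is_authorised(mnt_by_roles, supplied_credential):
    """Final step is unchanged from today: the update is authorised
    if the supplied credential matches any collected credential."""
    return supplied_credential in collect_credentials(mnt_by_roles)
```

For example, `is_authorised(["EX-ADM1"], "SSO john@example.com")` succeeds because John's person object references the role, while an unknown credential is refused; only the middle step (role to person) is new compared with the maintainer-based lookup.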
So, questions?
WILFRIED WOEBER: Support statements? Criticism? George.
GEORGE: I very strongly support this move. I have wondered for many, many years why the maintainer mechanism, the chicken-and-egg problem and the intrusion of aspects of data integrity that have no real relationship to RPSL got carried into this framework; it confused the heck out of me. And I think adopting a much more natural model, where authorisation from external sources like SSO tokens, which is essentially where we are going, portals and other frameworks, and trust sources external to this will in fact be used to authorise updates, makes it much more likely the maintainer will gently drift into the twilight. So I support this. I think this is an extremely good, if radical, design. I like it.
WILFRIED WOEBER: Any other statement? One gentleman from the audience expressed general support for the idea, and I think it is a really interesting new look at the whole machinery. And we have Rudiger walking to the microphone.
RUDIGER VOLK: A clean explanation of what the new data model is, and how the transformation is going to be done, is needed, and then we actually have to scrutinise the damn thing.
WILFRIED WOEBER: Which gets me to the next item on the agenda. So, thank you, Denis. Thanks for this lively explanation of why we should move away from the good old stuff we are used to.
And to wrap it up, we are left with a couple of minutes to the coffee break.
I think it's really appropriate that we had quite a bit of discussion early on during this Working Group session with the review of the action items, because we have identified quite a bit of interaction that needs to be taken care of with other Working Groups and other aspects. There are a couple of other things which came up during the previous meeting in Amsterdam in the hallways, and there was quite a bit of informal discussion during the coffee breaks already this week. And we ended up with a couple of additional loose ends and open issues that should, in capital letters, be taken care of between now and the next meeting. One of those is the geolocation capability that was built technically into the RIPE Database, but up till now we haven't had any framework or any concept of who is expected to register and maintain the data, and on the other end we don't have any expectations or any write-up of what the consumer side of the game would be.
So, up till yesterday I would have suggested proposing to withdraw and remove the whole thing, but there is other stuff going on in the MAT Working Group with geolocation, so I'm not going to propose today to deprecate the geolocation functionality. But at least we would really like to hear from the community if, when and for what purpose anyone is using it, either as a source of data, or whether anyone or any service or application is using it on the consumer side, because the way we are having it right now, it's just another notch in this whole system where we don't know how to really use it, and it's probably more confusing than helpful. So this is the geothingy.
There is another thing which has been around for many, many years, and that's the white pages service. Back then, when we asked the RIPE NCC to implement it, the world was a different shade, a different colour, and we were under the impression that people being active and visible in the Internet industry should also have a person object in the RIPE database, even if they were not directly responsible for any of these resources. But there are lots and lots of services around these days where you can advertise your existence, your contact data, your history and your CV, and I think that we no longer need that capability in the RIPE database. This is directly related to the proposal to remove clusters of unreferenced objects, because all of those entries in the white pages service are more or less unreferenced information in the database. But just food for thought. This is going to come up again.

And as you have read and heard, there is a long list of open issues which the RIPE NCC is pushing ahead of themselves, and we as a Working Group are actually not getting around to really having an in-depth chat, or having enough time in a managed environment to exchange our ideas and expectations. So we, and it's a couple of activists, including the contact people from the NCC, would like to propose that we install some mechanism between now and the next meeting where interested people can flock together, probably electronically in the beginning, and come up with a list of open issues, sort that into bins and problem areas, and come up with a consolidated problem statement which is sorted according to interaction with other Working Groups or with other consumer parties. And this might be something which takes the shape of an interim face-to-face Working Group meeting between now and the next full RIPE Meeting, probably in Amsterdam. This is just an early warning; we are tossing around that idea.
It's not yet fixed, but I'd like to get feedback, either formally during the last minutes here during the Working Group session, or in the hallways, or on the mailing list, whether you think this is a good idea, and of course whether any one of you would be interested to climb in, to flock together with some of us and try to come up with a more modern view...
AUDIENCE SPEAKER: I volunteer. Rudiger, you too?
RUDIGER VOLK: I'm not quite sure; I have been committed to other things and I don't think I can escape. What occurs to me is, you mentioned the long list of things that the RIPE NCC, and Denis in particular, have been reporting. I wonder whether we should look at creating somewhere an inventory of the open suggestions that we can actually, openly and item for item, check and work on. I think just taking Denis's report slides and going through them, and using the bullets that are fixed there as the permanent record of what's open, quite certainly is not going to work very well.
WILFRIED WOEBER: It's definitely not the last word, but it might be a start.
RUDIGER VOLK: I know bullets that are missing, but kind of opening up a systematic collection that we can work on, I think, would be extremely helpful.
WILFRIED WOEBER: Noted. Any other comments? Okay. If this is not the case, then is there any other business that we should take care of? Not the case, so thanks for being here, thanks for the lively discussions and thanks for the contributions. And although I expected quite a different result, we are pretty close to our time allocation. Thank you, and see you at the next meeting or on the mailing list.
(Coffee break)
LIVE CAPTIONING BY MARY McKEON RMR, CRR, CBC
DOYLE COURT REPORTERS LTD, DUBLIN, IRELAND.
WWW.DCR.IE