These are unedited transcripts and may contain errors.

OpenSource Working Group

Wednesday, 14 May, 2014, at 9 a.m.:

MARTIN WINTER: Good morning, everyone. It's nice to see so many faces here, I saw a few people yesterday I wasn't expecting to show up any more. So I hope you had enough coffee, you are feeling good again, and if you are not feeling that good, remember that OpenSource is good for you.

So we are waiting maybe another minute or so and then we will start off here. There is one more person back there who wants to join in.

It's nice to see that many people, especially because the second choice seems to be Address Policy, and as you heard yesterday, there is nothing to talk about there any more because v4 is gone and for v6 there is no discussion.

I think we can start. So my name is Martin Winter. I work for ?? I am one of the Working Group Chairs. This is Ondrej Filip, he is the second Working Group Chair. We welcome you this early morning for the Working Group sessions.

Let's start right away with the agenda. One thing we tried to do a bit differently this time is that we wanted to add some kind of lightning updates. We realised that some talks, at 20 minutes or half an hour, seem to be a bit on the long side, and we wanted to try out how it works to give shorter slots for updates on different projects.

So obviously, when you listen today, we are very interested in feedback. Feel free to contact us as the Chairs afterwards, or just send your approval or ramblings about what we can do better to the mailing list, so we can try to improve that.

So, from the previous minutes, I hope you have seen them; for the ones who have not attended: RIPE does a very beautiful job of having all the transcripts and everything. By the way, thanks for the lovely transcripts; I am always amazed how they can write all of that down. It is all online, so you can go and read them and watch the recordings; there is some interesting stuff there.

ONDREJ FILIP: I just want to see if we have a scribe, which is Anand, thank you for that. And we do not have a chat monitor. Is there anybody who could help us with that, actually? Thank you. In case there is some question in the chat room, we have somebody to raise it. Thank you so much. And we don't have any action points, is that true? So without further ado we can go to the presentations, and first of all it's my colleague Ondrej.

AUDIENCE SPEAKER: I have a quick ?? is it possible to turn off the elevator music? It's a good question. I have no answer.

MARTIN WINTER: You want it turned off, or a different song?

MARTIN WINTER: So we have a full programme, so let's not delay it any more; let's start with Ondrej's colleague talking about Knot DNS.

ONDREJ FILIP: He is going to talk about some DNS and I announce some fancy ??

MARTIN WINTER: Sorry, I forgot something already. We have one more slide to talk about the lightning talks. I want to give you an overview if you haven't seen it. So, we have about four short talks this time, which you see listed here; you see them on the agenda if you are looking at it. We have one on the getdns API library; one about e-mail encryption, a very short introduction part, and I am very curious what we can learn in five minutes; a quick Kea update; and an ONIE project update. So I hope it will be interesting.

ONDREJ SURY: So, hello. I work for CZ.NIC and this is just ?? well, not basically the update, but an update on what we are working on. So this is just a short slide because I haven't spoken about Knot DNS for some time. We got 1.4 out in time, and we have DNSSEC zone signing, which is really basic right now, and it is very simple to enable: you just specify the key directory there and enable DNSSEC, and if you have the right keys in the key directory, then it starts working automatically.

We also added IDN support to the Knot DNS utilities and lowered memory consumption; that is something we are still working on, and I will show you some numbers later.

What is work in progress right now for Knot DNS 1.5: we should release at the end of this month if everything goes well. We have something we call dynamic modules that hook into different places of the query and response processing, and you can do some really cool stuff with that. We also did lots of refactoring under the hood, which is not really visible to the end users, but there is some stuff that is visible: lower memory consumption, because of some new memory structures, and improved processing speed comes with it as well. And for Knot DNS 1.6 we have planned improved DNSSEC support, which is to come probably after the summer.

The dynamic modules, well, they are basically hooks into the query and response processing in different places, and we just have one so far. But you can do some funny stuff with that, like split horizon or GeoIP; we don't have those for you, but you can write them. We will probably have something like that, because we were asked to do that stuff, and you can have something like a poor man's HA: a module that monitors the availability of some servers and puts the IP addresses dynamically into the responses. And something we already have is reverse and forward resource record synthesis, so, well, basically you generate the records on the fly. I think we were asked by a big telco to do that, and they are already running it in, well, sort of production.

So, what is that? The IP address space is vast and it's not possible to have all the PTR records in the DNS server by hand or pre-generated, but customers want to send mail from DSL lines, at least that is what I was told. Basically, the current best practice is that MTAs check for reverse records and reject e-mails if the address doesn't have a reverse record. And the customers are complaining, right? Well, customers are always complaining. So, we have created something like this, the synth record; there is more about it in the manual. You can write this into the zone configuration: the synth record, and you choose whether it is forward or reverse, a prefix, which is just some text, the TTL of the records, and then there is an address and mask. We don't have DNSSEC signing for it yet, but it will come with 1.6, with the improved DNSSEC support.

So an example configuration could look like this: for the example zone, you have two forward synth-record configurations, one for IPv6 and one for IPv4, and then for each reverse zone you configure the module with the synth record in reverse; again there is a prefix, and these are the ranges it generates the records for.
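As a sketch only, such a configuration might look roughly like this. The directive names and the module parameter order below are reconstructed from the description in the talk rather than taken from the Knot DNS manual, so treat every identifier here as an assumption and check the real documentation before use:

```
# Hypothetical sketch of a Knot DNS synth-record configuration.
# Directive names and parameter order are illustrative, not authoritative.
zones {
  example.com. {
    file "example.com.zone";
    query_module {
      # forward synthesis: <type> <prefix> <ttl> <network>
      synth_record "forward dynamic- 400 2620:0:b61::/52";
      synth_record "forward dynamic- 400 192.0.2.0/24";
    }
  }
  # one reverse zone per configured range
  1.6.b.0.0.0.0.0.0.2.6.2.ip6.arpa. {
    file "reverse.zone";
    query_module {
      # reverse synthesis: <type> <prefix> <zone suffix> <ttl> <network>
      synth_record "reverse dynamic- example.com. 400 2620:0:b61::/52";
    }
  }
}
```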

So the responses look like this. So, if I ask ?? maybe I should put it in reverse. If I ask for the reverse record, it's just shortened down here because it's too long, it will return the generated record, which is not put into the zone file. And it plays well with the zone file, so if you have some static records you can put them there, and this comes into play only when the record is not found in the zone file, so it's a kind of fallback.
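To make that fallback behaviour concrete, here is a small Python sketch (not Knot code; the function names and the zone dictionary are invented for illustration) of how reverse synthesis can derive both the ip6.arpa query name and a synthetic PTR target from an address, with static zone entries taking precedence:

```python
import ipaddress

def synthesize_ptr(ip: str, prefix: str = "dynamic-", suffix: str = "example.com."):
    """Derive the reverse-DNS query name (RFC 3596 style for IPv6) and a
    synthetic PTR target from an address, the way a synth-record style
    module might. All names here are hypothetical examples."""
    addr = ipaddress.ip_address(ip)
    # The reverse name the client would query, with a trailing dot.
    query_name = addr.reverse_pointer + "."
    # A synthetic target: prefix + the address with separators made DNS-safe.
    label = prefix + addr.exploded.replace(":", "-").replace(".", "-")
    return query_name, label + "." + suffix

def answer(zone: dict, ip: str):
    """Static records in the zone win; synthesis is only a fallback."""
    qname, synthetic = synthesize_ptr(ip)
    return qname, zone.get(qname, synthetic)
```

The key point is the last line: the synthesized record only appears when the lookup in the static zone data fails, so hand-maintained PTR records coexist with generated ones.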

I think this is of interest to ISPs who do IPv6, because I am not aware of any other OpenSource product that can do that for you right now. I think more will in the future, but right now we are the only one.

So, here are some memory improvements. Just ignore the other servers; I want to show you the improvements we did for Knot DNS, but I am just not able to cut them from the graphs.

So for the memory, for the 10K small zones we cut it down; this is Knot DNS 1.4 and this is 1.5, so we really did a lot of improvements there. Then this is the TLD signed zone's resource records; the improvement here is not that big, but still, we did some improvements.

For the response rate, this is 10K small zones; the light blue is Knot DNS 1.4 and the dark blue is the 1.5 alpha. This is on the 10-gig interface, so as you see, we did squeeze some more performance out of the server.

Same for the signed TLD; well, there is not much room for improvement there, obviously, because the big players are pushing the hardware boundaries, but this is, well, half a million queries per second on Knot DNS 1.5.

I want to show you this slide. It's IPv6 and we have all the servers here. What is really interesting is how much worse the IPv6 performance on the Linux kernel is compared to IPv4. And it hits all the name servers, as you can see; there is nothing Knot-specific here, and this is not even to compare the DNS servers, but just to show you that the performance of IPv6 is not up to par with IPv4.

So, the improved DNSSEC support: we plan to separate the DNSSEC code into a separate library that can be used by other projects after that. We want to switch from OpenSSL to GnuTLS. It's not Heartbleed related; we decided to do the switch because every DNS server is using OpenSSL right now, and for the sake of diversity, and also for the sake of supporting hardware security modules, because the PKCS#11 support is better in GnuTLS, or at least easier to use. We did speak with people about that, and that was one of the recommendations; well, the first one was NSS, but we are not going to do that. And we also have something like a key and signing policy, inspired by the good OpenDNSSEC people, so we will have something similar in Knot DNS. What comes with that is online signing, and again we can do some neat things with that; one of them is minimally covering NSEC3 responses, so we can bring even more privacy into NSEC3. I am sure you are aware of the attacks on that: you are mapping the small space to the big space, and if you have really fast hardware then you can find the domain names even when you are using NSEC3. So this is something that could prevent those attacks on NSEC3.

But you need online signing, because for each query that hits a non-existent domain, it will generate a new NSEC3 record that is just around the query, and you need to sign it on the fly. Also, it will add DNSSEC support for the dynamic modules, the synthesized resource records.
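The NSEC3 hashing that makes this attack possible is cheap to compute: RFC 5155 defines it as iterated SHA-1 over the wire-format owner name, with the salt appended at every step, then Base32hex-encoded. A minimal Python sketch, written from the RFC's description rather than taken from any DNS library:

```python
import base64
import hashlib

def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
    """NSEC3 owner-name hash per RFC 5155: iterated SHA-1 over the
    wire-format (lowercased) owner name, with the salt appended at
    every step, then encoded in Base32hex."""
    labels = [l for l in name.lower().rstrip(".").split(".") if l]
    wire = b"".join(bytes([len(l)]) + l.encode("ascii") for l in labels) + b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    # Python's b32encode uses the standard alphabet; translate to base32hex.
    std = base64.b32encode(digest).decode("ascii")
    table = str.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                          "0123456789ABCDEFGHIJKLMNOPQRSTUV")
    return std.translate(table).lower()
```

Because this is so cheap, an attacker can hash a dictionary of candidate names offline and compare the results against the NSEC3 chain published in a zone; a minimally covering response generated per query denies the attacker those stable chain points, at the cost of signing on the fly.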

So, that is all. This is ?? well, what other DNS server has the bread? You know, I wanted to show you that.

AUDIENCE SPEAKER: That is the challenge that I will happily ??

ONDREJ SURY: I know. We were the first DNS bread, you know. So, if you have any questions, shoot.

PETER KOCH: No affiliation. Can you go one slide back, please. And warning, technical question: on the minimal NSEC3 encloser, are you saying you are generating ?? if you are generating things on the fly, there are two RFCs that define how you can generate NSECs and avoid all the NSEC3 hashing and the response overhead.

ONDREJ SURY: Well, this is the plan, you know. We haven't started coding that yet, so yeah, imagine there is a slash, so NSEC/NSEC3, that will help you. Thank you for the remark.

PETER KOCH: The on?the?fly signing is interesting to explore with today's CPU power that might actually be helpful in total and probably even more helpful if you have less to calculate. Thanks.

PATRIK FALSTROM: Not Netnod, but a small DNS hosting company that I am also sort of helping to operate. So, with your automatic signing, are you going to have callbacks, for example at the times when you are doing a KSK rollover? Because it might be the case that you would like to send transactions to your registrar, and it would also be good to have that as sort of part of the configuration, because in some cases you would like to run it on the server without passing the actual keys to the registrar, but in some cases you want to.

ONDREJ SURY: No, not yet. Can you get back to us with the idea?

PATRIK FALSTROM: Absolutely. I had this discussion with DNSSEC people as well and I think it's a feature that is something that is sort of needed in these ?? this kind of software.

ONDREJ SURY: It sounds very useful for me so we would like to have a chat.

PATRIK FALSTROM: We should talk off?line then, thank you.

CARSTEN STROTMANN: A small clarification question. The automatic DNS signing, does that include automatic refresh of the signatures?



AUDIENCE SPEAKER: Alex from France, an ISP regional network. I have already asked you the question at JRES in France, I don't know if you remember; do you plan to develop in the dynamic modules something like a REST API to get some information ??

ONDREJ SURY: Well, yes ??

AUDIENCE SPEAKER: Also to drive and to ??

ONDREJ SURY: We are certainly looking into developing some API on top of the DNS server, but I am not sure it will be inside it; that is not decided yet. But, well, basically, you are the first one to ask the question. You need some provisioning, because the configuration files are sometimes not flexible enough, so we would like to have some DNS server API on top of that which can be used from REST or something like that. But if we are doing that, we would like to do it more generally, so you can plug more DNS servers into it; so, well, it requires some more planning and we don't have resources for that this year, so we have postponed the project to next year, but we are going to develop something like that. Definitely.

AUDIENCE SPEAKER: On the other side, we also need statistics, so maybe we can get information from a REST API or something.

ONDREJ SURY: For statistics, I think there is an effort called dnstap, and we were just sitting with them last week; we have a sort of implementation of dnstap in the server, and that could definitely be used for statistics because it provides unified logging from the server. So, yes, that is a good idea and we will pursue it, definitely.

ONDREJ FILIP: Thank you very much, thank you very much, Ondrej.

And now I would like to invite Shane Kerr to the stage. Shane has a very interesting talk about a project that was really in the focus of many DNS operators and providers, so I hope he will explain to us some of the background of this project. Thank you very much.

SHANE KERR: Thank you. So, yes, I am Shane Kerr. The talk is listed in the programme as "the rise and fall of BIND 10", and that was a mistake when I submitted it; I had in mind Gibbon's famous 18th-century work about the Roman Empire, which details all of the historical and sociological and economic and political reasons for the inevitable fall of the Roman Empire. And so with this talk, as we go through it, it will all become clear that it was unavoidable that BIND 10 would fail.

So this is very much a personal story from my point of view. I was involved with BIND 10 more than anyone else, from the beginning to the end. I am not here on behalf of my former company or my current company, or representing anyone else. All these statements are true as far as I know them, but I make no promises, no guarantees and no warranties, and so I hope it's fun. Let's see what happens.

All right. The talk is divided into two parts: in the first one I am going to go through the history, what happened, why it happened, as far as we know, and things like that. And then the second part will probably be more interesting and useful for this forum, where I try to note some observations about what happened and, even more importantly, some recommendations. But we will start off with the history.

So before history there is always pre-history. We are talking about BIND. BIND is really, really old. In the beginning, BIND was DNS: if you wanted to run a DNS server, you ran BIND, and because it was OpenSource, and kind of came with your Unix in a lot of cases, it was widely available. And I start off the history at BIND 4. I presume that, somewhere, there were versions 1, 2, 3, maybe, but I couldn't really find anything written about them and I didn't feel brave enough to ask Paul Vixie, so we will start with BIND 4. A lot of you probably used it. BIND 8 came after, and according to Paul, it wasn't that it was a quantum jump of improvement; it was just that they wanted it to have the same version number as sendmail when it came out, which was one of the other big OpenSource projects that were adopted as the basis of the Internet at that stage.

Eventually they wanted BIND 9, and that was a reaction because BIND 8 was a security nightmare and single-threaded and all these kinds of things. So in each of these different versions of BIND you could tell the history and see what the state of the art and the fashions were in software development and network development of the time, and that vision has shaped it. So, for example, BIND 9 was rightly obsessed with security, because at the time it was developed and implemented, in the late 1990s, there were a lot of problems with people gaining root exploits on machines, and we were getting to a point where single-core CPUs were no longer able to scale fast enough, so people were experimenting with multi-core; these were the kind of important things for that software to do, and that is why. But eventually it was time for a new era, and Paul Vixie said it's time for a new BIND. It wasn't all his idea, but he was the one that announced that this was going to happen. So we will go from science to literature: I am going to tell a little fairy tale about OpenSource. Once upon a time there was a little coder named me, Shane, and I worked at this magical company called ISC, which is really a wonderful place where everything is about OpenSource and making things for the good of the Internet. But of course, people are never happy with what they have, so I wanted the software practices to be better; I wanted to start adopting things like test-driven development and continuous integration and have, you know, architectures and plans and all these things we love to hear about. But coders hate this stuff; they want to hack, and I was the only one that wanted to do these boring software engineering things, and eventually you come to a point where, if everyone else disagrees with me, maybe I am crazy. I decided I had to go, and so I left. Which made me sad, but at least I got to work on software the way I thought it should be done.

After a while I got a phone call from ISC and they said: we just got the project defined for BIND 10, loosely, and someone went and got funding for it and sponsors, and we were all ready to go when that person, who is sitting there, left ISC. They said: could you come and run the project? You can imagine, as a software developer, someone says: we have budget and we know vaguely what we are going to do, could you come and do it? Fantastic; you could hardly ask for a better greenfield project. I said sure. And it was great. I came back and we started the project.

And in any relationship, the beginning is all wonderful; you don't see any of the flaws and everything is great. So we decided to define the project: what is it exactly that we are going to do? I thought it was really important that we wanted to be a community project. ISC had, at that time, what they called managed OpenSource, which is really just another name for cathedral-style development, where a bunch of smart guys go into the basement for a year, and every year pop out a new version, and hopefully you will like it, and maybe the next one will have what you want. I thought: that is crap. Let's do something like the Linux kernel, which is open and unstructured and unfiltered, and people solve their own little problems and you have to refactor the whole thing all the time, but it's great. That is what I wanted.

So we tried to encourage that by having a public wiki with all the documentation and meeting notes and design plans, and public mailing lists; we did our design discussions in the open, warts and all, so you could see the sausage being made. For the first time we had a public repository; we eventually went to Git; normal, standard stuff. I was pretty happy with that part.

The one thing that had been done before I started was picking the languages. There had been, apparently, years of bike-shed discussions about this. They eventually decided to use Python and C++: Python for the parts that could be written without needing speed, and C++ for the rest. I decided to let this decision stand because, frankly, I liked Python. We also wanted to minimise reinvention: if you look at the BIND 9 source you will see it has its own random number generator and so on, and I thought: that is crap, we don't want to do that.

We wanted high standards for code quality and reusability. That is pretty big, so we tightened it up; we started requiring that all the code have unit tests, things like that. So, we had all this stuff. We started gathering the team slowly, one at a time, putting people on BIND 10 and hiring people a few at a time. We also had non-developers on the team: the first person working on the project other than me was our release engineer and documentation guy, so I was pretty happy with that; we got off to a good start, because coders are bad at release engineering, and it turns out, bad at everything but actual coding. So it's good to have other people for that stuff. We started making some code. We built the basic infrastructure for the project: BIND 10 is not just a single daemon, it's got a model with cooperating processes, but even if it were, we would need the underlying libraries and things like that, so we started working on that and developed a toy DNS server, something that could actually answer queries using the kind of fancy designs that we had. This was the minimal first case, so that was good. I have to say this was a really exciting time for me; it was all filled with hope and promise.

Then we were doing this for, well, I say the first year, but we didn't actually have anyone working on the project for the first month, and then the documentation and release engineer came, so this was really about nine months of work, maybe a little bit more. But we made our first public release, and we met the goals that we had defined at that time. We had gotten statements of work from the people sponsoring us, and what they wanted was a simple authoritative-only server and the infrastructure allowing that, and we had a separate DNS library in C++ with Python wrappers. We were on time and under budget, and I thought this was great. It was pretty nice.

The bad things, though, of course, happened. We had a lot of technical debt: it's all the stuff that you know you need to do, but you don't need to do it right now, so you are going to do it later. You have to do it eventually. This, I think, is unavoidable if you are working towards a deadline, because everything that is not going to be measured when you drop that code off, you can defer, and so you will defer it. And that is what happened. In this case, I mean things like being able to support resource record types that we didn't expect to be common, and things like that.

Anyway, another ironic bad side of this: it seemed so easy for us to get that first release out there, that the reaction was: now please just finish it. Which is unrealistic. And actually, some of our sponsors were really unhappy that we went under budget, for different reasons. Some of them were like: well, you can't give us the money back, we wouldn't know how to take it; what would we do if you sent us a cheque, we have no process for that. Others were: you have to give it back to us, so please figure that out. Other sponsors were: we don't care. That was a bit of a downer.

And the biggest problem really was that it wasn't usable in production. I mean, you could use it if you had just a vanity domain, but I didn't recommend that anyone ran it for any kind of critical infrastructure.

Because I did good and bad, there should be a slide for ugly, but there wasn't really anything ugly.

All right. That was our first year of the project. So, we started our second year of work. Even at the time, we recognised it was a very ambitious goal: the second-year goal was to get the recursive resolver done. So, it's not a clean analogy, it's not 100 percent, but no analogy is perfect. The way I like to look at it is, if you think about people scratching in the walls and you wonder ??

AUDIENCE SPEAKER: There is a dentist next door.

SHANE KERR: I think the NSA turned their gain up too high. Well, we will pretend we are not being really annoyed. If you think about the analogy of a DNS authoritative server as a web server, the recursive resolver is more like the web browser. It's a lot harder. I did a quick look, and it turns out the Firefox source is about 20 times bigger than our authoritative server; I don't think a recursive resolver is 20 times harder, but a good one probably is, so...

Another problem we had in the second year was coding resources. We had this wonderful five-year plan worked out, with the budget and what resources were going to be available, and the original plan was to have twice as many people in the second year as in the first year. That did not happen. We got, I think, two extra sponsors, but we lost one. I didn't talk about the sponsorship model: ISC is a more-or-less struggling not-for-profit and doesn't have the money to do a development like this on its own, so it got sponsorship, mostly from top-level domain operators. We did get some additional sponsors, but not a lot. And we had all this technical debt, which took us a couple of months to pay back, and that is not something that provides any value, usually, to the users; it's, you know, finishing your tests and getting your libraries factored the way you should have done it, but you didn't know when you started. Weirdly, we had a lack of expertise for this particular effort, because even though we had real DNS experts and excellent developers, no one had written a recursive resolver at that time. And in fact, there are not really very many people in the world who have, so it's not like there is a whole bunch of experts in this area. But we were not afraid. But then it got worse. This first bullet here, the change of plans, was probably the single worst decision I have ever made, and probably the one that ultimately caused the project to fail: in an effort to be agile and responsive to our sponsors' needs, we changed our plans. One of our sponsors wanted to use the authoritative-only server in their production environment, which is great, but it wasn't ready for that, and they wanted to do it by the end of the third quarter of the project, so by the end of the calendar year; and remember, we were already two or three months behind.
So, we needed to spend some work on that. What we decided to do was split our team, which was already not as big as we had hoped, remember, into two separate teams: the R team working on the recursive work, and the A team going back to finish the authoritative server. I picked the names, and you can tell me how great that is later. So you can imagine what the results were in the end. We did actually get a working recursive resolver, but it was basically just a toy; it wasn't what you would ever want to run in production. One of our developers uses it on his home network and things like that; you can use the Internet with it, no problem, but it is not scalable, it doesn't have DNSSEC and so on. The authoritative server didn't get production-ready in the way that we had hoped either. So you can imagine how happy our sponsors were, who had at this point been giving us money for basically two years and from their point of view had nothing to show for it. The plans had changed, we weren't doing what we said we were doing, and what the heck was going on with this project?

So, undaunted, we decided to keep going, and we started year 3, and by this time we had lots of technical debt. That toy recursive resolver: we knew what we needed to do to turn it into a real one, but there was a lot of work. It still wasn't ready for production, although we can discuss that over beers; the whole idea of saying something is production-ready or not I find a bizarre concept, because the way to know if it's production-ready is you run it in production, and if it works for you it's production-ready, and if it doesn't, it's not. But my production-ready is not yours, and what does it mean? I don't know.

Our recursive resolver was still pretty weak, and we had all this underlying work to do, and because of the status we didn't have any real users, which is not surprising, and the sponsors were becoming more and more unhappy; I will talk more about that later.

Another thing that happened was that we didn't get internal support in ISC; in fact, ISC internally started working against the project. We had a new manager on the BIND 9 project who was very enthusiastic about BIND 9, and he started making a lot of improvements to it. And the business people really only care about things they can sell, which makes sense; you want products that you can sell, so for them, they didn't know or want to hear about BIND 10; it wasn't important to them. Also, throughout this whole thing, other DNS software had improved. When the project started there were not a lot of options in the OpenSource space, and they had a lot of serious limitations, like having to run ?? and I thought, well, why would anyone ever want to do this. But the software got better, because it's not standing still; other authoritative servers started appearing, PowerDNS was finally convinced to support DNSSEC, so the competitive landscape that BIND 10 was sitting in was also becoming a lot more difficult.

In the midst of all this, there was trouble in the palace. Paul Vixie had been the president of ISC for a long, long time, and he finally decided to step down. When he started ISC, it was one other employee and himself, and it had grown to the point where it was more than 20 people, and as you can imagine, the difficulties of running an organisation of 20 people or 40 people or 100 people are very different from those of a start-up. Paul stepped down and had one of his friends step up. His friend was fired by the board because of some personal disagreements. And then we went without a leader for about three months, and that was a complete tragedy. We then got a new leader, which was great, but he ended up being a complete ass hat and was difficult to work with, and I defy you to find anyone in the room who would disagree. (Laughter.) They're laughing because they know. It took a long time for the Board to get rid of him, and by that time all of our customers hated us and it was a complete disaster. Eventually we hired Jeff Osborn and things got better, but by that time things were really bad.

So eventually, the sponsors, who had been unhappy for several years, started to withdraw: some of them because of problems with the president; other ones because they were unhappy with the status of the project; others because they said, we can use NSD or Knot now, so why would we want to sponsor BIND 10? Our team members started leaving; most people don't retire from the job they are at today. The problem was that we weren't getting permission to replace these developers, so the project was put on hold, on life support. At some point, more than half of the sponsors had left, so we ended that whole model, and the question was: well, can we fund it ourselves? We thought we could at some point, but eventually the Board said: we can't fund this ourselves, we can barely keep the lights on after all our customers are gone. So they decided to shut down the project, as part of an overall staff cut.

But all hope is not lost; thankfully, things can get forked and so on, and ISC has kind of let it go into the wild. There is a project, we call it the Bundy project; Bundy was the mascot for BIND 10. It's on GitHub now; we have a website and mailing list and stuff like that set up. There are patches and bug reports flowing in. We are going to have an informal, I call it BoF-like, gathering; it's about this one single project, on Thursday night, in one of these rooms over here. Please come to talk to us; we will talk about the future plans there. Not all the good code has been lost, and we have a lot more freedom and flexibility now in future directions.

So I have got two minutes to go through observations and recommendations, so I'll try to be quick here.

Go big or go home. Well, first of all, managing transitions is very hard, especially because the new system that you are developing is going to have less functionality and it's going to be different, and people aren't going to like that; no matter how you do it, it's difficult. It was a really, really bad idea for us to maintain two pieces of software at the same time: it's expensive, because you have got to do all the work twice for any new feature, and it gives a very unclear message to the community, like, what should I use today, what should I use tomorrow? Internally, there was confusion about priorities. And you can't reach feature parity with your old software if you are improving the new one. My recommendation is: only develop one code base. Go on and continue to maintain the old code base, but don't develop two. And, it's obvious in retrospect, you absolutely need an evolutionary plan for deployment; big bangs almost never work. It would have been possible, and in retrospect that is the right approach.

Another thing: this is an OpenSource Working Group, and a lot of projects don't have companies around them, but if you are doing it in a company, convince your suits. They speak a different language, but it's actually important to learn that language. I ran the original project in skunk-works mode because I wanted to do things with code quality and things like that. The problem was, you need to come out of skunk-works mode eventually, and your business folks will likely resist this. No customer ever wants anything new; if customers were to decide the future, we would never have had an iPhone, because people wanted a faster CPU on their crappy Nokia phone. And business people don't care about technology; that is just what they use to make money. I don't have a good recommendation for how to convince your business people: you sit them down in a therapy session, sit them in front of a theatre and open their eyes. This is A Clockwork Orange; see the movie.

Sponsorship: we had more than ten sponsors, which is awesome; that means a lot of support in principle. The sponsors had a steering committee, which actually ended up being a really bad idea, because each sponsor of course had their own unique goals and expectations, and actually it was a real disaster: they didn't talk to each other until the end, when they decided how to beat us up, and it was just very confusing. Some people wanted full reports on everything done; other people didn't want to be bothered, and so on. So, recommendations: sponsors are great, get those; money is good, and the affirmation that what you are doing matters to other people is great. But don't let any one sponsor dictate what you are going to do, and my recommendation is to either have a formal contract saying what you are going to deliver and when, or just take money without any strings attached: "we support what you are doing, here is more money, do more of that". Anything in between is going to be a disaster, because you are going to have mismatched expectations and people are going to be unhappy.

Also, within this whole thing, any time anyone has a request to change anything, make sure it's very visible exactly what that is going to cost them. It's not saying no; it's saying you are not going to get everything you wanted in addition to what you are asking for now.

Release early, release often. Of course you have to release early and release often, but that doesn't matter if no one is using your code. So the unsaid part of that is: release early and release often to users who are actually using your code and providing you feedback and, hopefully, patches and things like that. The other thing is eat your own dog food: run your own code in production. That is true, but you need to really, really, really eat your own dog food; make that the only food you eat. The idea here is that you have got to put it in your mission-critical parts, and your operations people are going to kick and scream and not want to do it, because they don't want to run code that has bugs like that, and your business people are going to agree with your operations people, because they don't want things to fall down either. That is what it means to eat your own dog food: to take the risk. How can you expect anyone else to run your code if you are not willing to do it yourself?

A minor thing about languages. Of course you want a popular language that is safe and fast. Python is really great, but you can't use it for a DNS authoritative server. And from Python 2 to 3 was a long, drawn-out transition, but that is a whole other story. At the time when we were picking languages, we basically had a choice of C or C++; today maybe you could use Go, and Rust is designed for Firefox development but it may be more generally useful. People really care what language your software is written in, which mystifies me; I don't understand it. If I am developing it, of course it's very important, but if I am just running it, I don't care; yet people do care. So many administrators come to me about it; anyway, they hate it. It's a great bikeshed topic, and people spend hours talking about it.

Code reuse: we chose, as I mentioned, to reuse libraries; they had to be well designed and well maintained, and licence-compatible, that kind of stuff. There are downsides: it does make installation more difficult. We tried to make it easier by working with distributions to make sure the libraries were part of their standard packages. We actually got to that point: Fedora and Debian include the prerequisites for BIND 10, and everything derived from those does too.

But administrators, again, hate any dependency at all; they don't want it, they hate it, they don't want it. "I had downloaded BIND 9, I had OpenSSL, so why would I need anything beyond that?" But there is no other good option: either you write all this stuff yourself and maintain it forever, or you just don't have that functionality, neither of which is very attractive. So that is a no-brainer. Provide specific value from day zero: we had all this infrastructure, but they don't care about that; they want DNS software that works. We are almost done.

One thing here is that people fear and hate change. I do, certainly. Anything that makes me take my valuable time learning a thing that is different is bad, and it's the same for administrators with what we have today. So, kind of what I alluded to earlier, you want to have a roadmap for these things and introduce one change at a time, and that gives you a few benefits: if it works, great, people say "incremental improvement, awesome"; if it doesn't, back it out quietly and pretend it never happened. I mean, if you compare your browser today with Netscape from 15 years ago, it's completely, radically different, but because these features were introduced one at a time, we became familiar with them and are happy with them. If it all came out today, it would be "this is weird Klingon stuff and I don't want to use it". One change at a time.

I have a whole bunch more stuff; I was working on this project for more than four years. So, if you want to know any more about the history, that is great. I have many more recommendations which I haven't had time for. That is about it.

ONDREJ FILIP: Any questions? We do have a short time for questions.

JAAP: Talking without any hat on. The history of BIND is actually entwined with the history of the BSD systems: Kevin Dunlap and somebody else, whose name I always forget, moved it, put it into 4.0 or 4.x alpha, and that is where the number 4 comes from.

SHANE KERR: OK. So there never was a 1, 2, or 3.

JAAP: Probably not. When we moved on to Berkeley 4.2, it was decided that all major numbers should be the same, and since the others were already at 7, everything became 8.


JIM REID: Not wearing any hat at all either. Shane, I think this has been a very interesting presentation, and I think it takes a great deal of guts to come up and unfold the excruciating pain of all the things that happened in the gestation of this project. But as you were talking, some things were ringing a bell in the back of my head, because I was on the periphery of the initial BIND 9 development work, and I remember many of the things that were being discussed there, and similar kinds of problems arose, albeit in a slightly different guise. What strikes me is that there seems to be an institutional memory inside ISC that has been lost. I would have thought that after all the things that happened with BIND 9, the next time a project came along, we wouldn't make those mistakes again; we might make other ones, but let's try not to make the same ones: a large number of people sponsoring the project, unrealistic expectations, underestimates of the time that was going to be involved and, at the same time, the sponsors perhaps going off in different directions.

SHANE KERR: So, you hit on one of the key misunderstandings about ISC. People thought that because BIND 10 was called BIND, and because it was being written by ISC, that it had in fact anything to do with BIND 9. But none of the people who wrote BIND 9 work at ISC. There was one guy who had participated in the BIND 9 development and who later left ISC; that leaves the guy who became the BIND 9 manager, and he wasn't the key architect or anything. There was no institutional memory. ISC didn't even develop BIND 9; that was done by Nominum on contract and ISC kind of took it over. So, yeah. I have many, many times gone to web forums where people talked about how ISC didn't learn from its mistakes, and that is actually true, but "ISC" there is like saying "the government" or something.

JIM REID: Yes. And to be fair, this is not a criticism either of you or of any of the ISC management; I realise there has been a lot of turnover and changes in the management and the board. But I would still have thought there would have been some kind of institutional memory written down at 950 Charter Street, explaining what had been done in the past.

SHANE KERR: There may have been. I think we painted the walls once.

AUDIENCE SPEAKER: Warren Kumari, Google. We were one of the sponsors, and I think we all would have liked a different outcome, but seeing as we didn't get it, thanks very much for being so candid about what worked and what didn't, and thanks to ISC for not trying to muzzle this presentation.

SHANE KERR: That is true. Although to be fair, I didn't ask. So...

ONDREJ FILIP: Let me thank you, Shane, for the great presentation.

And now Willem Toorop is coming to the stage, because he is going to have a short presentation as we move to the lightning-talks part of the meeting, and the presentation is going to be about the getdns API.

WILLEM TOOROP: So this presentation is about an implementation we did of a DNS API specification for resolving, designed not by DNS programmers but by actual application developers who develop for the user experience.

And the application developers were not really happy with the DNS libraries that were available, designed by people like me. So they made their own design, and in a collaborative effort with Verisign Labs we started implementing this API.

So, why would an application want to link against an API like this? Well, there is the DNSSEC last-mile problem. This is the classic DNS ecosystem: the local network resolver protects the computer against cache poisoning, or against domain hijacking, by not being able to be cache poisoned, but there is no guarantee. There is one piece of this ecosystem that is not protected, and that is the local network, in effect. A malicious local resolver could inject itself by ARP spoofing, for example. By the way, the network could be a public wi-fi, and the computer could be something other than a computer.

So to work around this, an application could link against a validating DNS resolver library that does the DNSSEC validation itself. It could still be talking to the local network resolver, so all the merits of having a caching network resolver remain, in fact. One of the advantages of that set-up is that the local network resolver does not have to do the validation itself: if it is DNSSEC-aware, then the library can retrieve the DNSSEC chain through it and do the validation. And this greatly improves DNSSEC deployment, I guess.

Also, when the local network resolver is not DNSSEC-aware, this library can just do the recursive resolving itself. Why do applications want to have DNSSEC? Well, DNS then turns into a global distributed database of authenticated entities, and this is very nice. You have, for example, DANE. The problem with the current PKI is that you have this clumsy repository in your browser of all the certificate authorities, and they are all authorised to authenticate a certificate for any domain; it's not pinned to a specific domain. This is not the same when you authenticate from DNS, because then you say: this certificate authority is the one that is authorised to authenticate my TLS certificate.

So don't get me wrong, I don't say that conventional PKI is a bad idea. Encryption is better than non-encryption, unless you are leaking memory, of course. So this is a perfect fit for the application. You could also have all sorts of other authenticated data that might be interesting for applications, for example PGP keys to start encrypting e-mail messages for someone.

So, these are the features. Asynchronous by default. We have a pluggable event library that you can hook into your application. We already have JavaScript and Python bindings, so there is absolutely no reason not to use it. We support these platforms. Some packages are in the making.

Or you could build it yourself.

Last April, we were, together with Verisign, an API partner in a hack battle at The Next Web conference in Amsterdam, and I would like to showcase a few of the projects that were using our API to build applications. There was the verify application, a plug-in for Thunderbird, which would give a security assertion on the DKIM keys that would have signed an e-mail (oh, and this verify plug-in won our prize). There was also DANE Doctor: Henekin Richard implemented a way to set up a DANE connection for the asynchronous event framework for Python, which is very popular, and they were quite happy with their implementation.

There was one application that made a plug-in for the off-the-record plug-in for a Jabber client. Jabber is a chat programme, and off-the-record is a plug-in for this Jabber client to open encrypted channels. Normally, when you encrypt a chat session with someone for the first time, a message pops up stating: this is the remote participant's key fingerprint; please use a side channel to validate that this is correct. So they made use of our library to look up the fingerprint in DNS, which is working very nicely.
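The check the plug-in performs, comparing a key fingerprint published in (DNSSEC-protected) DNS with the fingerprint of the key the peer actually presents, boils down to a digest comparison. A rough sketch in Python; the hashing scheme and key bytes here are illustrative, not the plug-in's actual format:

```python
import hashlib
import hmac

def fingerprint(key_bytes):
    # SHA-256 fingerprint of a peer's public key (illustrative scheme).
    return hashlib.sha256(key_bytes).hexdigest()

def verify_peer(offered_key, dns_fingerprint):
    # Compare the offered key's fingerprint with the one looked up in DNS.
    # hmac.compare_digest avoids leaking information through timing.
    return hmac.compare_digest(fingerprint(offered_key), dns_fingerprint)

peer_key = b"hypothetical public key bytes"
published = fingerprint(peer_key)               # what would be stored in DNS
print(verify_peer(peer_key, published))         # True: fingerprints match
print(verify_peer(b"attacker key", published))  # False: mismatch detected
```

The point of the DNSSEC part is that `published` arrives over a validated channel, so the side-channel check the pop-up used to ask for is no longer needed.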

And there was also the DNSSEC name-and-shame web application, which was built to shame all the other API partners which didn't do DNSSEC, except PayPal; and they actually won the prize that PayPal gave.

So that is it. These are the details.

ONDREJ FILIP: Thank you very much. We have a short time for questions. Are there any? So it was very clear. Thank you very much.

The next speaker is Carsten. He is going to speak about a technology that was mentioned a few times in the previous presentation: DANE.

CARSTEN STROTMANN: Hello. So my presentation is a short version of a longer one that I gave last week at LinuxTag, and I prepared it together with my colleague Patrik Cutter. So this is about how SSL or TLS encryption between mail servers is broken. The way it works today on port 25: the sending server asks the recipient server, do you speak encryption, and the other one needs to say "yes, I do". And the certificates are not validated, so the sending server doesn't know whether it is talking to the right recipient server. So a spoofed server can just step in and say, yeah, I am the right one, here is my certificate, send me the e-mail. Or there can be a man in the middle saying, I am your recipient server, send me the e-mail. Or there can be a downgrade attack, where the man in the middle pretends, no, I cannot do encryption, you have to send me everything in plain text, and then the sending server has no choice but to send. DANE, and securing that with DNSSEC, will help. That is not end-to-end security, so it's not the same as PGP; it is just transport security between servers, so the e-mail is still visible on each server.

So what is TLSA, DANE and SMTP about: it's a way to secure, or validate, the certificates being sent by the recipient server with the help of DNSSEC. For that, either the full certificate or a fingerprint of that certificate is stored in DNS, where it can be looked up, and that is all secured by DNSSEC.

Here is an example. I am skipping some slides, because otherwise it would be too long. The recipient server indicates that it can speak TLS with the STARTTLS announcement, and now the sending server gets a second opinion: it asks the local DNS resolver, and that local resolver then asks the authoritative server of the recipient mail domain for a TLSA record, which holds the fingerprint. That comes back and is checked for DNSSEC validation. If that is OK, it goes back to the sending mail server, and then we have a secure and authenticated connection, which we nowadays don't have most of the time. And if there is a man in the middle, then the certificate doesn't match, or hopefully doesn't match, the TLSA record that comes down; so if the man-in-the-middle attacker doesn't have the private key belonging to the recipient domain's certificate, the attacker cannot fake that certificate, and the sending server will detect that something is wrong. The same with a spoofed server: wrong certificate, and the sending server can see that it is talking to the wrong server, because of authentication. What do we need to implement that? We need a DNSSEC-validating cache; the latest version of Dnsmasq also has DNSSEC support, and Windows 2012 also works, for those in the Windows world. And for the DNSSEC authoritative part on the recipient domain we need an authoritative name server that supports DNSSEC, and we can use BIND, Bundy, PowerDNS or Windows 2012 for that; Windows 2012 with some tricks only. And we need a mail server which already supports this new DANE stuff; Postfix 2.11 came out in January and now supports DANE validation, so it does all this. Exim, which is another popular OpenSource mail server, is currently developing that as well.

And we need a certificate; either one of the standard certificates or even a self-signed certificate will work. This is how we enable it, and this is how the TLSA record looks: it has the port number in the beginning, then the transport protocol and the domain of the mail server, that is, the host name of the mail server, and then it has either the full certificate or a hash of the certificate. This is what we see here: the hash of the certificate.
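The record layout just described, port, transport and host name in the owner name, then the certificate or its hash in the data, can be illustrated with a small sketch; the host name and certificate bytes below are made up:

```python
import hashlib

def tlsa_owner_name(port, proto, host):
    # The owner name of a TLSA record encodes the port and transport,
    # e.g. _25._tcp.mail.example.com for SMTP to that host on port 25.
    return "_{0}._{1}.{2}".format(port, proto, host)

def cert_sha256(cert_der):
    # Matching type 1: the record data carries a SHA-256 digest of the
    # certificate instead of the full certificate.
    return hashlib.sha256(cert_der).hexdigest()

owner = tlsa_owner_name(25, "tcp", "mail.example.com")
print(owner)                                 # _25._tcp.mail.example.com
print(cert_sha256(b"...DER bytes here..."))  # 64 hex characters
```

The sending server computes the same digest over the certificate it receives in the TLS handshake and compares it with what DNS returned.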

And then it can be validated with DNSSEC: we see the AD flag, and the TLSA record is correct; we match that against the certificate, and all is good. In Postfix, these are the three lines that we have to add to our Postfix configuration after upgrading to Postfix 2.11, and then it works. If we don't have DANE we see something like this, "untrusted TLS connection", so the server we are connecting to is not authenticated; once we have this installed, we see a "verified" TLS connection.
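The slide with the exact three lines isn't reproduced in the transcript; as an assumption based on the Postfix 2.11 documentation rather than the slide itself, an outbound DANE configuration in main.cf typically looks like this:

```
smtp_dns_support_level = dnssec
smtp_tls_security_level = dane
smtp_tls_loglevel = 1
```

This requires that Postfix talks to a DNSSEC-validating resolver, as the talk describes; without the AD flag from that resolver, the `dane` security level falls back to unauthenticated opportunistic TLS.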

The benefits of this: we have an authenticated, encrypted connection. Downgrade attacks are no longer possible, because they are detected. We can secure against fake and spoofed certificates, and in view of Heartbleed, with this in place you don't need to revoke your certificate; you just need to change one DNS record, the TLSA record, with the fingerprint of the new certificate that you deploy on your mail server. And you can even do some marketing with this. This is just from Monday this week, when a German mail provider enabled this: they enabled DNSSEC and DANE checking for their mail servers. It's not a big one, but it's a start, and we hope that more mail ISPs will follow.

ONDREJ FILIP: We are back on time. Are there any questions? We have time for one more question. One remark: before your presentation you asked me whether the DNSSEC validator add-on had a spike in downloads because of this newspaper announcement, and it's quite possible; we have a lot of new downloads, so thank you for that.

OLAF KOLKMAN: Sort of a remark as part of the question. I think this is incredibly important needle-pushing implementation work. My gut feeling is that DNSSEC allows these embedded systems, so to speak, that are not really user-facing, to do encryption between server entities, not completely end-to-end, but between the servers, and encrypt all this traffic that goes over the network through unknown places. This is really important stuff in the light of pervasive monitoring. Usually, with mail, you have the opportunity to have a mail server that is close to home, and you have the opportunity to protect the transport between your mail client and your mail server. This allows the next step, in a very scaleable, global way. Mail is not the only thing; XMPP is another. There are so many others. So that leads to the question: Postfix has implemented this stuff, and we saw the plug-in for an XMPP client doing OTR. What more is on our collective radar where we can push that needle and say, here is the technology that is being deployed by the network application users, pick this up and run with it? And that goes back to the getdns API, where we believe that that is an enabling technology. Open question: where else do you see this happening? Or anybody else in the room.

CARSTEN STROTMANN: There are tonnes of applications. E-mail clients that check their POP or IMAP, for example. Just look at what software you use, and if it doesn't use this and you see it could benefit from it, contact the developers. Tell them that there is the getdns API, which makes it easy for developers to implement this stuff, and show them that it works. And if you run a mail server or web server, or any of the other software that is already enabled for DNSSEC and DANE, take a little bit of time and implement it. It is not hard. For this e-mail provider it was done in a few days, and there are people here in the room and elsewhere who can help.

ONDREJ FILIP: Thank you.

AUDIENCE SPEAKER: We had a question from the Jabber.

ONDREJ FILIP: Carsten, I think there is one more question for you.

AUDIENCE SPEAKER: It's a question from the chat. Unfortunately I couldn't ask for affiliation and name, but the question is: what do you do if validation fails?

CARSTEN STROTMANN: That is a policy that can be configured on the mail server. Now, I am not the person in the team who did the mail server stuff, so that would really be a question that Patrik would be able to answer, not me; I did the DNS parts. My understanding is that you can configure in Postfix whether you want the e-mail to be stalled, bounced or delivered anyway. That is a policy to be configured in the Postfix mail server, or in any mail server.

ONDREJ FILIP: We are slightly leaving the DNS area; I think the DNS Working Group Chairs can debrief a little bit. We have another topic, which is DHCP, from Tomek.

TOMEK MRUGALSKI: Let's change topics a bit. I will be talking for a couple of minutes about a DHCP implementation; we are short of time, so I can skip this. As you all know, the ISC DHCP implementation is OpenSource, but something that is called managed OpenSource, so there is a closed repository and a not really publicly accessible bug system. The software is quite old; the first release was in 1997, so that was a long time ago. This is the default DHCP software in many distributions, and it has all the necessary components: client, server and relay, for both v4 and v6. And over the years it has gained a lot of features.

So, why do we need a new implementation? The existing code is old; when the code was designed and developed, networks were radically different. There was no concept of mobility, so everything was fixed. The hardware was different, and there were different limitations. And of course the code evolved over time, but there is only so much you can do with a code base. Right now, the code is extremely complex, difficult to extend and quite fragile; documentation, let's say that it's lacking; and performance is not always sufficient. For most cases it is sufficient, but if you are talking about millions of users, there are some problems that are very difficult to solve.

And of course, the problem that everyone encounters is that once you change the configuration, you are supposed to restart the server. There are some tweaks you can do, but they are difficult to use.

So, Shane went into great detail with the history of BIND 10, so I just wanted to mention that in the BIND 10 framework there were DHCP components, for the DHCPv4 and v6 servers, and that started a couple of years later than the whole BIND 10 effort. And when ISC decided to stop developing BIND 10, they also decided to keep the DHCP part of it, and that is the project Kea.

So right now, Kea has several components: we have the DHCPv4 and v6 servers; we have DNS updates; and we also have a stand-alone tool called perfdhcp, which is used to measure the performance of basically any DHCP server, and it supports both v4 and v6. All of those are using libdhcp++, which does the low-level stuff like parsing and generating packets and options, interface handling and so on.
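The low-level work attributed to libdhcp++ here, parsing and generating packets and options, is essentially walking type-length-value fields. A simplified, self-contained sketch of DHCPv4-style option parsing (an illustration of the idea, not Kea's actual C++ API):

```python
def parse_dhcp_options(data):
    # Walk DHCPv4-style TLV options: 1-byte code, 1-byte length, value bytes.
    # Code 255 (End) stops parsing; code 0 (Pad) is a single filler byte.
    options = {}
    i = 0
    while i < len(data):
        code = data[i]
        if code == 255:        # End option
            break
        if code == 0:          # Pad option
            i += 1
            continue
        length = data[i + 1]
        options[code] = data[i + 2:i + 2 + length]
        i += 2 + length
    return options

# Option 53 (DHCP message type) = 1 (DISCOVER), followed by End.
wire = bytes([53, 1, 1, 255])
print(parse_dhcp_options(wire))   # {53: b'\x01'}
```

Defining a custom option, as described below, then amounts to telling the server how to interpret the value bytes for a given code.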

So, the code right now is able to perform all the things that you expect from a DHCP server: assignment, renewal, expiring the leases and reusing them. If you want to define custom options, for example if a new RFC is published, you can configure on the server an option that is not already known, and the server will start supporting it. We also have support for vendor options, so we managed to provision a cable network, and we also support prefix delegation. We have DNS updates; that is functional, with just one piece currently missing that we are working on: TSIG is not yet supported. And we have dynamic reconfiguration, so you can tweak the configuration without any need for a restart.

So, one of the interesting features of Kea is that it has switchable back ends. The old implementation has just one type of lease database, essentially a flat file. In Kea it's possible to switch between several different back ends: we have support for MySQL, and a very high performance "memfile", which is a custom database that we implemented in C++. It's very convenient, because you can build different tools around it; you can tweak the lease information in the database, or get some statistics, or whatever you want.
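The switchable back-end design can be sketched as one lease-store interface with interchangeable implementations chosen from configuration. The class names here are illustrative, not Kea's C++ classes:

```python
class LeaseBackend:
    # Common interface that every concrete back end (memfile, MySQL, ...)
    # implements; the rest of the server only talks to this interface.
    def add_lease(self, address, client_id):
        raise NotImplementedError
    def get_lease(self, address):
        raise NotImplementedError

class MemfileBackend(LeaseBackend):
    # In-memory stand-in for the high-performance "memfile" back end.
    def __init__(self):
        self._leases = {}
    def add_lease(self, address, client_id):
        self._leases[address] = client_id
    def get_lease(self, address):
        return self._leases.get(address)

def make_backend(name):
    # The server picks the back end by name at run time; a MySQL-backed
    # implementation would register itself here too.
    backends = {"memfile": MemfileBackend}
    return backends[name]()

store = make_backend("memfile")
store.add_lease("192.0.2.10", "client-1")
print(store.get_lease("192.0.2.10"))   # client-1
```

Because every back end speaks the same interface, external tools can read or tweak the lease store without going through the server, which is the convenience the talk points out.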

Another feature that is quite interesting in Kea is the hooks mechanism. It's possible to extend Kea with custom-developed libraries. We have defined a hooks API, and it is well documented. For every step in the DHCP packet processing, packet received, options parsed, lease selection and so on, you can register a function that will either use that information or even affect the server's behaviour: for example, say that this particular client is supposed to use a different subnet or get a different lease, or just reject the client completely and not send any response. And of course it's possible to have multiple libraries.
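The hooks mechanism described above amounts to a registry of callbacks keyed by processing step, where a callback can inspect the packet context or veto it. A rough sketch; the hook-point name and the return-value convention are illustrative, not Kea's actual hooks API:

```python
class HookManager:
    def __init__(self):
        self._hooks = {}   # hook-point name -> list of callbacks

    def register(self, point, callback):
        self._hooks.setdefault(point, []).append(callback)

    def run(self, point, ctx):
        # Run every callback registered for this step; any callback
        # returning False drops the packet and no response is sent.
        for cb in self._hooks.get(point, []):
            if cb(ctx) is False:
                return False
        return True

hooks = HookManager()
# Example policy: reject a client with a (made-up) banned MAC address.
hooks.register("pkt4_receive", lambda ctx: ctx["mac"] != "00:11:22:33:44:55")

print(hooks.run("pkt4_receive", {"mac": "aa:bb:cc:dd:ee:ff"}))   # True: allowed
print(hooks.run("pkt4_receive", {"mac": "00:11:22:33:44:55"}))   # False: dropped
```

Multiple libraries map naturally onto multiple callbacks per hook point, run in registration order.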

So we think this is quite a powerful feature; there are already some customers developing their own libraries, and the feedback is quite positive.

Kea 0.8 was released in April, and the next version, 0.9, is expected in summer. Basically, 0.8 is BIND 10 1.2, so once we separated Kea from the project, the major step we are working on right now is to remove the BIND 10 framework, because, as Shane said, many people don't like it; they are used to the old implementation that you just install and run. So we are working on separating that out, getting rid of the framework. However, we had a discussion with Shane that perhaps it would be possible to run Kea in a stand-alone mode but also, if someone wants, in the Bundy framework; we think it will be possible, but we make no promises now. We will keep the on-line configuration. Right now Kea fully supports only Linux; the major problem is that on other systems, to send packets to clients that don't have an address yet, we need to do some MAC-level tricks. This is fully supported on Linux; we have partial support for FreeBSD and are hoping to get to full support.

This is our long-term roadmap: we plan to have 1.0 in the first half of 2015. The features that we are currently missing are host reservation and client classification. We are also going to support full lease expiration; right now, when a lease expires, it is kept in the database, and the server checks whether the lease has already expired, so it's OK to reassign it. And after that, we have plans to support failover, extended statistics, RFC 69.. (that is, information in the DHCPv6 reconfiguration) and a couple of other things.
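The expired-lease handling described above, where an expired lease stays in the database and is checked for expiry before being reassigned, reduces to a timestamp comparison. A sketch with made-up lease data:

```python
import time

def lease_available(lease, now=None):
    # A lease slot is reusable if no lease is recorded for it, or if the
    # stored lease has already expired (it stays in the database either way).
    if lease is None:
        return True
    now = time.time() if now is None else now
    return now > lease["expires"]

lease = {"address": "192.0.2.20", "expires": 1000}
print(lease_available(lease, now=999))    # False: still held
print(lease_available(lease, now=1001))   # True: expired, can be reassigned
```

The "full lease expiration" on the roadmap would presumably move this from a lazy check at allocation time to active expiry processing.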

So, this is the last slide: if you are interested in joining, you can submit patches. We are also looking for sponsors as well as developers; it's possible to sign development contracts. And if you want to help, we have documents that we are looking for feedback on.

OK. I suppose that is it. So any questions?

ONDREJ FILIP: Are there any questions? No. Then, thank you very much, thank you.

We have the last presentation; we are slightly behind the schedule, we will be five minutes late, which is good, I hope. You see, I took a picture because I wanted to document that the room is completely full, which is great. I think this sort of cooperation with the Address Policy Working Group is perfect; next time we need to request a bigger room and not a small one. And we have another announcement: we have a winner of yesterday's rating game, but I forgot her name. Can you read it for me, please? Say /SHAO*E.

AUDIENCE SPEAKER: The winner of yesterday's rating was say /SHAO*E.

ONDREJ FILIP: So, this is the last presentation: we have Nat Morris, and he is going to talk about the ONIE Project.

NAT MORRIS: A quick update on the ONIE Project. I spoke briefly about it last time, but we have had significant traction in the last few months. So, in summary, what is it: this is a boot loader for network switches, predominantly top-of-rack switches with mixed port configurations, from 48 by 1 gig and 48 by 10 gig with four 40s, to, as we are now starting to see, 32 by 40 gig as well. For many years you have been able to change the operating system running on your server, and that hasn't been the case in the network world: you bought a switch from Cisco or Juniper and it came with the operating system they supplied. We need this to change; we want to be able to run different operating systems on different switches. ONIE allows you to discover a network operating system and install it on the switch, and provides a framework for different vendors to package their OS in a common format. It's available on GitHub, with a hardware compatibility list on there.
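The discover-and-install step amounts to trying a sequence of locator methods until one yields an installer, as the talk later describes for the IPv6, DNS and TFTP discovery chain. A sketch of that waterfall idea with stub probes; the method names and URL are illustrative:

```python
def discover_installer(methods):
    # Try each discovery method in order and return the first
    # installer location found; a sketch of the waterfall idea.
    for name, probe in methods:
        url = probe()
        if url:
            return name, url
    return None, None

# Stub probes standing in for real network discovery.
methods = [
    ("ipv6-neighbour", lambda: None),   # nothing answered
    ("dns",            lambda: None),   # no well-known name resolved
    ("tftp",           lambda: "tftp://192.0.2.1/onie-installer"),
]
print(discover_installer(methods))  # ('tftp', 'tftp://192.0.2.1/onie-installer')
```

The value of keeping this behaviour identical across architectures, as ONIE does, is that the same provisioning infrastructure works for every switch regardless of its CPU.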

We have added some more vendors and made some more steps forward. Originally it was a handful, but in the last few months Dell have announced that you can order switches on the Dell website and they don't necessarily have to come with the Force10 operating system; you can get the switch bare metal and install an operating system of your choice. Our partners at Mellanox have recently contributed to the project, and there will be an interchangeable layer there, so you can run an OS other than the Mellanox operating system.

I have got some other companies that are producing operating systems packaged in ONIE format apart from ourselves, so Big Switch and Mellanox, and Broadcom's reference OS is also available in that format as well. One thing to note: not all the OSes will be compatible with all the hardware. You have got to remember that the OS has to programme the ASIC doing the switching and routing as well, but all these vendors will list on their sites the different switches they support.

Also, we have made some steps with the Open Compute Project; ONIE has been taken into the networking section. And also, the University of Texas at San Antonio have developed an OCP lab where they are testing hardware ?? these are very dense servers ?? and they are going to be collaborating with us on testing operating systems. So this is going to be of great benefit to the community; there is a great team there testing the installation and stuff.

And here is where we are going with the next steps on the ONIE Project at the moment. For a year now it has supported PowerPC, because that was predominantly the system on chip. Now we are starting to see the newer boxes, like the 32 by 40 gig box, so we have ONIE in an x86?compatible format. We have had a lot of requests for virtual machines so you can test your network OS on your laptop, and we are planning support for ARM and MIPS as well. And we maintain the same behaviour of discovering the network operating system ?? using v6?enabled discovery, DNS and a TFTP waterfall ?? across all of the different architectures. We see vendors now including ONIE support on their data sheets, which is nice, that people are highlighting that. And this is the sort of thing you are going to see on the Dell site in the next few weeks. For years you have been able to go along and pick the operating system on your server, and now you will be able to pick the operating system on your switch ?? Dell OS or ours, or others such as Big Switch. I think that is it, sorry if I talk really quickly. Are there any questions?
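The discovery behaviour mentioned above can be pictured as a name waterfall: the boot loader tries progressively less specific installer file names against whatever servers it has discovered. Below is a hedged sketch of that idea only; the platform string is made up, and the exact names and ordering are defined by the ONIE documentation, not asserted here.

```shell
# Illustrative sketch of an installer-name waterfall, from most to
# least specific (the platform string below is hypothetical).
arch="x86_64"
platform="vendor_model"        # hypothetical machine identifier

candidates="onie-installer-${arch}-${platform}
onie-installer-${arch}
onie-installer"

# The loader would try each name in turn against discovered servers.
for name in $candidates; do
    echo "would try: $name"
done
```

The point of the ordering is that an operator can serve one generic image for a whole architecture, or override it per machine type simply by placing a more specifically named file on the server.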

ONDREJ FILIP: Thank you, you are on time. Are there any questions? If not, then I thank you very much.

ONDREJ FILIP: And since Martin started the meeting, he is going to conclude it as well.

MARTIN WINTER: Thank you, that is the end of our session. I am happy to see so many people showing up in a small room; I hope to get a bigger room next time. I hope you enjoyed it, and please let us know what you think about it. We want to try this format at least one more time, so we are looking for feedback: is this the way to do it, or would you like something different? Please contact us, sign up to the mailing list if you are not on there, and start a discussion there. We are open to any suggestions. Thank you.

ONDREJ FILIP: We would like to thank the scribe, Jabber monitor and transcribers, who did an excellent job as usual. Thank you.