[AusNOG] User-Aware Netflow
Beeson, Ayden
ABeeson at csu.edu.au
Fri Mar 28 09:14:34 EST 2014
Hey Scott,
Just on the wireless topic: we have a similar system in place to automatically auth our users to our access system; however, we rely on RADIUS accounting to return the assigned IP in the first instance rather than DHCP logging, as we had scaling problems doing that.
We had a long-standing bug with our wireless vendor where this was not occurring correctly. IIRC that's "fixed" (for the most part) now, but we still check DHCP logs after a short period, just in case the accounting doesn't work and we haven't received an IP in the required amount of time.
The system becomes much more reliable when you are not trying to poll DHCP logs / lease files as the first check method, and most wireless systems are configured to return a second accounting message once the device is assigned an IP. I can't recall what the standard calls for, but I do know some systems act differently with the accounting, sometimes only including the IP after the next scheduled accounting message comes through, which could be minutes or hours later.
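The accounting-first, DHCP-fallback logic described above could be sketched like this in Python. This is a minimal illustration, assuming accounting records arrive as attribute/value dicts; the attribute names (Framed-IP-Address, Acct-Status-Type) come from RFC 2866, but the record shape and the DHCP lookup callback are hypothetical:

```python
# Sketch: prefer the IP from RADIUS accounting, and fall back to a
# DHCP-log lookup when accounting never delivers one in time.
# The dict-based record format here is an assumption, not a real API.

def ip_from_accounting(record):
    """Return the assigned IP if this accounting record carries one."""
    # Start, Interim-Update and Stop records may all carry Framed-IP-Address;
    # some vendors only include it in a later Interim-Update.
    if record.get("Acct-Status-Type") in ("Start", "Interim-Update", "Stop"):
        return record.get("Framed-IP-Address")
    return None

def resolve_ip(records, dhcp_lookup, mac):
    """Prefer accounting; fall back to parsed DHCP logs if it had no IP."""
    for record in records:
        ip = ip_from_accounting(record)
        if ip:
            return ip
    return dhcp_lookup(mac)  # fallback: query DHCP logs by MAC address
```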
Thanks,
Ayden Beeson
-----Original Message-----
From: AusNOG [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Scott O'Brien
Sent: Thursday, 27 March 2014 5:44 PM
To: ausnog at lists.ausnog.net
Subject: Re: [AusNOG] User-Aware Netflow
Wow, so many responses! Ok, I'll try and address them all here and explain a bit further.
The system I've built does not currently use AD. Sorry, I should clarify: when I say the user-auth exchange, I'm talking about a RabbitMQ exchange, not AD. This is just a way RabbitMQ can route messages to queue(s) and consumer(s).
On the AD topic though, the netflow consumers look up the user auth information in a collection/table on a database (and cache IP mappings in memory from login events by subscribing directly to the user-auth fan-out exchange). The database is fed by a different consumer that logs login/logout events from a separate queue, also bound to this user-auth exchange.
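The in-memory cache each consumer keeps could be as simple as the following. This is a sketch only; the event shape (dicts with action/user/ip fields) is an assumption for illustration, not Scott's actual message format:

```python
# Sketch of the per-consumer IP -> user cache fed by the fan-out exchange.
# Because the exchange is fan-out, every consumer binds its own queue,
# sees every login/logout event, and keeps its own copy of the mapping.

class AuthCache:
    def __init__(self):
        self._by_ip = {}  # ip -> username

    def handle_event(self, event):
        """Apply one login/logout event from the user-auth exchange."""
        if event["action"] == "login":
            self._by_ip[event["ip"]] = event["user"]
        elif event["action"] == "logout":
            self._by_ip.pop(event["ip"], None)

    def user_for(self, ip):
        """Who currently holds this IP, or None (then fall back to the DB)."""
        return self._by_ip.get(ip)
```

A cache miss (None) is where the consumer would go back to the auth-history collection in the database.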
The system I fed user auth from was exactly how Jonathan Thorpe described. I'm doing a trial of my tool on our "free wireless" network, which by its very nature is .1x and DHCP. A log reader listens for Username->MAC events and ties them to MAC->IP events from the DHCP logs before pushing a login event out to my RabbitMQ user-auth exchange. I guess doing .1x on wired (difficult in some situations) would work too, but monitoring logs, captive portals, websites or user-agents could all be used to feed this user-auth exchange. Basically, how you get them in there is up to the environment it's placed in.
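The log-reader correlation step could be sketched as below; a minimal illustration only, where the emit callback stands in for publishing to the RabbitMQ exchange, and the method names are hypothetical:

```python
# Sketch of the correlator: join Username->MAC events (from 802.1X auth
# logs) with MAC->IP events (from DHCP logs) and emit Username->IP
# login events for the user-auth exchange.

class LoginCorrelator:
    def __init__(self, emit):
        self._user_by_mac = {}  # mac -> username, from 802.1X auth logs
        self._emit = emit       # called with (user, ip) for each login

    def on_auth(self, user, mac):
        """802.1X log line: this user authenticated from this MAC."""
        self._user_by_mac[mac] = user

    def on_dhcp(self, mac, ip):
        """DHCP log line: this MAC was leased this IP."""
        user = self._user_by_mac.get(mac)
        if user is not None:
            self._emit(user, ip)  # publish a login event downstream
```

A DHCP lease for a MAC that has not authenticated is simply ignored here; a real reader would also want lease-expiry handling to generate logout events.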
Someone mentioned DB size worries, and a few people have mentioned hooks/triggers in either Postgres or MySQL. I pretend to be many things in my life, but I'm afraid a DBA is not one of them! The worry I had with Postgres or MySQL is that neither is as easy to scale out as MongoDB (feel free to flame me down on this one though). I think as soon as I start adding hooks and pumping every single netflow entry into the database, it might quickly become my bottleneck. As for storage, I'm making use of MongoDB's ability to do most of the heavy lifting. I store a document for each of my counters (daily_counter, user_daily_counter, etc.). This way I can issue an upsert update command to Mongo (that is, if the record I'm looking for doesn't exist, just create it) with an "increment command", so instead of making calls to see what the current counters are and then having to lock the tables, I can just let MongoDB worry about it in one sweep and keep my usage information close to real-time. I'm storing only the counters (plus a capped 5GB collection for my netflow with a user column, just for curiosity), meaning my storage requirements are pretty small, but if this becomes the bottleneck I can easily chuck more nodes into my cluster and shard out my datasets fairly easily.
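The upsert-with-increment pattern described above is a single call in MongoDB. A minimal sketch of what such an update could look like, noting that the counter fields and document key here are illustrative, not the actual schema:

```python
# Sketch of the atomic upsert + $inc pattern: one call either creates the
# counter document or increments it in place, with no read-then-write
# round trip and no explicit locking in application code.

def daily_counter_update(user, day, nbytes, npackets):
    """Build the (filter, update) pair for a per-user daily counter upsert."""
    filt = {"_id": {"user": user, "day": day}}
    update = {"$inc": {"bytes": nbytes, "packets": npackets}}
    return filt, update

# With pymongo this would be applied as:
#   db.user_daily_counter.update_one(filt, update, upsert=True)
# MongoDB creates the document the first time it sees (user, day) and
# atomically increments the counters on every subsequent flow.
```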
As far as the system being able to handle a lot of throughput, I *think* I've hit the nail on the head. The BGP/netflow collectors (pmacct suite) can be load-balanced if they become a bottleneck; the message queue just needs a lot of RAM, but there are ways to scale it out; my Mongo cluster can scale out if that starts being a bottleneck; and I can always spin up more consumers to process more netflow if they can't keep up with the traffic. I know with only a few consumers I can currently handle over a hundred Mbit/s of netflow, but I still need to optimise the code so that it can cache some local counters and only hit the database in batch every 30s or so, so I don't think this will be a problem. I've been running this for the past two weeks and it seems very usable (still a few rough patches, but I'm just working on the interface now).
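The local-counter batching mentioned above could be sketched like this; a simple illustration assuming a flush callback that issues the batched database upserts, with the clock injected so the interval logic is testable (all names are hypothetical):

```python
import time

# Sketch of local counter batching: accumulate increments in memory and
# only hit the database once per flush interval, in one batch.

class BatchedCounters:
    def __init__(self, flush, interval=30.0, clock=time.monotonic):
        self._pending = {}        # counter key -> accumulated bytes
        self._flush = flush       # called with the dict of pending increments
        self._interval = interval
        self._clock = clock
        self._last = clock()

    def add(self, key, nbytes):
        """Record an increment; flush the batch if the interval has elapsed."""
        self._pending[key] = self._pending.get(key, 0) + nbytes
        if self._clock() - self._last >= self._interval:
            self._flush(self._pending)  # e.g. bulk $inc upserts into Mongo
            self._pending = {}
            self._last = self._clock()
```

The trade-off is the usual one: a longer interval means fewer database round trips but counters that lag further behind real time, and up to one interval's worth of increments lost if the consumer dies before flushing.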
It's great to see so much interest in this little project! I'm going overseas for the next two weeks or so but when I get back, I'll definitely be cleaning it up a bit and putting it up on github with a blog post in the next month or so, so watch this space I guess.
Thanks again,
- Scotty O
On 27/03/2014, at 1:35 PM, Mark Currie <MCurrie at laserfast.com.au> wrote:
> There are UTMs which can associate data consumption to AD for a standalone business (such as the Sophos UTM or Bluecoat); I think Scotty is talking more ISP-grade?
>
> Mark Currie
>
>
> -----Original Message-----
> From: AusNOG [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of
> Scott O'Brien
> Sent: Thursday, 27 March 2014 11:53 AM
> To: ausnog at lists.ausnog.net
> Subject: [AusNOG] User-Aware Netflow
>
> G'Day Noggers,
>
> Long time loiterer, first time poster here. At the organisation I've been working at, we've had a requirement to attribute traffic (and the type of traffic) back to a user. Not being able to find any open source stuff to do this, I decided to build one.
>
> I've been building a tool that makes use of pmacct to put netflow and BGP attributes (namely community and AS path) into a central message queue (RabbitMQ). The tool is basically a set of consumers that listen on a user-auth message exchange and have access to auth history in my MongoDB cluster. When a flow comes in, I can add the user who held the destination IP address at the time to the netflow record before storing it in my database, and increment the appropriate counters in Mongo. I'm now working on a front-end (in Meteor) that shows information on the traffic and per-user usage in near real-time.
>
> There's a little bit of work now to abstract the tools I've built such
> that it's easy to use for the wider community. I'm curious, is this
> style of IP based user-attribution something that people want/need?
> How are others tackling this problem? (I know proxies are popular.)
> If there's a demand for it, I'll abstract it, clean it up a bit and
> put it up on Github but only if it's an area people have found
> lacking. Ideas and suggestions welcome :-)
>
> Cheers,
> - Scotty O'Brien
>
>
> _______________________________________________
> AusNOG mailing list
> AusNOG at lists.ausnog.net
> http://lists.ausnog.net/mailman/listinfo/ausnog
>