Defending Against RYUK


It has been exactly four weeks since Homeland Security, the National Guard and LA DoE scheduled an emergency phone conference with all Technology Directors in the state of Louisiana.

During this briefing, we were informed that 6 school districts and 2 government agencies had been attacked by ransomware known as RYUK. The immediate reaction was frightening, as the governor of Louisiana declared a state of emergency. We were told to shut down internet access and remove local admin rights until further notice.

Keep in mind, we were two weeks out from the start of school (smart timing on RYUK's part). We still had to deploy hundreds of Chromebooks, install projectors, finalize surveillance installs, and manage several other projects in our department.

A day passed before we received a strategic game plan from Homeland Security that detailed several phases of security implementations. Phase 1: turn off all internet access. That is hard to do when you're trying to deploy devices, run updates, and prepare for 150 staff members coming back to campus…

I’ll explain the technologies and how everything works later in the blog.

We spent a week tightening up the ship: blocking internet access with firewall rules, trying to get offsite backups working, deploying devices, installing software… we were extremely reliant on the internet.

Services were breaking constantly, as expected when you turn off the internet (LOL, if I don’t laugh, I’m crying). My boss could see the stress on our department and offered full support to us while we navigated these high seas. I have to say, I have one of the most supportive bosses in the world (Shout out)!

She granted the additional resources necessary to tackle this oncoming storm.

Four weeks and 600+ hours between two employees later, we now have all systems patched, local admin removed, and machines wiped and redeployed. In addition, every member of our organization has been trained on identifying phishing attacks, and the entire network is locked down according to the recommendations made by Homeland Security.


The Technical


Known threats to block

deny any any 84.146.54.187/32
deny any any 75.147.173.236/32
deny any any 218.16.120.253/32
deny any any 170.238.117.187/32
deny any any 195.123.237.129/32
deny any any 194.5.250.123/32
deny any any 85.204.116.158/32
deny any any 31.184.254.18/32
deny any any 186.10.243.70/32
deny any any 104.20.209.21/32
deny any any 445
deny any any 447
deny any any 449
deny any any 8082
deny any any 16993

Homeland Security identified RDP (port 3389) and email (ports 80/443) as the two primary vectors of initial infection.
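Our Meraki gear doesn't give you a CLI for this, but if you're on a traditional Cisco router or firewall, the same indicators could be expressed as an extended ACL along these lines. This is just a sketch: the ACL name is mine, I'm assuming TCP for the port blocks (the list above doesn't specify a protocol), and you still have to apply the ACL to the right interface and direction yourself.

ip access-list extended BLOCK-RYUK-IOC
 ! known-bad hosts from the list above (repeat for the remaining addresses)
 deny ip any host 84.146.54.187
 deny ip any host 75.147.173.236
 deny ip any host 218.16.120.253
 deny ip any host 170.238.117.187
 ! known-bad ports (TCP assumed)
 deny tcp any any eq 445
 deny tcp any any eq 447
 deny tcp any any eq 449
 deny tcp any any eq 8082
 deny tcp any any eq 16993
 ! let everything else through - this is a blocklist, not the "deny everything" approach below
 permit ip any any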


How we “turned off” the internet

Using the firewall's “deny any any” and manually adding 40+ pages of “trusted” IP addresses was not an option for us. It was extremely time consuming and impractical. I often fat-fingered IP and port numbers. I broke everything. I wish Meraki allowed me to use a CLI for this type of task. Luckily, Meraki had a second option for us.

Meraki offers Content Filtering, which allows you to blacklist everything (*) and whitelist URLs. I chose this option. After blacklisting the entire internet with (*), I was able to whitelist common sites much more efficiently.

Anything ending in .gov or .edu was whitelisted, though not blindly; every other site had to be whitelisted individually as the need came up. Aside from the constant adding, the process is very easy.

All traffic is now triple-filtered through leading Cisco, Google, and Meraki products. Between dual content filtering, IPS/IDS, and AMP screening, our traffic has been relatively clean, to say the least.

Meraki also let us filter traffic by country, which allowed us to block traffic from countries we have no business communicating with or through.

Anti-virus

We commissioned a new AI-based product to help protect all of our servers, faculty, and staff, hoping that the vendor's experience with the recent attacks will help prevent attacks on our network.

Advanced email filtering & quarantines

Google allows us to enable advanced email filtering and quarantines. I've enabled every feature that flags suspicious emails, and I've personally trained every employee on proper email usage and what to look for in a message.


As of today, we are not in the clear, but we are in a much better state than we were a month ago. We were given the chance to reflect on our current policies, enforce new procedures, and tighten up security campus-wide. Other organizations were not given the same opportunity.

For anyone out there battling this, please reach out if you need support. This is a beast to navigate and cyber crimes are not going away anytime soon.


References

Center for Internet Security (Homeland Security)

Read about Protecting your network

Read about Emotet Malware

Read about TrickBot

Google Sheets – Asset Management

Do you need an asset manager? Are you a small/medium organization? Then I have the perfect application for you! It’s called Google Sheets.

Get yours here: https://docs.google.com/spreadsheets/d/1c17XY8iywyal_fUr1LBFofWlljrpQEbOs5WW0d9EKBU/copy


Seriously, though, if you have any questions – reach out to me.

My sheet covers Asset Management, Repair Logging, Systems and Network logging, and a user dashboard to see which devices a user has assigned to them at any moment and their repair log history. You know, just in case you need to bill them…

Duplicate IP! – 5.27.2019

Well, I'm dealing with this on my night off. 42 of my Meraki access points are yelling and complaining about not being able to find home, like a bunch of kids shopping with their mom on a hot summer day.

Yeah, I mean, I’m upset too.

I drove 45 minutes to work (yep, I commute)… Upon arrival, I decided to get my priorities straight, so I opened Spotify and played my favorite playlist of aggressive music (lots of hip-hop).

I then started to TSHOOT by logging into Meraki > Wireless > Monitor > Access Points, where I checked whether any errors were still populating. They were.

I immediately decided I needed to verify whether any devices had been added to or removed from my network by matching the dates the alerting started against my ticket queue. We had decommissioned a few network devices, but we had made zero network changes.

Phase II: I RDP'd into my DHCP and DNS server to validate the AP IP addresses. All checked out. I then reviewed DHCP for any “Bad Addresses”. I had 50+ “Bad Addresses”… Yeah, that's an issue. They were all on the same VLAN (20) that Meraki was reporting DHCP failures on (5/5 transmit failures on VLAN 20).

Okay, so since nobody was on campus, I deleted the “Bad Addresses” to see if we had a stuck entry or a caching issue. Most of the entries did not come back. Great. Moving on.

Phase III: I panned over to my DNS server. Wow, okay, I have a lot of cleanup to do… PTR entries from 2016!! Okay, I deleted most of those entries (since I knew they were not needed). I checked AP status: we're almost there, and APs are starting to come online.

I then went back to DHCP and refreshed the list to see if any entries had been updated. Welp, there she was… ap0016xx.domain.com with a VLAN 20 IP address… I don't know about you, but I don't put my access points on access VLANs. APs belong on the network VLANs.

I take the device name and search Meraki, bing! It pops up immediately with a conflicting IP address! I trace the source port and disable the switchport. The AP goes offline. I refresh my Meraki dashboard and continue to delete the remaining “Bad Addresses” from my DHCP.
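For anyone curious what that trace looks like on the Cisco side, it's the usual ARP-to-MAC-to-port chase. The IP, MAC, and interface numbers below are made up for illustration:

! find the MAC behind the conflicting IP, then the port that MAC lives on
show ip arp 10.20.0.115
show mac address-table address 0011.22aa.bbcc
! follow the uplink down to the access switch, then shut the offending port
configure terminal
 interface GigabitEthernet1/0/24
  description DISABLED - duplicate IP investigation 5.27.2019
  shutdown
 end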

Success! All APs are online.

I then physically traced down the rogue “AP” and found that it was actually coming from our intern VLAN, which has a DHCP and print server on it… The dated DNS records were giving our intern server an old Cisco AP name! Several things could have prevented this issue; either way, it was a great reminder that we must stick to our “maintenance” schedules and keep the network as clean as possible with regular updates and checks of all systems.



CyberSec & Fraud – 5.22.2019

I attended a Cyber Security and Fraud conference today with Special Agent Eric from the white-collar crimes division in New Orleans. It was very insightful to learn about the immediate attacks and pressures we face.

To start, there are countries with written agendas to target countries like ours (the USA). With banks tightening their security and the protocols around their processes, hackers and criminals are being forced to move their operations directly to the clients. It's great that banking institutions are cracking down on security (passwords, encrypted communications, malware/adware/ransomware detection and scanning, and phishing simulations).

With the immediate threat coming from foreign nationals, security agencies like the CIA, the FBI, and local police need to move quickly to identify and target these criminals. $350,000,000 was criminally taken through the banking system (I believe that figure is Louisiana alone), 76% of which was recovered. That's a great recovery rate! At the same time, it still leaves a lot of money that was not recovered!

The FBI set up https://www.ic3.gov/default.aspx for community members to report suspected criminal activity. The quicker you report, the quicker they can deploy their task forces to combat the criminals.


Don't let this overwhelm you; there are preventative measures you can take to help combat these issues. One, keep all systems patched and updated, especially Windows environments. Two, training is the second-best effort for protecting your data. The majority of criminals gain access (directly or indirectly) through email spoofing and spear phishing. They breach your account or a vendor's, then monitor the accounts for months. Once they feel they are ready, they can “mimic” your rhetoric and attack others in your contact list.



I oofed… – 5.10.2019

Today, I started the day off with an oof…

Picture this: Friday morning, I'm in the office early (7 a.m.) to work on some configs before the campus starts filling up with admin, faculty, and students. I grabbed my coffee, sat down at my desk, and logged into my PC.

I was feeling confident. I had been studying for my CCNP and felt like I knew my environment like the back of my hand, so I launched SuperPutty and configured my session (10.2.2.75) over SSHv2. I proceeded to check my current interface status:

#show interface status
The output showed several things that needed to be updated. To list a few: out-of-date descriptions, port-channel groups on non-redundant switchports, a dated hostname, and various open ports that should be disabled.

So I addressed the quick and dirty: renamed switchport descriptions, disabled unused ports, moved down ports to an arbitrary parking VLAN in case they ever came back online, and fixed my “archive” job that automatically backs up my config to a local TFTP server. I know, I should be using SFTP, but I don't know how to! Teach me.
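If it helps anyone, here's roughly what that quick-and-dirty cleanup looks like in IOS. The interface number, the parking VLAN, and the TFTP address are all made up for illustration; the archive block is the piece that copies the config to TFTP every time you write mem:

configure terminal
 interface GigabitEthernet1/0/12
  description PARKED - unused 5.10.2019
  switchport access vlan 999
  shutdown
 exit
 archive
  ! $h expands to the device hostname in the backup filename
  path tftp://10.2.2.50/$h-config
  write-memory
 end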

Moving forward, I wanted to tackle the port-channels. I logged into both switches (the core and the adjacent switch) and proceeded to remove the fiber uplinks from their assigned channel-groups. Well, I started with the core…

That was a mistake; it completely threw my switchports into an error state. It's now 8 a.m., students are rolling in, and teachers are starting to check students into our Student Information System (SIS).

Then all the phones go offline, my primary DC (domain controller) goes offline, printers stop working, and several people call to tell me the internet is not working!

It’s because I blocked Google. ahahaha, Just kidding.

I quickly realized that the originally configured switchports were now in an errdisable state! I oofed! First off, I should have waited until the end of the day to make these changes so this sort of thing could be handled without causing a service outage. Second, I needed to be in two places at once so I could cycle both ports, prevent loops, and verify they were both up and running, but I was alone. My colleague was still on his way in for the day.
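For anyone who hits the same wall, this is roughly how you confirm and clear it from the CLI. The interface numbers are hypothetical, and the errdisable recovery line is optional and platform-dependent; bouncing the ports with a shut/no shut is the manual fix:

! see which ports got err-disabled and why, and check the channel-group state
show interfaces status err-disabled
show etherchannel summary
! bounce the uplinks to clear the err-disable condition
configure terminal
 interface range TenGigabitEthernet1/0/1 - 2
  shutdown
  no shutdown
 exit
 ! optionally let the switch auto-recover misconfigured channels after the timeout
 errdisable recovery cause channel-misconfig
 end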

After several minutes of frustration, running back and forth across an 18-acre campus, I finally did it. I cycled the ports and waited two minutes for the entire core to come online (I did a full restart), and we were finally back up. I then tested the DC to make sure it was happy, pinged the phone system, and validated that the printers were operational again.

We were back online after 15 minutes of outage due to my stupidity. Reflecting on my ID-10-T error, I realized I should have waited until the end of the day, notified my headmaster that I wanted to make the changes, scheduled the change, and then proceeded with a planned change. Remember to stay humble and grounded. You can prevent these mistakes.

Lesson learned.