CDP Neighbor – 6.2.2019

Technology: CDP Neighbor

What does this technology do? CDP (Cisco Discovery Protocol) neighbor commands identify the devices that are directly connected to a Cisco device.

Use case? If you don’t have physical access to an adjacent switch, you can use CDP Neighbor to identify the device on a specific port. 

Basic Command:
#show cdp neighbors

Full Command:
#show cdp neighbors [interface {ethernet slot/port | mgmt mgt-num}] [detail]
  • interface – Shows CDP neighbor info for the specified interface. 
  • ethernet – Shows CDP neighbor info for an Ethernet interface. 
  • mgmt – Shows CDP neighbor info for the management interface. 
  • detail – Shows detailed information about CDP neighbors. 
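
For example, to narrow the output down to a single uplink and get the full detail, you can combine the options above (the slot/port here is only a placeholder – use your own):

#show cdp neighbors interface ethernet 1/1 detail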

My lab: 

How I used it: 

In today's lab, we will use CDP Neighbor commands to determine which devices are directly connected to the MainDistribution switch, from within the CLI of MainDistribution itself. It's obvious that the AccessLayer switch and the EdgeRouter are directly connected; however, we are not always working in lab environments. In a real-world application, the AccessLayer switch may be several hundred feet away. Understanding CDP Neighbor commands will help us identify exactly which devices are adjacent to us in the network. 


To start, I powered on all of my network devices. Once they booted, I logged in and ran the CDP Neighbor command:

#show cdp neighbors

From here you can see the “Local Intrfce” and the “Port ID” columns. The Local Interface is the port on the switch you are working on that connects to the remote device; the Port ID is the port on the remote device's side. So MainDistribution (Gig 0/2 under “Local Intrfce”) is directly connected to the AccessLayer switch (Gig 2/1 under “Port ID”). 
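
Since the screenshots don't translate here, the output looks roughly like this – the holdtime, capability, and platform values are illustrative, not my exact lab capture:

Device ID        Local Intrfce     Holdtme    Capability  Platform    Port ID
Switch           Gig 0/2            155            S I    WS-C3560    Gig 2/1

Notice that the Device ID shows up as “Switch” – more on that in a moment.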


Now, you may be asking, how do you know that the adjacent device is the “AccessLayer”? Well, based on the previous image, you cannot unless you know the environment very well. Let me explain. 


The “Device ID” column shows the adjacent device's hostname. If the hostname is configured and you understand the name, then you will be able to identify the adjacent switch. Take a look: 

I changed the hostname of the adjacent device so that you can see the difference between screenshots. In my first image, the Device ID said “Switch”, which is the default hostname. Since I changed it, you can now see “AccessLayer” as the Device ID for the connected device. 
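
For reference, that change is a single line in global configuration on the adjacent switch:

(config)# hostname AccessLayer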


Now that you can identify the adjacent device, the local port number, and the adjacent port number, we can spend some time on the “Holdtme” column and on what to do if the CDP command isn't showing anything. 


“Holdtme” means Hold Time: the length of time the switch will keep a neighbor's entry before discarding it (think “Time To Live”). You can change it with the following command (default = 180 seconds): 

(config)# cdp holdtime 60

I personally prefer shorter times, but be careful: aggressive CDP timers across a lot of devices add management traffic and CPU/RAM overhead, and the holdtime should stay longer than the advertisement interval (set with cdp timer, 60 seconds by default). You can always shorten the timers while you're troubleshooting and set them back when you're done. 
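
If you want to double-check what a device is currently using, show cdp prints both the advertisement interval and the holdtime. With the defaults it looks like this:

#show cdp
Global CDP information:
        Sending CDP packets every 60 seconds
        Sending a holdtime value of 180 seconds
        Sending CDPv2 advertisements is enabled
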
Finally, if show cdp neighbors isn't returning anything, CDP may simply be disabled on your devices. Re-enabling it is easy: turn it on globally with cdp run, and per interface with cdp enable.  

(config)# cdp run
(config-if)# cdp enable

Reference:

Cisco.com

Duplicate IP! – 5.27.2019

Well, I'm dealing with this on my night off. 42 of my Meraki access points are yelling and complaining about not being able to find home, like a bunch of kids dragged along shopping with their mom on a hot summer day.

Yeah, I mean, I’m upset too.

I drove 45 minutes to work (yep, I commute)… Upon arrival, I decided to get my priorities straight, so I started Spotify and played my favorite playlist of aggressive music (lots of hip-hop).

I then started to TSHOOT by logging into Meraki > Wireless > Monitor > Access Points, where I confirmed whether any errors were still populating. They were.

I immediately decided that I needed to verify whether I had added or removed any devices from the network by matching the date the alerting started against my ticket queue. We had decommissioned a few network devices, but we made zero network changes.

Phase II, I RDP’d into my DHCP and DNS server to validate the AP IP addresses. All checked out. I then reviewed DHCP for any “Bad Addresses”. I had 50+ “Bad Addresses”… Yeah, that’s an issue. They were all on the same VLAN (20) that Meraki was claiming DHCP failures on (5/5 transmit failures on VLAN 20).

Okay, so I deleted the “Bad Addresses” (nobody was on campus) just to see if we had a stuck entry or a caching issue. Most of the entries did not come back. Great. Moving on.

Phase III, I panned over to my DNS server. Wow, okay, I have a lot of cleanup to do… PTR entries from 2016!! Okay, I'll delete most of those entries (since I knew they were not needed). Checked AP status; we're almost there – I'm starting to see APs come online.

I then decide to go back to DHCP and refresh the lists to see if any entries have been updated. Welp, there she was… ap0016xx.domain.com with a VLAN 20 IP address… I don't know about you, but I don't put my access points on access VLANs. APs belong on the network VLANs.

I take the device name and search Meraki, bing! It pops up immediately with a conflicting IP address! I trace the source port and disable the switchport. The AP goes offline. I refresh my Meraki dashboard and continue to delete the remaining “Bad Addresses” from my DHCP.

Success! All APs are online.

I then physically traced down the rogue AP in my environment and found that it was coming from our intern VLAN, which has a DHCP and print server on it… The dated DNS records were giving our intern server an old Cisco AP name! Several things could have prevented this issue; it was a great reminder that we must stick to our maintenance schedules and keep the network as clean as possible with regular updates and checks of all systems.



Carpal Tunnel & RSI – 5.17.2019

Well, today is one year too late. I really feel like I've been abusing myself by not taking care of my RSI (Repetitive Strain Injury), which has now developed into carpal tunnel.

My daily commute (where I put pressure on the wrist), the 8+ hour work days (where I hold my mouse improperly), and the additional computer time at home have all taken a toll on my right arm and wrist.

Now of course, this is a self-diagnosis. I have not spoken to a trained professional, except for my family friend who is a licensed chiropractor. She has guided me for several months, reducing the strain on my wrist by nearly 80%! This is huge. However, it's still unbearable. I'm currently scheduling doctor visits with our local “Hand Center of Louisiana” to get my wrist checked out.

I'm feeling nervous, but less than I was eight months ago. I decided to postpone this procedure because I thought it would take several months of recovery. After speaking with a colleague, it's now apparent that initial recovery takes about two hours, functional recovery about four days, and complete recovery about three months. That's not bad.

If you're experiencing pains in your wrist, please consider getting ergonomic equipment such as a standing desk and a vertical mouse! If you do not want to stand, then invest in a proper chair that can support your body type, with the option of adjusting between 90 and 130 degrees to allow for proper blood flow.

Seriously, take care of yourself. Invest in proper equipment, good food, and study materials!

Edit: I visited a doctor to undergo extended rehab. We identified about eight (8) areas of concern that have all led to the issues in my arm and wrist. Seriously, it starts in my ankles and progresses through the hips and up the spine… Please spend the time to take care of yourself with daily stretching and ergonomic checkups. Doing this can help you avoid serious, irreversible damage to your body.

I oofed… – 5.10.2019

Today, I started the day off with an oof…

Picture this: Friday morning, I'm in the office early (7am) to start work on some configs before campus fills up with admin, faculty, and students. I grab my coffee, sit down at my desk, and log into my PC.

I was feeling confident. I had started studying for my CCNP and felt like I knew my environment like the back of my hand, so I launched SuperPutty, configured my session (10.2.2.75) over SSHv2, and proceeded to check my current interface status:

#show interface status
Performing the show interface status command showed that I had several things that needed to be updated. To list a few: out-of-date descriptions, port-channel groups on non-redundant switchports, a dated hostname, and various open ports that should be disabled.

So, I addressed the quick and dirty: renamed switchport descriptions, disabled unused ports, moved down ports to an arbitrary VLAN in case they ever came back online, and fixed my “archive” setup that automatically backs my config up to a local TFTP server. I know, I should be using SFTP, but I don't know how to! Teach me.
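
For anyone curious, the “archive” piece is only a few lines of config. This is just a sketch – the TFTP server address and the daily timer below are placeholders, not my production values:

! the TFTP server below is a placeholder – point it at your own
(config)# archive
(config-archive)# path tftp://192.0.2.10/$h-config
(config-archive)# write-memory
(config-archive)# time-period 1440

The $h token expands to the device hostname, write-memory archives a copy every time the config is saved, and time-period 1440 also pushes a copy once a day.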

Moving forward, I wanted to tackle the port-channels. I logged into both switches (the core and the adjacent switch) and proceeded to remove the fiber uplinks from their assigned channel-groups. Well, I started with the core…
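
For context, pulling a physical port back out of a bundle is just the no form of the channel-group command – something along these lines, where the interface name is a placeholder and not my actual uplink:

! placeholder uplink interface – repeat the same on the far-end switch
(config)# interface TenGigabitEthernet1/0/49
(config-if)# no channel-group

Doing that on only one side of the link at a time is exactly how I ended up where the next paragraph starts.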

That was a mistake; it completely threw my switches into an error state. It's now 8AM, students are rolling in, and teachers are starting to check students into our Student Information System (SIS).

Then all the phones go offline, my primary DC (domain controller) goes offline, printers aren't working, and several people are calling to tell me that the internet is not working!

It’s because I blocked Google. ahahaha, Just kidding.

I quickly realized that the originally configured switchports were now in errdisable state! I oofed! First off, I should have waited until the end of the day to make these changes so that this sort of thing could be mitigated without causing a service outage. Second, I needed to be in two places at once so that I could cycle both ports, prevent loops, and verify that they were both up and running, but I was alone. My colleague was still on his way in for the day.

After several minutes of frustration, running back and forth across an 18-acre campus, I finally did it. I cycled the ports and waited two minutes for the entire core to come back online (I did a full restart) – we were finally online. I then tested the DC to make sure it was happy, pinged the phone system, and validated that the printers were operational again.
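
For anyone who wants the mechanics, “cycling the ports” was nothing fancier than finding the err-disabled interfaces and bouncing them – the interface name below is a placeholder:

#show interfaces status err-disabled
! placeholder interface – bounce both ends of the link
(config)# interface GigabitEthernet1/0/49
(config-if)# shutdown
(config-if)# no shutdown

(IOS also has errdisable recovery for automatic re-enablement, but a manual bounce was the fastest fix in the moment.)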

We were back online after 15 minutes of outage due to my stupidity. Reflecting on my ID-10-T error, I realized that I should have waited until the end of the day, notified my headmaster that I wanted to make the changes, scheduled the change, and then proceeded with a planned change. Remember to stay humble and grounded. You can prevent these mistakes.

Lesson learned.

Starting Point 2.0 (Eve-NG installation) – 5.3.2019

Umm.. yeah, bro. I had a blog before. I had different priorities at the time, but now I’m back at it. You got beef? Or are you vegan?

Today, I have Eve-NG configured on the Google Cloud Platform… This ended up being a total waste of time. More about this later (See my GNS3 post).

My total cost was roughly $90/month. By shutting the server down when I didn't use it, I got that down to about $20/month… but it became more of a hassle than it was worth. I deleted my compute instance and moved on.

eve-ng.net

I subscribed to INE's All Access Pass. This was a great deal for me: I paid $300(ish) about three years ago for their CCNA class alone, and this gets me all of their classes for $99/month. I plan to have my employer pick up the tab in the new year if I end up liking the subscription.


If you’re installing Eve-NG on Google Cloud Platform, you may need to use the following:

## Community Edition installation repo command
wget -O - http://www.eve-ng.net/repo/install-eve.sh | bash -i 

If you’re like me and pre-installed the pro version, then you’ll need this:

## To roll back from EVE-NG Pro to the Community Edition, issue the following commands in the CLI of EVE

> apt install eve-ng eve-ng-guacamole

> systemctl disable docker

> systemctl disable docker.service

> systemctl stop docker.service

> systemctl disable udhcpd

## Reboot EVE

Finally, do not forget about the license files!!

https://www.eve-ng.net/documentation/howto-s/62-howto-add-cisco-iou-iol