Category Archives: Server Fixes

Blocking Shodan | Keeping shodan.io in the dark from scanning

For a while now I have been interested in keeping Shodan's paws off my equipment, both at home and in the cloud. While doing a search I noticed a number of other tech people who feel the same way. While there are bits of information on how to do this scattered across the internet, I thought I would combine them into a single post.

In the last few days of writing this post there has also been a massive number of MongoDB installs that have been hacked. For more info on preparing for data breaches, see my previous post on the 3-2-1-0day rule for backups. While Shodan is not responsible for this, generating a target list via their service is trivial for whatever service you have an exploit for. So it may not be a bad idea to try and keep away from the all-seeing eye that Shodan is. While there are arguments on both sides, that Shodan helps identify issues as well as identify targets, I think it's best if we had the option to opt out. Thus,

The Definitive Guide to Blocking Shodan from scanning.

First we need to identify the list of IPs that Shodan sends scans from; these are commonly their census servers, but scans can come from other hosts they control as well. Below is a list of the domains and IP addresses I have collected online and observed scanning my equipment.

census1.shodan.io 198.20.69.72 - 198.20.69.79 US
census2.shodan.io 198.20.69.96 - 198.20.69.103 US
census3.shodan.io 198.20.70.111 - 198.20.70.119 US
census4.shodan.io 198.20.99.128 - 198.20.99.135 NL
census5.shodan.io 93.120.27.62 RO
census6.shodan.io 66.240.236.119 US
census7.shodan.io 71.6.135.131 US
census8.shodan.io 66.240.192.138 US
census9.shodan.io 71.6.167.142 US
census10.shodan.io 82.221.105.6 IS
census11.shodan.io 82.221.105.7 IS
census12.shodan.io 71.6.165.200 US
atlantic.census.shodan.io 188.138.9.50 DE
pacific.census.shodan.io 85.25.103.50 DE
rim.census.shodan.io 85.25.43.94 DE
pirate.census.shodan.io 71.6.146.185 US
inspire.census.shodan.io 71.6.146.186 US
ninja.census.shodan.io 71.6.158.166 US
border.census.shodan.io 198.20.87.96 - 198.20.87.103 US
burger.census.shodan.io 66.240.219.146 US
atlantic.dns.shodan.io 209.126.110.38 US
blog.shodan.io 104.236.198.48 US *
hello.data.shodan.io 104.131.0.69 US
www.shodan.io 162.159.244.38 US **
private.shodan.io, ny.private.shodan.io 159.203.176.62
atlantic249.serverprofi24.com 188.138.1.119 ***
sky.census.shodan.io 80.82.77.33
dojo.census.shodan.io 80.82.77.139

Last updated: 2017-05-25

*Probably not a scanner
**Their main website; don't block it before running the tests below, or at all if you still need to reach it
***Consistently appeared when forcing a scan on my own host, details below

Now how can you trust that these are the IP addresses owned by shodan.io and not randomly selected by just reversing DNS? Easy!
Shodan does not want you to know where its scanners are located on the internet, and this makes sense since their business model revolves around it. To help hide the server IPs they scan from, Shodan automatically censors its own IP addresses in results. Here is a random example of what the returned data looks like:

They replace their own IPs with xxx.xxx.xxx.xxx, and this is done before we ever get the data. Even if you have raw firehose access to the scan results, it is still censored prior to being handed to the customer.

(example from firehose demo on their blog)

Due to this we can simply search Shodan itself for any IP or domain name we think is operated by a Shodan scanner! They will appear as censusN.xxx.xxx.xxx.xxx, see the example below.

That's great, but how do I check and make sure that Shodan cannot reach my host?
First, block the IPs listed. I would recommend you check them first to ensure they are up to date, but as of 2017-01-12 this is the most complete and accurate list available compared to the older postings I have found.
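How you block them depends on your setup. As one example, on a Windows host the ranges can be dropped with a single Windows Firewall rule from an elevated PowerShell prompt; this is only a sketch, the rule name is arbitrary and only a few of the ranges from the table above are shown, so substitute the full list or translate it to iptables or your edge firewall:

# Block inbound traffic from known Shodan scan ranges (partial list, see the table above)
New-NetFirewallRule -DisplayName "Block Shodan scanners" -Direction Inbound -Action Block -RemoteAddress "198.20.69.72-198.20.69.79","198.20.69.96-198.20.69.103","198.20.70.111-198.20.70.119","198.20.99.128-198.20.99.135","93.120.27.62","71.6.146.185"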

Then you have two options: you can sign up for a paid shodan.io account and force a scan on your host, or you can simply wait and check your IP periodically from the web interface for free at https://www.shodan.io/host/ [ip here], under the Last Update field.

Since I am already a paid Shodan member I can test my block list right away. This is done by installing the Shodan command-line interface; instructions can be found here.

Once installed, you want to initiate an on-demand scan of your IP. A working example can be found below:
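With the Shodan CLI the scan kicks off with something along these lines (the API key and IP are placeholders):

shodan init YOUR_API_KEY
shodan scan submit 203.0.113.10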

But if you have successfully blocked Shodan you will see the following alert when attempting the scan; the left is my terminal, the right is the firewall dropping the connection.

Testing over multiple days I always got the same result.

To ensure it was not just that I had scanned too close together, I tested another one of my hosts that had not been blocked and its Last Update was close to real time. You can also check when your host was last scanned using the following command:
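With the CLI, roughly the following does it (the IP is a placeholder); the Last Update field in the output shows the date of the most recent successful scan:

shodan host 203.0.113.10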

You can see that since putting my IP block in place I have not been scanned by either of the two manual attempts. The dates listed also show when you were last scanned successfully, and when Shodan first picked up your MongoDB or whatever else you run on that IP.

Shodan is definitely a useful tool, and will help admins who don't realize what is exposed to the internet find their weak points. It is also very useful for vulnerability assessments and gathering metrics about services across the internet as a whole. But, like all good things, it is also used by people who want to exploit the data within for personal gain or entertainment.

There are literally hundreds of thousands of interesting and exploitable items on Shodan, just don't be one of them.

3-2-1-0 Rule for Backups | A new take on 3-2-1 Backups

I would like to take a look at the 3-2-1 rule for backups that is commonly taught and ingrained into memory in Networking 101 and Computing 101 classes.

The basic rules of 3-2-1 still seem relevant in today's day and age and have saved numerous companies millions of dollars (see Pixar needing to go to an employee's home PC in order to save Toy Story 2: https://www.youtube.com/watch?v=8dhp_20j0Ys). But I want to talk about the new, darker rule: 3-2-1-0. In order to do that we need to know what 3-2-1 stands for.

Trend Micro, Rackspace, and Veeam define the 3-2-1 rule as:

3 – Have at least three copies of your data. 
2 – Store the copies on two different media.
1 – Keep one backup copy offsite.

However, in today's world we need to consider the new (as in fresh off the press) 3-2-1-0 rule. This new version even comes with a nifty image:

3 – Have at least three copies of your data.
2 – Store the copies on two different media.
1 – Keep one backup copy offsite.
0 – 0day release: assume someone else has already illegally obtained a copy of your data, or will at some point in the future.

Rule 0 takes into account the fluid nature of how data is stored online today and what we need to do in order to prepare for the eventual disclosure of that data. It could be a user table with passwords from your database, a rogue developer cashing in on a backdoor left in the system, or an unlikely but possible scenario where someone loses an offsite unencrypted backup disk or laptop. Every day a handful of leaks are added to the public domain, some new, some old. But this is the world we live in.
This rule calls for a plan to be in place that covers the following topics:

Response: What are the first actions a company would take after confirming or assuming their data has been compromised?
– Will services continue to operate during the Validation, Next Steps, and Review process? What are the risks of leaving the system live?
– Who are the groups that need to be alerted? (Company stakeholders, users, partner orgs, etc.)
– Acquiring and validating the data dump itself. Will the company purchase the data from a darkweb vendor or pay for access to a forum if necessary to confirm the data is from their own system, or is it readily available online?
– Were we notified by a 3rd party asking about a bug bounty? Have there been recent Twitter threats that now need to be considered as having truth to them?

Validation: Checking the data that you have acquired.
– Does the data align with the current data you have, or does it appear to be a fake? (Same type of hashing method, same users, same tables.)
– Does the data contain any unique information to confirm that it was stolen from you, such as unique system accounts or passwords?
– Was the data taken recently? (Compare the number of users, compare the password policy, timestamps of logins.)
– If the data was not taken recently, how long could it have been traded online prior to going public?
– Do any of the passwords not match the password policy set out by the company? (May indicate the passwords are from another source.)

Next Steps: What to do now that you have validated the data.
– Roll out password resets.
– How was the data obtained? (SQLi, account stuffing, 3rd party websites)
– Prepare a statement for the media and users. The statement should be written by someone in IT, not marketing, and contain accurate information regarding the breach, not generic information on password hygiene.
– Compare and/or restore the data to ensure that nothing was left behind or tampered with.
– What information can be harvested from steps 3, 2, and 1 that would assist in identifying the type of attack? This helps in the event the logs have been cleared.
– Issue takedown requests on existing dumps and look into vendor reputation services to automate the rest. Set up Google Alerts if you do not already have a social monitoring service.
– Do I need to blacklist any of my backups where data may have been tampered with or where security holes have been left unpatched?

Review phase: take a breath.
– Can we attribute (lol) this attack to anyone? Competitors, script kiddies, China?
– How were we identified as a target? (For example: checking to see if you were listed on Pastebin with a number of other hosts vulnerable to similar exploits.)
– What type of encryption was used, was it sufficient, and how difficult is it to implement a higher level of security in the event the data is taken again in the future?

To date the 3-2-1 rule has been about protecting the data you have onsite, ensuring the reliability of those backups against data loss, and offering guidelines on the media types to store them on.
But I hope the 3-2-1-0 rule will bring to light some subjects that companies may not have thought about regarding someone else having a 'backup' of their data.


There may just come a day when you will be buying user data back from a nefarious party just so you can validate that you were not hacked and the information is false; in my opinion this simply comes down to brand reputation.

Let's Encrypt | How the future of SSL has come to the penniless.

A great product called Let's Encrypt will be coming out in the near future. One of the best things about this service is how easy it will be to manage the SSL certificates. Oh, and it's free! That's right web monkeys and hobbyists, stop paying GoDaddy for your SSL certs every year and spend your hard earned money on beer!

Problem: My certificate says it's invalid, or I'm too poor / lazy to buy my own certificate for $100 a year from my current domain provider.

Solution: Use Let's Encrypt in the week of November 16, 2015!

A lot of people may say, who cares? Well, they are wrong. Let's Encrypt will allow people to spend more time developing their product and less time learning the difference between UCC and wildcard certificates, let alone how to make a CSR. In fact the whole renewal process will be automated as well, assuming you're using a compatible OS.

Let's Encrypt is supposed to be so simple to use, in fact, that even people who were marketed Drobos will be able to use it.

The reason I waited so long to post an article about this was the burning question: will it work, and am I required to install an intermediate certificate? (This is sometimes the case with BlowDaddy just to have the client see the certificate as valid.)

Well you can see for yourself on their live test page located here.

I’m looking forward to this forward thinking method of creating a more secure web and will be lined up on the 16th of November to start applying for certificates.

Notes: I do think that learning how SSL certificates work is a great idea, but for those of you who already know, Let's Encrypt is a great way to quickly get your web service online at zero cost.

Moving Exchange Datastore freezes at “validating the destination file paths” | Exchange 2007

So recently I had to move a datastore location to a new disk since the disk it was on was running low on physical space. Doing this is very straightforward and covered in a previous article, however this one was stubborn.

With the database offline I had copied the data to a new disk and verified it matched. But the move stayed on this step for a number of minutes, and I never recall it taking so long before.

Problem: When I move an Exchange datastore it freezes on validating the destination file paths. Unable to move Exchange datastore. Moving an Exchange datastore takes an excessive amount of time.

Looking around online, this step should only take a number of seconds. There was little disk IO, just lots of CPU usage while it stalled. I moved another Storage Group to ensure I was not insane, and it went through right away.

Solution: Remove the log files from the source and destination if you copied the whole folder.
/!\ Always copy log files, never delete them, until you are sure you no longer need them. Always work with the DB offline and test mounting the DB after making a change.

Comparing the folders I noticed there were a large number of these log files sitting around in the same folder as the DB.

Since the DB was offline I moved them to a different disk to ensure it did not need them. I remounted the DB and it came up fine without them. I assume they were left over from some bad migrations we had been working on at the time.
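Moving them aside can be done with something along these lines in PowerShell (the paths are placeholders; only do this with the database dismounted):

# Move stray transaction log files out of the database folder into a holding location
Get-ChildItem "D:\ExchangeDB\StorageGroup1" -Filter *.log | Move-Item -Destination "E:\LogHolding"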

After removing the files I began the migration again but it still took time validating. I then remembered I had copied the whole folder so the logs were in the source and the destination. Moving them out as well and starting the process over yielded a fast migration!

I assume I could have waited for the system to compare the files and so on, but waiting 30 minutes for something that normally takes seconds did not sit well with me on a weekend.

Exchange 2007 keeps dismounting database daily.

Have an Exchange 2007 server? Tired of coming in every day to calls from users because at some point over the night the server has dismounted the database? Then I have the solution for you!

Problem: Exchange Server 2007 will dismount a specific database on a regular basis, forcing me to remount the database manually. This is due to the database growing too large.

How to confirm this is your problem: I would start by looking in the event log for errors like the following:

Event ID: 1216 
Source: MSExchangeIS Mailbox
The Exchange store ‘some custom group\some name‘ is limited to 250 GB. The current physical size of this database (the .edb file) is 256 GB. If the physical size of this database minus its logical free space exceeds the limit of 250 GB, the database will be dismounted on a regular basis.

This is a good indication that your DB is full. Good news is there is a fix.

Solution: First we need to learn about how Exchange frees up space in a DB, as this is important.

Odds are the DB won't dismount again until the daily cleanup run. The schedule for this is located in EMC \ Server Configuration \ Mailbox \ Datastore Name \ Database Name \ Properties

Now we have a good idea that during the next online defrag it might dismount again, as this is generally when it happens, and we have a rough idea of how long we have until we might experience the problem again. Exchange will never shrink the raw file that it has created. If your DB file has hit 256GB then you will always have a 256GB DB file unless you run an offline defrag using ESEUTIL; this creates downtime for the datastore and will take hours to complete.
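For completeness, an offline defrag is run with ESEUTIL roughly like the below, with the database dismounted; the paths are placeholders and you need enough free space for the temporary file:

eseutil /d "D:\ExchangeDB\Mailbox Database.edb" /t "E:\Temp\TempDefrag.edb"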

So how can you reduce the size of the mail in the DB so that tomorrow you can enjoy your cup of caffeine in peace? The fastest and often best answer is to move mailboxes out and spread the data across all the datastores. We now need to find out how large the databases are, and how much white space is in them. Looking at the raw files is not a good estimate since, as I mentioned, Exchange will not shrink the raw file, just the free space in it.

Open up the EMS and drop in the code that I found online. If you're worried about it messing up Exchange then I suggest you look up what the command is doing or take a course on PowerShell. The same rule applies here: inspect code before running it on a production machine.

Get-MailboxDatabase | Select Server, StorageGroupName, Name, @{Name="Size (GB)";Expression={$objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" + $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1048576KB; [math]::round($size, 2)}}, @{Name="Size (MB)";Expression={$objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" + $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1024KB; [math]::round($size, 2)}}, @{Name="No. Of Mbx";expression={(Get-Mailbox -Database $_.Identity | Measure-Object).Count}} | Format-Table -AutoSize

The output can be seen below:

Ah, perfect, we have some room in my other databases that I can use. I can also see another one of my DBs is getting really close to the default limit, so I'll need to keep an eye on it.

Now, when you migrate a mailbox it is down until you finish moving it. From my experience it can take a while to move a lot of mailboxes at once. If you cancel the move of a single mailbox it will take roughly the time elapsed divided by 3 to cancel the task, my guess being that it has to revert the changes. This is just from past experience, so don't expect cancel to stop it that second if users complain.

We should find out which mailboxes are excessively large so we can move them out. Based on your needs you might want to move 50 small mailboxes of users who are not working, as opposed to the CFO's mailbox that is 30GB in size; save that one for the weekend.

To output the list of users sorted by size, put the following command into the EMS:

Get-MailboxDatabase "Storage Group\Database Name" | Get-MailboxStatistics | Sort totalitemsize -desc | ft displayname, totalitemsize

Here hopefully you can identify some large mailboxes not in use that you can start moving right away. To move mailboxes head on over to the EMC and choose Recipient Configuration \ Mailbox \ Select the specified mailbox \ Move Mailbox…

Now this will move the mail to the new DB. We're not done yet, though; the database still needs to free up the room after a move. This is done by the scheduler we already looked at earlier. I would modify the schedule to have a longer run time, say from when work stops until tomorrow morning. Make sure it does not overlap with another database's online defrag.
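If you prefer the shell over the GUI for this, the same schedule change should also be possible from the EMS with Set-MailboxDatabase; the identity and time window below are placeholders:

Set-MailboxDatabase "Storage Group\Database Name" -MaintenanceSchedule "Mon.10:00 PM-Tue.6:00 AM"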

Hopefully tomorrow you will not have to mount the database again, and to prove that you freed up some room, open Event Viewer, head to the Application log, and filter by event ID 1221.
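If clicking through Event Viewer is tedious, a quick PowerShell query along these lines pulls the same entries (the source name is what I would expect for event 1221; adjust it if yours differs):

# Show the most recent online-defrag free space reports (event ID 1221)
Get-EventLog -LogName Application -Source "MSExchangeIS Mailbox Store" | Where-Object { $_.EventID -eq 1221 } | Select-Object TimeGenerated, Message -First 10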

Here I can see that last week I was freeing up next to nothing on this DB. However, the combination of moving mailboxes out and extending the run time has freed up over 27GB of data, compared to the 44MB I had freed up the night before.

Wait! We're not done yet. How can I stop worrying about database sizes and get back to enjoying weekends without mailbox migrations?

When I was looking into this issue I noticed some inconsistencies between Exchange's behaviour and its licensing model. I came across this article: https://technet.microsoft.com/en-us/library/bb232092(v=exchg.80).aspx?ppud=4 where they state that the max size should be 16TB, however it has to be manually set via the registry.

To be clear, I have not gone ahead and done this yet, seeing as everything is now working. But once I get a database down to zero mailboxes in the next month or two, I plan on applying this registry edit to that single empty database and playing with it to verify Microsoft's guide. Everything mentioned above this line has been tested and implemented in active production. Here is how it is done, for my reference and yours; I'll post whether the following has been successful.

See below, from TechNet, for clarification on Exchange DB sizes.

1. The default DB size limit in Exchange 2007 (all service packs) is 250GB
2. The limit is in place however can be lifted via the registry
3. The reg key is the only way I know of to change the limit
4. The application log already issues warnings if you are getting close to the limit

First we need to find out if someone has already set the key, and if not, where we will set it. Open regedit and head to:

HKEY_LOCAL_MACHINE\SYSTEM\CURRENTCONTROLSET\SERVICES\MSEXCHANGEIS\

Here you will see the name of your mail server. Expand it and you are presented with a list of private GUIDs like so:

I can see there are more entries than DBs, so head back to the EMS to get the corresponding GUID for the DB we want to test this on. Type in:

Get-MailboxDatabase | Format-Table Name, GUID

Here you can see the GUIDs match. Well, I am telling you they match; you will have to use your imagination since it's blurred out.

Now that we know the GUID for the database whose size limit we want to raise, we need to add a new REG_DWORD called: Database Size Limit in GB

For the value, enter the desired limit in GB. More details on the key can be found here; despite it mentioning Server 2003, it was part of the previously linked MS guide.
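For reference, the same edit can be sketched out in PowerShell like the below; the server name, GUID, and 300 GB value are all placeholders, so double check the path against your own regedit output before running it:

# Create the size-limit DWORD under the private store key for the target database (path and GUID are placeholders)
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\MAILSERVER\Private-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -Name "Database Size Limit in GB" -Value 300 -PropertyType DWord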

Bypass Windows 8 , 8.1, Server 2012 product key on install

Have you been like me and gone to install Windows 8, 8.1, or even Server 2012 R2 and been greeted by the most annoying screen below? Like, how annoying is this?

Well let me show you a neat way of getting around the key that will not violate the TOS.

Problem: Microsoft won't allow me to install Windows without first putting in a valid license key. Or maybe your key is on another partition you want to get to after the install : )

Solution: Build a new USB install of Windows 8 / 8.1 / Server 2012 with an OEM config file to bypass this prompt.

First you need to have a Windows 8 / 8.1 install built on a USB stick from an ISO. This can be done with the Microsoft USB Download Tool. This tool has recently been pulled from the Microsoft Store, but you can download it here.

Once you have built your USB stick (not covered in this article), go into the newly created drive and look for a folder called Sources\ .

Create (or modify, if it already exists) a file called ei.cfg in Notepad. Paste in the following text:

[EditionID]

[Channel]
OEM
[VL]
0

Ensure you save the file as a .cfg not as a .txt file!
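If Notepad insists on tacking on a .txt extension, you can also just write the file from PowerShell; E: below is a placeholder for your USB drive letter:

# Write ei.cfg straight into the sources folder of the USB stick (drive letter is a placeholder)
Set-Content -Path "E:\sources\ei.cfg" -Value "[EditionID]","","[Channel]","OEM","[VL]","0"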

Now use that USB stick to install the desired version and edition of Windows. If you are unable to make the guide work, then you can use one of the keys Microsoft provides to volume licensing folks. This will not activate Windows but will simply allow you to finish the install.

The keys can be found midway down the page here.
Enjoy your new Windows install!

Deduplicated volume won't free up or release disk space? A lesson in Server 2012

Recently I ran into an issue where Data Deduplication on Windows Server 2012 was not releasing space back to the OS. I would delete files over and over and attempt Disk Cleanup, but still nothing would happen. This only seemed to affect the volumes I had deduplicated, and then it hit me.

Garbage Collection: this feature will scan the disks, look for any files or free blocks that are no longer used, and remove them. However, this was set to run once a month or something ridiculous like that. For me that is not an option.

Problem: I am unable to free up or release disk space back to the OS after deleting files on a deduplicated disk in Windows Server 2012. This guide will also apply to dedupe on Windows 8, see my previous post.

Solution: Enable garbage collection right away to free up disk space; then schedule it if needed. 

The first thing we need to do is open PowerShell. Once it is open, be sure you know the drive letter that you want to clean up; I'll use H: as my example. Type in the following command:

Start-DedupJob H: -Type GarbageCollection -Verbose

You will get output like below:

Now you will want to verify how far along it is, so simply type in:

Get-DedupJob

You should see the job like I did here:
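And if, unlike me, you do want this on a schedule rather than kicking it off by hand, something along these lines should do it (the name, day, and time are placeholders):

# Schedule a weekly garbage collection run (Saturday at 11 PM in this example)
New-DedupSchedule -Name "WeeklyGC" -Type GarbageCollection -Days Saturday -Start "23:00"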

Deploy with ease | A Windows Deployment Service guide | Windows Server 2012

Rolling out PCs can be boring and a pain. Using Acronis Snap Deploy can help (that was not a sponsored ad, however, Acronis, if you want to pay me to post this please give me a copy, the trial ran out) but the software can be expensive.

So what does a sysadmin do then? What's free with my Server 2012 installation that can help me roll out my PCs faster? PXE and WDS. Using a simple mix of PXE and Windows Deployment Services you can guarantee a quick and painless rollout.

Issue: I want to deploy Windows to a bunch of hosts but I also want to play Portal 2.

Solution: Install WDS

First we need a Windows Server 2012 install and a DHCP server that works; I am going to assume you have both. These don't have to be on the same host, but they can be if you like (as is the case here).

If you are using Windows Server 2012 as your DHCP server this will be super easy. Before you start I would recommend adding a new virtual disk for your images. ~200GB would be good for working room, but I used 40GB in my lab. In this example we are going to be using disk D:

Once the disk has been added and formatted, simply go to Server Manager and choose Add Roles and Features. Click Next until you get to Server Roles, then scroll down to Windows Deployment Services.

Install the role and reboot if necessary.

Now you will see a new icon under the Start menu (Metro).

Click it to open it up.

You will be prompted for an initial setup where you will be asked things like:
-Is this a DHCP server? (This is the most important step.)
-Where should I store the data for these images? Keep this path simple; let it pick the path and simply pick the disk. This will be covered in a different article where I show you how to add ISOs (memtest, konboot) and other items into WDS that shouldn't be there.

Once you have completed these steps it's time to add some ISOs and images. To be clear, you will need some OEM ISOs, genuine disks that came with the PC, or something that came from MSDN. Simply take your ISO and extract it with WinRAR or with Windows Server 2012's built-in ISO mounter.

Now you should have some files that look like this:

Inside this \sources\ folder you will find some WIM files. These are what you need, though you can leave them where they are. We now need to tell WDS about these disks.

With the WDS MMC still open, choose Boot Images, then right click in the window on the right and choose Add Boot Image…

Please note I already had images in here that you will not have.

A menu will appear prompting you for the location of your DVD.

Simply go to the folder that you extracted the contents of your Windows installation DVD to and open the /sources/ folder.

Open the boot.wim file, then choose Next. You will then be asked for some info on what you want it named; fill it out as desired.

This is great, but you still can't install Windows with just this added. We still need to add the install.wim to the WDS server. This is done by right clicking on Install Images and selecting Add Install Image…

You will be prompted to make a group. Name it accordingly; I use MSDN and OEM just so I am aware of what is what. However, you may wish to do it by OS version.

After you have added an image you're basically good to go. Power up a VM and press F12.

You can now see all your new .wim installs there! This creates an easy method for booting into the install environment. Now you just need to install and capture your image and you're good to go!
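As a footnote, if you would rather script the image imports than click through the MMC, newer WDS installs (Server 2012 R2 and up) ship PowerShell cmdlets for this; a rough sketch, with the paths and group name as placeholders:

# Import the boot and install images from an extracted Windows ISO into WDS
Import-WdsBootImage -Path "D:\Extracted\Win8\sources\boot.wim"
Import-WdsInstallImage -Path "D:\Extracted\Win8\sources\install.wim" -ImageGroup "MSDN"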

Bye bye NewRelic, Hello GraphDat!

I have been using New Relic for some time now, and let me say the support and features are decent. However, there are a few shortcomings that would cause people who are just starting out or playing in a lab (as is my case) to disregard the service:
-The fact there is only 60 min retention on a free account
-The fact I cannot embed my graphs into my own web front-end on my free account

I know the saying, nothing is for free. But considering even the paid embedded graphs contain a giant New Relic logo, and even if I were able to embed my graph on a free account I would only have 60 min anyway, why not let me? I mean, hey, if I embed it and they keep their logo on it, why not let me do it on my free account? So I got thinking: there has to be something better out there that is perfect for my home lab. And there is!

I present GraphDat (update: they were purchased by a company called Boundary); this application is 100% free. The nice thing about GraphDat is not only is it free, but I can see up to 3 hours of retention on my graphs free of charge, have unlimited servers, and embed it into my own front end so I can keep an eye 24/7 on what's going on. Not only that, but the customization features of GraphDat are way better for building your graphs. They do lack monitoring of individual disks, something New Relic does do well, but they are looking a lot better day by day.

Updated 2013-5-10

Here is an example image of a month retention on GraphDat.

And an example of how the product can be used to create custom embedded graphs.