Getting connected to the Web
Have you ever wondered how many different devices and servers connect to the Internet every second? I was surprised to find out that now - in 2013 - almost 80 different products are connecting to the Internet every second. According to Cisco, this number will rise to 250 devices per second by 2020.
After the mass introduction of smartphones in the last 5 years, the tech industry is now coming up with a greater diversity of smart devices designed to be connected to the global network. Almost all TV makers have launched Smart TVs that allow you to Skype, watch YouTube videos, and more. The next big thing is wearable tech - Samsung and Sony have already introduced smartwatches this year.
I can imagine that in 7 years almost all appliances will be able to connect to the Internet and make our lives easier. The number of devices predicted to be connected to the Internet by 2020 is close to 50 billion - even 70 billion, according to some researchers.
IPv4 is the most widely used Internet Protocol right now, but soon we will have to shift to IPv6, simply because we are running out of unique addresses. Back in 1981, when IPv4 was first introduced, no one could predict that we would ever run out. IPv4 uses only 32 bits to store the address information, which gives us 4,294,967,296 unique addresses - a pool that was essentially exhausted in 2011. Thanks to changes in network designs and the use of Network Address Translation (NAT), we are still able to use IPv4 today.
By the early '90s, the engineering community, watching the development of the Internet and of technology in general, realized that sooner or later we would need an Internet protocol without the limitations of IPv4. And so the development of IPv6 began. The IPv6 protocol was developed by the Internet Engineering Task Force (IETF) and was first commercially deployed in 2006.
Unlike IPv4, which uses only 32 bits to store the address information, IPv6 uses 128 bits, providing about 3.4×10^38 possible addresses. That is a 39-digit number, so we will not be running out of addresses any time soon.
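The address-space arithmetic above is easy to verify in a couple of lines of Python:

```python
# Sanity-check the numbers quoted above: a 32-bit IPv4 address space
# versus a 128-bit IPv6 address space.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,}")    # 4,294,967,296 addresses
print(f"IPv6: {ipv6_addresses:.1e}")  # roughly 3.4e+38 addresses
```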
The size of the address space is the most widely recognized advantage of IPv6. Unfortunately, IPv4 and IPv6 are not designed to work together, which makes the transition to IPv6 much more complicated than expected. Although IPv6 adoption is still low, most companies are trying to implement it in their systems and prepare for the future. For the first time ever, more than 2% of all the traffic that reached Google's servers last month was via IPv6.
Also, because IPv6 addresses are four times the size of IPv4 addresses, the IETF designed a completely new packet format in order to minimize the size of the packet headers processed by network equipment.
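To give a feel for that streamlined packet format, here is a minimal sketch (for illustration only, not production code) that unpacks the fixed 40-byte IPv6 header defined in the IPv6 specification:

```python
import struct

def parse_ipv6_header(packet: bytes) -> dict:
    """Parse the fixed 40-byte IPv6 header.

    Layout: 4 bits version, 8 bits traffic class, 20 bits flow label,
    16 bits payload length, 8 bits next header, 8 bits hop limit,
    then the 128-bit source and destination addresses.
    """
    if len(packet) < 40:
        raise ValueError("an IPv6 header is always 40 bytes")
    vtc_flow, payload_len, next_header, hop_limit = struct.unpack(
        "!IHBB", packet[:8]
    )
    return {
        "version": vtc_flow >> 28,
        "traffic_class": (vtc_flow >> 20) & 0xFF,
        "flow_label": vtc_flow & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,
        "hop_limit": hop_limit,
        "src": packet[8:24],   # 128-bit source address
        "dst": packet[24:40],  # 128-bit destination address
    }
```

Unlike IPv4, there is no variable-length options field and no header checksum, which is part of what makes IPv6 headers cheaper for routers to process.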
The WebSitePulse Monitoring Network and IPv6
Since 2010, we have been able to offer IPv6 from several of our locations. You can use our Chicago, Toronto, Munich and Brisbane locations to monitor your IPv6 assets. We have servers in more than 40 different data centers around the world, and only 4 of them currently support IPv6. Judging from this data, only about 10% of all hosting companies currently provide IPv6, but we expect more and more of them to implement it in the future. We are constantly expanding our monitoring network, and we hope to have at least another 6 monitoring locations supporting IPv6 by the end of 2014.
If you have read our blog before, then you already know how important web monitoring is for your business. However, what you probably don't know is what the top 10 most commonly used monitoring services amongst our clients are. So, here they are:
Top 10 most preferred monitoring services at WebSitePulse
This particular chart shows the top 10 monitoring levels that our customers have chosen to use. And the winner is the HTTP Performance type, followed by HTTP Full-page. The HTTP Basic, HTTP Advanced and PING Basic monitoring levels take 3rd, 4th and 5th places respectively. The rest of the list is completed by the Performance Transaction monitoring level in 6th; 7th goes to Email round-trip monitoring; 8th to DNS Advanced; 9th to SMTP Advanced; and the 10th place is taken by the Custom Port monitoring level.
Basically, the HTTP Performance level of monitoring allows you to check the availability and measure the performance of your website. Using one or more of our monitoring locations, we perform a specific request and download the header and all HTML contents of your website, without the images and the other embedded objects. Our clients like this type of monitoring because they can test the site content for the presence or absence of specific keywords, check the file size, be notified if there are any changes or content variations, etc. And last but not least, the most useful feature of this monitoring type: a detailed breakdown of the total response time of the page into DNS resolution time, as well as the connect, redirect, first-byte and last-byte times.
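A simplified sketch of how such a phase-by-phase breakdown can be measured is shown below. This is only an illustration of the idea using Python's standard library - it is not our actual monitoring code, and it handles plain HTTP only, without redirects:

```python
import socket
import time

def http_timing_breakdown(host: str, port: int = 80, path: str = "/") -> dict:
    """Time each phase of a plain HTTP request, roughly the way a
    performance-monitoring probe would: DNS, connect, first byte, last byte."""
    t0 = time.monotonic()

    # DNS resolution
    ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    t_dns = time.monotonic()

    # TCP connection handshake
    sock = socket.create_connection((ip, port), timeout=10)
    t_connect = time.monotonic()

    # Send the request and wait for the first byte of the response
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    body = sock.recv(4096)
    t_first = time.monotonic()

    # Read until the server closes the connection (last byte)
    while chunk := sock.recv(4096):
        body += chunk
    t_last = time.monotonic()
    sock.close()

    return {
        "dns": t_dns - t0,
        "connect": t_connect - t_dns,
        "first_byte": t_first - t_connect,
        "last_byte": t_last - t_first,
        "total": t_last - t0,
    }
```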
This monitoring level is used by both small and medium businesses, mainly oriented towards Internet advertising, digital marketing, social media campaigns, various hosting and mobile providers, online shops and e-commerce. Even several governments, schools and universities have chosen it. Our clients mostly go with the HTTP Performance service because they want to know if their site is up and running, if it is accessible from around the world, how fast or slow it loads, and more. Of course, they don't always know exactly what kind of service they should choose in order to get the above-mentioned information. But that is what we are here for - to offer the best configuration based on the technical and budget needs of the client. In most cases you can go with both HTTP Performance and HTTP Full-page. The difference is that the latter checks not only the header and HTML content, but also downloads all the images, frames, scripts, Flash and/or other embedded objects and external links. For some customers this information is not that vital, and that is why we recommend the Performance level over the Full-page.
All services offered by WebSitePulse
As you can see from this last chart, HTTP Performance monitoring ranks comfortably on top of all our monitoring levels, but the chart also shows the great variety of tools and services we offer. In the next several posts, we will go into more detail about the other monitoring levels, tools and services, and why our clients use them.
While natural disasters cause more downtime than cyber attacks, when you are in the middle of dealing with a cyber attack, that statistic hardly matters. Downtime not only costs your business time and money, but a breach in your security can be crippling for your infrastructure. Since much of the IT industry is built on efficiency, with everything happening in a very rigid sequence, downtime can be a killer.
So what is it about cyber attacks that cause downtime and lead to losses in time and money? How do these cyber attacks from hackers get into your infrastructure and ruin so many of the security walls that you and your IT team have spent hours building?
Cyber attacks usually focus on stealing, destroying, or altering your data, mainframe, or infrastructure. This can happen in several ways, from malicious data mining, to changing your security passwords, to destroying all of your company's data. These attacks range from pranks to serious threats to your business and your clients. Picking up the pieces after a serious attack can be like reconstructing a crime scene and narrowing down suspects. In essence, it can all play out like a crime drama, from the investigation, to the accusation, to the arrest and trial of the hacker, especially if they have committed a cyber crime against your business.
A cyber attack is basically a hacker or team of hackers getting around your security and wreaking havoc in your network. Whether they get in and destroy or steal data, install a program to collect information, or are just up to shenanigans, the breach in your security takes time to fix. The worst part is that these fixes often require your network to be shut down, which means that anything your business relies on the network for has to stop. Breaches in your security system are the easiest ways for hackers to get into your system.
The downtime originates from going through any information that has been accessed by the hackers. This can mean anything from quarantining a single computer to going through thousands if not millions of lines of security logs to figure out exactly where the hackers have been in your system. In some cases, hackers have installed trojans that remained undetected for years, because they were able to cover their tracks to such an extent that no attack was ever reported by the security software. While these cases are extremely rare, they can be some of the most serious because of the constant data mining that can go on without your IT team knowing.
Another aspect of downtime is the painstaking effort it takes to rebuild a completely new security system. Ensuring that there are no breaches in your security system takes even the best IT team plenty of time and effort to make sure that everything is just right. One of the best ways to prevent attacks is a multi-layered security system. Like building several fences or walls around your home, the more security layers your company has, the harder it is for breaches to occur since walls can cover up for each other here and there. Overlapping your system, like a resealable plastic bag closure, can make it that much harder for hackers to find holes in your security.
The faster you can discover a breach, the faster you can go about discovering and fixing any problems that have occurred because of the attack. One of the best ways to detect a cyber attack quickly is by updating your system regularly. With regular updates, your system will reveal any problems that were not present with the last update. Security updates can also help your IT team locate holes in your network's security before they present a problem. There are several important things that your company can do in order to ensure that productivity and downtime losses are minimal, and regular updates are one of the best steps that you can take.
Strong passwords are also among the best ways to prevent cyber attacks and reclaim your system if or when one occurs. The stronger your passwords are, the harder it is for hackers to crack them and get into your network. Along with passwords, regular housekeeping can go an incredibly long way in ensuring that your security system is capable of defending against cyber attacks and hackers. If your company deals with large amounts of data, then security has to be at the forefront of your mind.
Another step that you can take is making sure that your whole company is on board with security procedures. From strong passwords to regular updates to housekeeping, a company that is able to monitor itself and watch its own security will go a long way in minimizing downtime.
If you have a mission critical operation or if you just want to make sure your email systems are up and running 24/7 without any glitches, the email round-trip service is what you need.
The email round-trip monitoring is a two-step process:
- Firstly, we send a test message to a specific email address.
- Secondly, we retrieve that message either from your IMAP/POP3 server or from a mailbox on our side in case you decide to set up forwarding rules for the test emails.
All test emails have unique hash-strings in the subject line in order to avoid any duplication issues. We run these tests every 10 minutes from up to 40 different monitoring locations simultaneously, and we alert you when problems with your MX records, SMTP or IMAP/POP3 servers appear.
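The two-step process above can be sketched with Python's standard smtplib and imaplib modules. The function names, host names and credentials below are placeholders for illustration only - this is not our actual monitoring code:

```python
import hashlib
import imaplib
import smtplib
import time
from email.message import EmailMessage

def make_test_subject() -> str:
    """Tag each probe email with a unique hash string so that
    duplicates and retries can never be confused with one another."""
    token = hashlib.sha256(str(time.time_ns()).encode()).hexdigest()[:16]
    return f"roundtrip-probe-{token}"

def send_probe(smtp_host: str, sender: str, recipient: str) -> str:
    """Step 1: send the test message; returns its unique subject."""
    subject = make_test_subject()
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    msg.set_content("email round-trip monitoring probe")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
    return subject

def probe_arrived(imap_host: str, user: str, password: str,
                  subject: str) -> bool:
    """Step 2: log in to the mailbox over IMAP and look for the
    probe message by its unique subject."""
    with imaplib.IMAP4_SSL(imap_host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, f'SUBJECT "{subject}"')
        return bool(data[0])
```

A monitoring run would call send_probe, wait up to the configured arrival window, and then poll probe_arrived until the message shows up or the window expires.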
We use several different ways to set up an email round-trip target so we can cover it all. If you want to be alerted immediately in case of a problem with your email systems, this service can fulfill all your requirements.
Here are the different ways in which you can configure an email round-trip target according to your needs.
If you provide us with a test email account on your servers (email address, username and password for the IMAP/POP3 server), we will send a test email to the address in question, log in to the specific mailbox and check if the message has arrived. You can specify the expected message arrival time (between 1 and 15 minutes). This value is very useful if you have spam filters, for example, and it takes a few minutes for messages to arrive in the mailbox.
If you want the test messages coming from a specific email address, you can enter that address in the Send message from field in the advanced configuration section of the target.
By default, we will get your SMTP server address from your MX records, which is recommended simply because you will be alerted if something goes wrong with the MX records as well. If you have many different mail servers and you want to test just one of them, you can always use the Manually configure servers option.
All common authentication methods are supported as well.
To finish setting up your email round-trip target, all you have to do is set up the POP3 / IMAP configuration.
The first option will work just fine if you want to be sure you can receive emails. Option 2, however, will help you ensure you can send out emails.
All you need to do is provide us with a test email address, e.g. firstname.lastname@example.org, and set up forwarding rules so that all incoming messages are forwarded to one of our test mailboxes, e.g. email@example.com. You will need to contact us, and we will provide you with the exact email address. The only thing you should keep in mind is not to change the subject lines of the test messages when you forward them to us, because we use the unique strings for test purposes.
There is a third option to monitor web mail interfaces as well, but you will have to use our custom script level of monitoring. Our developers can always create a custom script for you - free of charge - which will be able to monitor almost 100% of all available web mail interfaces that are used right now.
What HTTP authentication is all about
HTTP is able to use several authentication mechanisms to control access to specific websites and applications. Some of these methods use the 401 status code and the WWW-Authenticate response header.
The most commonly used HTTP authentication methods are:
- Basic authentication: the credentials are sent only Base64-encoded, not encrypted, so this method should not be used over plain HTTP.
- Digest authentication: the credentials are passed to the server in hashed form. Although the credentials themselves cannot be captured over HTTP, the request can be replayed using the hashed credentials.
- NTLM authentication: this method uses a secure challenge/response mechanism that does not allow password capture or replay attacks over HTTP. It only works with HTTP/1.1 persistent connections, does not work through all HTTP proxies, and should not be used if your web server regularly closes connections.
How the HTTP authentication methods work
If you want to make your HTTP transactions more secure, basic access authentication is a method you can use to provide a username and password when requesting a website or any other resource. When you open a website that uses HTTP authentication, the server sends back a header requesting authentication for the given resource, and you are asked to provide your username and password in order to access the website. The credentials are not encrypted, which is why it is recommended never to use this method over plain HTTP - always use HTTPS.
The server requests that the user agent authenticate itself by responding with the "401 Unauthorized" status code.
The user agent then sends the credentials back to the server in an Authorization header, which is created in the following way:
- The login credentials are combined into a string "username:password"
- The "username:password" string is encoded using the Base64 scheme
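These two steps can be reproduced in a few lines of Python; the credentials below are the classic "Aladdin" example from the HTTP specification:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header value exactly as a browser would:
    join the credentials with a colon, then Base64-encode the result."""
    credentials = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

print(basic_auth_header("Aladdin", "open sesame"))
# Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

As you can see, Base64 is trivially reversible, which is why Basic authentication offers no protection at all without HTTPS.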
Digest access authentication is also used to negotiate credentials between the server and the user agent. A hash function is applied to the password before it is sent to the server. This authentication method is much safer than Basic authentication, where the credentials are only encoded.
- Typically, the user asks for a page that requires authentication but does not provide login credentials, because they entered an address or followed a link to the page
- The server responds with the 401 "Unauthorized" response code, returning the authentication realm and a randomly generated, single-use value called a nonce
- A pop-up window opens and the user is prompted for a username and password (the user can also cancel at this point)
- Once the user enters the credentials, the client resends the same request with an added Authorization header
- If the username or password is not correct, the server returns the "401" code again
- The credentials are actually hashed (MD5(username:realm:password))
- The nonce the server returns can contain a timestamp; this information can be used to prevent replay attacks
- The server can keep a list of recently issued or used nonce values in order to prevent their reuse
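The hashing steps above can be sketched as follows. This is the original Digest scheme from RFC 2617 without the optional qop extension, shown purely for illustration:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(username: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    """Compute the Digest 'response' value (RFC 2617, no qop):
    HA1 = MD5(username:realm:password), HA2 = MD5(method:uri),
    response = MD5(HA1:nonce:HA2)."""
    ha1 = md5_hex(f"{username}:{realm}:{password}")  # never leaves the client in clear text
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")
```

Because the server's nonce is folded into the final hash, an eavesdropper who captures one response cannot reuse it once the nonce expires.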
This method is still vulnerable, however: the client cannot verify the server's identity this way, and some servers store passwords using reversible encryption.
NTLM is an authentication method developed by Microsoft and optimized for Windows platforms. NT LAN Manager (NTLM) is a challenge-response method that is considered a more secure variation of Digest authentication: it uses Windows credentials rather than unencoded credentials and requires multiple exchanges between the client and the server.
The NTLM authentication scheme uses many more resources than the standard Basic and Digest schemes and might affect performance. Once authenticated, the user identity is associated with that connection until it is closed, and such connections cannot be re-used by users with a different identity.
How to monitor HTTP authentication
You can use WebSitePulse performance and full-page levels of monitoring to make sure your clients are actually able to log in using HTTP authentication. All our transaction levels of monitoring support HTTP authentication.
For safety reasons, we suggest creating a free test account with limited rights for monitoring purposes. Once you have the username and password for the specific website, all you need to do is enter the details in the advanced configuration section of the specific target. Simply fill in the details in the Server Authentication section.