Although cloud adoption is growing at an amazing rate, many CIOs still have doubts when it comes to choosing the right service provider. Certainly, making such a decision is by no means an easy task. Still, even if you're sure a particular company offers everything you need, you shouldn't rush into signing an SLA with them. Before committing to a long-term contract with a cloud provider, you might want to consider several important things.
Security and authorization
If you're familiar with the way cloud technology works, you must have realized the potential risks of storing your data there. Security has always been a heated topic in the cloud, which is why you must make sure your provider uses high security standards. Enquire about the encryption strength used to protect your data, as well as about the people who are authorized to access it at any time. In most cloud companies, certain groups of employees are allowed to access your data in case you encounter problems. What you need to establish with your provider is that nobody else will ever be able to do so.
Uptime and planned downtime
Maximum uptime is crucial for every cloud user, especially for businesses and commercial websites. While many providers guarantee 100% uptime, you must be aware that this isn't actually possible. Every service needs network maintenance every now and then, so you might want to check your provider's schedule. Additionally, make sure service uptime doesn't refer only to scheduled uptime, as providers tend to exclude what is called 'planned downtime' from this figure. Planned downtime allows the service provider to fix bugs or introduce features at a specific time of the week or month. Usually this is a short period when network traffic is at its lightest.
Data location
As cloud providers usually establish their server farms across different states or even countries, it is useful to know where your data is actually stored. Once your data crosses national boundaries, it becomes subject to different laws. This is why many European businesses have refrained from using US-based cloud services, as doing so would most probably make their data accessible to the US government under the Patriot Act. Wherever your business is located, you might want to check this issue in order to avoid possible conflicts with local legislation.
Hidden costs
Service price was probably one of the major factors that influenced your choice of a specific cloud vendor. However, even though the contract costs are always neatly listed in the SLA, some extra fees, taxes or upgrade costs may be outlined somewhere in the fine print. It is important that your provider makes these things clear, so that you avoid any potential misunderstandings. Enquire about the exact situations in which you may be required to pay extra, and make sure there is no way for your provider to charge you for anything you weren't already prepared to pay.
Obviously, there are plenty of reasons to carefully read every single sentence in your SLA, especially the ones printed in tiny letters. That is where you will find the details mentioned above, and where you can eliminate any doubt about whether the provider you chose will actually meet your cloud expectations.
There are a lot of interesting ways in which you can use our monitoring services, and online reputation management is one of them. A main prerequisite for establishing a good name for you and your business is being transparent. But let's face it: eventually your website will experience downtime. It happens even to the best of sites. And more often than not, it is your responsibility to keep track of the performance and availability of your website and take proper action when needed.
In the unfortunate event of your website going down, for one reason or another, it won’t take long for your visitors to find out. It is better for them to learn about it from you rather than speculate on social networks about the outage. Do you remember how Twitter used to have issues remaining online under heavy use? They came up with what became an internet meme - the fail whale. It was their own way of saying: “Yes, there is a problem and we are working on it”.
You can now do some proactive reputation management of your own by using their platform together with our monitoring services. Your visitors have Twitter accounts, and when your site goes down you will get flak on Twitter. It is just something that happens with popular websites. Fear not, for there is a solution to this problem: being upfront about a problem will save a few Twitter users the trouble of writing about it.
By using WebSitePulse monitoring with ifttt.com, you can let your Twitter followers know there is a problem and that you are working on it. Even better, you can keep them informed of the status and tell them when the issue is cleared.
How does it work?
The alerts you get in your email are easily convertible to tweets. Obviously, we don't want to flood Twitter with too much information and unnecessary updates, so information will be posted only when the site goes down and when it recovers. To do this, you need two "if this then that" recipes.
The first recipe lets people know that you have experienced a problem and are doing your best to get your website up and running. Over at ifttt.com we have set up a simple rule: each time a new email containing the text "Timeout warning for www.example.com" is received, we tweet the following message:
"Www.example.com is experiencing minor technical issues. We're working on it! Expect the website back online any moment now.”
Letting people know there is a problem is just as important as updating them when the website has recovered. The second recipe uses the same approach as recipe number one. We use ifttt.com to monitor our email account for a recovery message containing “Recovered www.example.com”. Then we tweet out a preset message, e.g.:
“Ok, www.example.com is back online, running like a charm. Let us know if you have experienced any problems.”
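To recap, each recipe boils down to one trigger and one action (note that the trigger text must match the wording of your WebSitePulse alert emails exactly):

Recipe 1: IF an email containing "Timeout warning for www.example.com" arrives, THEN tweet the "minor technical issues" message.
Recipe 2: IF an email containing "Recovered www.example.com" arrives, THEN tweet the "back online" message.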
Now you have an automated system to inform your followers about site issues as they happen. It is advisable to use this approach moderately and create your own ifttt.com recipes. There are many other ways in which you can use the combination of email alerts and tweets. If you have some maintenance work scheduled, you can create a custom message based on that.
Do you have any good recipes to share? Leave them in the comment section below.
What’s the fun in having a fully functioning API if you can’t play around with it, right? Sometimes data is needed in different forms to serve a specific purpose. APIs are great when the existing interface and available functions are not enough. Getting data straight to spreadsheets is great for custom reports and graphics. So, let’s learn how to do this.
Today we are going to focus on XML and export data with the HTTP API. Before we start, let me remind you that if you want to replicate and build upon any of what I’m writing below, you will need to register for any of our services – paid, trial or free. They will all give you access to our API and what you have in your account.
Currently the API supports 7 methods of exporting data – GetStatus, GetUptime, GetHistory, UpdateTargetStatus, GetDailyLog, AccountStatus and AccountSummary. The method names are pretty much self-explanatory. For the purpose of our article, today we are going to focus on GetUptime, but the same approach is applicable to all other methods.
A peek under the hood
If you'd like to know the average response time for a particular month, the GetUptime method will do the job. Along with the uptime information, this method will also give you the number of checks, the number of fails, and the estimated downtime (in seconds).
With GetUptime we can request a specific target (a website or server that you monitor) and a target location. We also need to specify the start and end date of the period we require the information for (the minimum is one month). If you'd like to get information for June 2012, the corresponding value would be 201206. If you are at least somewhat familiar with the API, you can skip down to Importing to Google Drive.
To export information from your WSP account, you need to send the proper query. Each HTTP/GET request for the retrieval of uptime statistics is basically a URL composed of several parts:
- http://api.websitepulse.com/textserver.php – the base of the URL, i.e. the resource we send the request to.
- username=yourusername – the username you would normally use to log in to websitepulse.com.
- key=b555tyyy72092fc77pxxxxxxxxxx – right after the username comes your own API key, which gives you access to your account.
- target=15555 – the exact target you need the information for. You can look up the target's unique ID in your account once you are logged in. Using "all" instead of an ID returns all targets in your account.
- location=all – the location you need data from. "all" specifies that you want to extract information about every location you are monitoring from; the IDs of the individual locations are also listed in your account.
- method=GetUptime – the type of data you are requesting. With GetUptime, for example, you can select a date range (in months) and get statistics for uptime, response time, number of fails and estimated downtime. Different methods provide different levels of detail.
- format=xml – depending on how you want to use the data, you can get it in 4 different formats: txt, csv1 (semicolon-separated), csv2 (comma-separated) and xml.
- startdate=201205 – the month to start with; 201205 includes the information for May 2012.
- enddate=201206 – where the date range should end. Selecting the current month includes all available data for it, up to the current date.
Importing to Google Drive
The base URL, along with those 8 parameters, is all you need to form a proper query. OK, enough with the boring part. Let's build a query and fire up Google Docs :-). This is how the query should look.
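Assembled from the parameters above (here we narrow location down to a single ID – 22, New York – which is the one we use in the example below):

http://api.websitepulse.com/textserver.php?method=GetUptime&username=yourusername&key=b555tyyy72092fc77pxxxxxxxxxx&target=15555&location=22&format=xml&startdate=201205&enddate=201206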
When you paste it into your browser of choice, you will get a well-formed XML file in return; we viewed the output in Google Chrome, which seems like a nice fit since we'll be importing the information to Google Drive anyway. In our sample output we have blurred the label, but in your case this would be the name you gave to your target. New York is the location corresponding to ID 22, the one we entered in the URL. On this target we ran exactly 2540 checks from New York, and in that time we got zero errors, so good for us! Then come the response time values and the uptime percentage. For downtime we got 0, and that's a good thing.
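As a rough sketch of the shape of the response (the tag names below are illustrative – check your own output for the exact hierarchy):

<uptime>
  <target>
    <label>...</label>
    <location>New York</location>
    <checks>2540</checks>
    <fails>0</fails>
    <responsetime>...</responsetime>
    <uptimepercent>100.000</uptimepercent>
    <downtime>0</downtime>
  </target>
</uptime>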
Uptime statistics end up in monthly reports alongside many other performance metrics, and some of you might need to incorporate this data elsewhere. Instead of generating multiple reports and finding yourself in a copy/paste frenzy, you can use ImportXML in Google Spreadsheets. From there you can copy the data to MS Excel in a single move.
First we name the fields we want to extract. Because ImportXML works with XPath, we can pull out only specific parts of the response, making the result easier to read.
Importing the data is really quite easy: we simply put the URL and the extraction instructions together. Below you will find a sample. Paste it into any cell and you will get the uptime percentage for May and June.
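A minimal version of the formula, using the query URL from above (the XPath //uptimepercent assumes the element name from our sketch of the output – adjust it to match your own XML):

=ImportXML("http://api.websitepulse.com/textserver.php?method=GetUptime&username=yourusername&key=b555tyyy72092fc77pxxxxxxxxxx&target=15555&location=22&format=xml&startdate=201205&enddate=201206", "//uptimepercent")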
The second argument in the formula is our XPath. Does it look familiar? It follows the hierarchy of the XML file: we simply tell Google Spreadsheets to take only the value of "uptimepercent" by giving the path to it.
Now, in order to get the data for the remaining fields, we only need to point the XPath at a different element. Let's scroll up and see what other values we had. We can click under the first header, Target, and paste the following query, in which we have replaced the uptime element with the one for the label. To save some time, we went ahead and entered all the values.
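For instance, swapping the element name is all it takes (again, the tag names are the ones assumed in our sketch above):

=ImportXML("http://api.websitepulse.com/textserver.php?method=GetUptime&username=yourusername&key=b555tyyy72092fc77pxxxxxxxxxx&target=15555&location=22&format=xml&startdate=201205&enddate=201206", "//label")

The same pattern with //checks, //fails or //downtime fills in the rest of the columns.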
With this approach, you can simply pick the item you need to export and substitute it in the example. If you are curious how to extract the path, you can use XPath Helper, straight out of Google Chrome. Once installed, the extension loads automatically; you simply need to click on the item you wish to get the path for, and it will appear.
Snap it next to the URL and there you have it: a scalable method to automatically grab data from our API via HTTP/GET and use it to your liking. The spreadsheet used in this post is available for download at http://goo.gl/G2UwT. Make a copy and use it as you will. Don't forget to fill in your username and key, otherwise the function won't work. To make things a bit easier, we have designated two cells to enter them in.
A lot more can be done with the API with just a bit of tinkering. All request methods and calls are documented in the official WebSitePulse API Specification. The data can be used for MS Excel charts, integrated into your reporting systems, or become part of a new application. Give it a spin by registering for a free monitoring account – the API is available with all trial accounts. We are curious to see what you make of it!
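And if spreadsheets are not enough, the same query works from any language that can issue an HTTP GET request. Here is a minimal sketch in Python (the uptimepercent element name is assumed, as above; substitute your own username, key and target ID):

import urllib.request
import xml.etree.ElementTree as ET

# Build the same GetUptime query we used in the browser.
url = ("http://api.websitepulse.com/textserver.php?method=GetUptime"
       "&username=yourusername&key=b555tyyy72092fc77pxxxxxxxxxx"
       "&target=15555&location=22&format=xml"
       "&startdate=201205&enddate=201206")

# Fetch and parse the XML response.
with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

# Print every uptime percentage found in the response (element name assumed).
for node in tree.iter("uptimepercent"):
    print(node.text)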
This month, we detected a sharp increase in response time for one of the sites we monitor. It is not a particularly large site; however, it is one that relies on fast navigation. On average, the website used to load in about 3 seconds. One morning it went up to 10 seconds just for the site to begin loading. This is a good time to say that it is always advisable to test out new designs and functionality.
The website owner had rolled out a new look combined with more functionality. Later on we found out that the site had been rewritten from scratch - scripts, databases, supporting files, everything. The webmasters overlooked one single thing: they left the site working with the beta version of their database. It made multiple requests to a database located on a completely different IP, which added 5 seconds on top of their average loading times.
Having your database on a separate server is by no means a bad thing. Big sites have no choice but to have dedicated servers powering their database systems. These servers are usually very well connected and, most of the time, in close proximity to the hardware hosting the web server. Smaller sites don't usually need all this and have a single server or shared hosting providing everything they need. In this particular case, the database was on a server used for various tasks - and it wasn't their fastest one, either.
So, imagine you request a page, and the web server then requests data from a database located in a different city, running alongside many other services. The database generates the reply to each query and sends it back to the web server to process, and all of this happens before the actual transfer to your host even starts. If a page view triggers, say, 50 such queries and each round trip costs 100 ms, that alone accounts for 5 extra seconds.
We were able to act fast and let them know of the issue. They fixed it and shaved off a few valuable seconds. Luckily, it all happened in under a day. Monitor your website for 30 days, completely free, and let us know if our services helped you!
In this blog we've already discussed the importance of response time, and I hardly need to tell you that when a website loads slowly, you'd rather take your business, reservation, purchase or pleasure somewhere else. Now that you have your website up and running, you definitely don't want it to be an underachiever and drive potential clients away simply because it fails to load properly. So here are 5 simple ways to improve your website's response time without too much effort.
Minimize the HTTP Requests
The greater part (and I mean around 80%) of the response time is spent downloading the front-end components of the page – images, scripts, CSS and so on. Therefore, fewer components to download mean fewer HTTP requests, and thus a faster response time for your page. Combining scripts into a single file, merging stylesheets, and using CSS sprites for images are common ways to do this.
Compress to Impress
Here is another marvellous way to improve the performance of your site. Compression decreases the response time by reducing the size of the HTTP response; the most popular method is Gzip, which can reduce the response size by almost 70%. Generally, servers choose which file types to Gzip, and while most sites compress their HTML content, you can take it one step further and compress your scripts and stylesheets as well. This will reduce the weight of your site and significantly improve the user experience.
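As an illustration, on an Apache server with mod_deflate enabled, a single .htaccess directive covers HTML, stylesheets and scripts (a sketch - adjust the MIME types to your setup):

AddOutputFilterByType DEFLATE text/html text/css application/javascript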
Stay Away from Redirects
Unless it is absolutely necessary, avoid redirects, as they noticeably slow down the response time: during a redirect, nothing on the page can load until the HTML from the new location arrives. So don't insert redirects unless you really have to.
Monitor Your Server Performance
Even if you have the best-performing and most beautifully and efficiently designed site ever, it is all useless if your server is not working. Escape this danger by signing up for server monitoring here.
To read more useful tips on site optimization, visit the Yahoo! Developer Network and Six Revisions sites. Feel free to share your own tips and tricks for improving website response time in the comments below.