Security notification for CVE-2015-0235 (GHOST vulnerability)

January 28th, 2015
Posted in Industry News, WebSitePulse News, Tech

A significant Linux vulnerability that allows remote code execution on Linux servers was announced late yesterday, named GHOST: CVE-2015-0235. Full details of the vulnerability are available at http://www.openwall.com/lists/oss-security/2015/01/27/9. While the issue was fixed upstream as early as May 21, 2013, the fix was not marked as a security patch, and as a result it was not backported to most stable and long-term-support distributions such as RHEL, CentOS, and Ubuntu 12.04, which left them vulnerable.

Updates for CentOS are already available in the Updates repository so a simple "yum update" will install the required patches to mitigate this vulnerability.
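
To verify and apply the fix, something like the following should work on a CentOS/RHEL machine (a sketch only; the exact fixed package version varies per distribution):

$ rpm -q glibc           # check which glibc packages are currently installed
$ sudo yum update glibc  # install the patched packages
$ sudo reboot            # glibc is loaded by virtually every process, so a
                         # reboot (or restarting all affected services) is required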

Qualys have provided a simple C program to test whether a machine is vulnerable:

cat > GHOST.c << EOF
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

#define CANARY "in_the_coal_mine"

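/* The canary is placed in memory immediately after the scratch buffer
   that is handed to gethostbyname_r(); a buffer overflow in that call
   will overwrite it. */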
struct {
  char buffer[1024];
  char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };

int main(void) {
  struct hostent resbuf;
  struct hostent *result;
  int herrno;
  int retval;

  /*** strlen (name) = size_needed - sizeof (*host_addr) - sizeof (*h_addr_ptrs) - 1; ***/
  size_t len = sizeof(temp.buffer) - 16*sizeof(unsigned char) - 2*sizeof(char *) - 1;
  char name[sizeof(temp.buffer)];
  memset(name, '0', len);
  name[len] = '\0';

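  /* On a vulnerable glibc this call writes past the end of temp.buffer
     and clobbers temp.canary; a patched glibc rejects the oversized
     request with ERANGE instead. */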
  retval = gethostbyname_r(name, &resbuf, temp.buffer, sizeof(temp.buffer), &result, &herrno);

  if (strcmp(temp.canary, CANARY) != 0) {
    puts("vulnerable");
    exit(EXIT_SUCCESS);
  }
  if (retval == ERANGE) {
    puts("not vulnerable");
    exit(EXIT_SUCCESS);
  }
  puts("should not happen");
  exit(EXIT_FAILURE);
}
EOF

$ gcc GHOST.c -o GHOST
$ ./GHOST
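
As the code shows, a patched glibc rejects the oversized request with ERANGE and the program prints "not vulnerable", while on an unpatched system the clobbered canary causes it to print "vulnerable".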

We have verified that all WebSitePulse servers have the latest updates installed and are not vulnerable.


For DBAs: "Think of backup strategies as restore strategies"

July 24th, 2014
Posted in Tech

Andrew Pruski is an SQL Server DBA currently working for Ding in Dublin, Ireland. He’s previously worked for Occam-DM Ltd and the United Kingdom Hydrographic Office in England. He’s benefitted immensely from the online DBA community, and his blog is an attempt to give something in return.

Aside from SQL Server, he's keen on running and playing rugby. He can also play the guitar (badly, but it doesn't stop him).

How did you get involved in IT?

I always enjoyed working with computers when I was in school, so it seemed a natural area to study at university. I enjoyed studying database design, so once I completed my BSc in 2005, I started looking for a database developer position.

Whilst working as a junior developer, I realised that the code I was writing took a long time to execute (mainly because of all the cursors I was using). I did some research online and found multiple websites dedicated to SQL Server, and to working as a DBA in particular. I read tons of online articles on websites such as SQLServerCentral.com, and the more I read, the more interested I became in that area. It was then that I decided to pursue a career as a DBA.

What are the most common and the most challenging issues you've handled recently?

The most challenging task I have ever had to perform was given to me when I was a relatively inexperienced DBA. A production data warehouse had a different collation from the development and staging versions, and I was tasked with bringing the production database in line with the other environments.

Why not change the collation of the databases in the other environments? Well, as I said, I was inexperienced and looking to prove myself, so off to work I went. The collation of a database can be changed quite simply from the Properties menu. The problem is that this setting only changes the collation for new objects; everything else in the database remains on the old collation.

So after what felt like an age, I came up with the following procedure to change the collation of all the existing objects within the database:

  1. Script out all database objects
  2. Drop all functions
  3. Drop all views
  4. Drop all foreign keys
  5. Drop all primary keys
  6. Drop all indexes
  7. Drop all statistics
  8. Change column collation
  9. Change database collation
  10. Recreate indexes
  11. Recreate primary keys
  12. Recreate foreign keys
  13. Recreate statistics
  14. Recreate views (and associated indexes)

I tested and tested, and tested again, so when it came to deployment, it all ran smoothly. However, I'll always think of this project as one of the most challenging I've worked on, not just because of the technical side but because of the human aspect as well. It taught me a lot about relationships within an SQL Server database, but it also taught me that part of a DBA's job should be to question why something is being done and to provide alternatives where possible. I should have questioned why the collation of the production database needed to change at all. It would have been simpler (and a lot less risky) to change the development and staging databases' collation.

The most common issues I deal with are blocking and deadlocks occurring within production databases. There is monitoring in place which sends out alerts via email, meaning I can respond quickly to resolve any issues. An SQL Server Agent job polls active connections to check for blocking, and Service Broker with Event Notifications has been implemented to generate deadlock alerts. This allows me to capture the offending query and see what action needs to be taken to prevent future incidents (more often than not, either rewriting the query or creating an appropriate index).

However, preventative measures are always better than having to react to issues in real time. I work closely with developers to ensure these issues are kept to a minimum by implementing good database design and by writing efficient SQL queries.

Do you use any external tools for database monitoring? Which ones?

I have worked with Nagios, Zabbix, RedGate, Idera and Confio monitoring tools. All are good systems with their own positives and negatives, but I strongly feel that any DBA should be able to implement their own monitoring system. I wrote a short blog post detailing my reasons why.

In your opinion, what server status variables should a DBA log and monitor?

I always create my own database on any SQL instance that I administer. This allows me to set up monitoring for the following:

  • Blocking
  • Deadlocks
  • Auto-Growth
  • Failed SQL Server Agent jobs
  • Backups
  • Percentage transaction log used
  • Corruption alerts
  • Wait statistics
  • Index usage statistics
  • Index fragmentation
  • Error log auditing
  • Disk space
  • Memory usage
  • Page life expectancy vs. lazy writes
  • IO subsystem performance

What about disaster recovery and prevention? What is your approach?

Every disaster recovery plan should be tailored to each individual system. One of the key aspects of the DBA role within a company is to work closely with system owners to define Recovery Point Objectives (how much data can be lost in the event of a failure) and Recovery Time Objectives (how long it should take to bring the system back online).

Once the RPO and RTO have been established, a high availability solution can be built and a disaster recovery strategy implemented. High availability covers SQL Server features such as mirroring, clustering and AlwaysOn. Disaster recovery strategies start with a backup schedule for the databases but can include features such as log shipping or asynchronous mirroring to a DR site.

I attended an SQLskills training course last year in which Paul Randal talked about how to approach backup strategies. His advice was to think of backup strategies as restore strategies: DBAs should consider how many restores would be needed to bring a database back online in the event of a system failure. This is one of the best pieces of advice I have been given as a DBA. It is all well and good to back up database transaction logs every 15 minutes, but how many of those will need to be restored in the event of a failure? I really do not want to be restoring hundreds of log backups at 3 o'clock in the morning (because that is generally when failures happen for me, for some reason).

Contact him at:

  • Email: dbafromthecold@gmail.com
  • Twitter: @DBAFromTheCold
  • Blog: dbafromthecold.wordpress.com
  • LinkedIn: ie.linkedin.com/in/andrewpruski/



Facebook suffers global outage

June 19th, 2014
Posted in Monitoring, Industry News, Tech

facebook.com went down for about 15 minutes between 4:00 and 4:15 AM EST. Users trying to connect to the site were seeing the error message "Sorry, something went wrong". The issue was confirmed from multiple locations around the world, and as far as we can tell, all Facebook users were affected.


The Facebook Platform Status page showed a sharp increase in API response times during the outage.

Facebook Platform Status

The last update posted by Facebook reads:

Sitewide issue resolved

Earlier this morning, we experienced an issue that prevented use of the API for a brief period of time. We resolved the issue quickly, and we are now back to 100%. We're sorry for any inconvenience this may have caused.

We are awaiting an official statement and more details from Facebook and we will update this post as soon as more information is available.


Test Your Server Against the Heartbleed OpenSSL Vulnerability

April 8th, 2014
Posted in Tools, Tech

A major vulnerability in OpenSSL was announced late yesterday, impacting all servers running OpenSSL versions 1.0.1 through 1.0.1f with the TLS Heartbeat extension enabled.

The "heartbleed" vulnerability, has been already recorded as CVE-2014-0160. Further details can be found at http://heartbleed.com and https://www.openssl.org/news/secadv_20140407.txt.

The bug has already scared a lot of system administrators and site owners, and the first thing we did at WebSitePulse was to release a test against this vulnerability.

So, if you want to check whether your secure server is affected or not, please visit: http://www.websitepulse.com/heartbeat.php
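
If you prefer a quick local check first, the installed OpenSSL version is a good starting point (a sketch for a CentOS/RHEL server; note that distributions often backport fixes without bumping the version string, so the test above remains the more reliable check):

$ openssl version         # 1.0.1 through 1.0.1f are the affected upstream versions
$ sudo yum update openssl # install the patched package
                          # then restart every service that uses OpenSSL
                          # (web, mail, VPN, etc.) or simply reboot

Keep in mind that patching alone is not enough if private keys were exposed while the server was vulnerable; reissuing certificates and revoking the old ones is part of the remediation discussed at http://heartbleed.com.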

How to Set Up Your Reports Properly

March 18th, 2014
Posted in Tech

There are several reports that can quickly provide you with information about the latest events for your targets.

Two of the modules of your account dashboard provide information only on a limited number of events.


If the current status icon is green, it means all the active targets in your account are up right now and your sites and servers are not experiencing any problems. Red means we are detecting problems with some or all of your targets; you will be able to see the details in the Current Status module.

The next section below the Current Status section is the Most Recently Detected Problems. Clicking on the target label link will take you directly to the target details section for that specific target. The status link will then provide more information about the error or warning we've detected: you will see a detailed breakdown of the response times as well as the confirmation details from an independent secondary location. The same goes for the Most Recent Alerts Sent section, where you will find details about the notifications our system has sent. You can change the time interval for these sections and choose from:

  • 1 hour
  • 12 hours
  • 24 hours
  • 3 days
  • 7 days

If you rarely experience problems with your targets, you might want to choose the 7-day interval so you can quickly access specific errors. The default interval for these sections is 24 hours.

If you want to see more than 10 events at the same time, you should click the full report link or go to Reports -> Most Recently Detected Problems or Most Recent Alerts Sent. You will be able to see all the events for the specific period, not just the last 10. You will also be able to choose a 30-day interval, which is not available for the dashboard modules.

You can download the data in several formats. This is a useful feature should you need to analyze the information later.

To get a more detailed report and filter out only a specific target or contact, go to Reports -> Event Log.


Clicking the Show Log button without making a custom selection will produce a list of all targets and locations; if we have not detected any events, the data will be shown in green.

If, on the other hand, we have detected problems for a specific target, you will get a more detailed report.

If this report shows that we have sent out specific alerts which you have not received, contact us immediately so we can investigate further.

If you check the Hide targets without events box before you generate the report, we will show you only the data for the targets that have experienced problems for the specific period. You can select three periods for the report:

  • Day
  • 2 Days
  • Week

To generate a report for the last week, for example, you should choose a starting date 7 days before today.

From the Select Target drop-down menu, you can choose a specific target or all targets. For a specific target, you can also select a specific location. Suspended targets are shown in this drop-down menu as well.

You can also select a specific notification contact and get the data only for that contact. This is very useful if you want to check, for example, how many alerts a specific contact has received.

These reports are very useful when you need detailed information about a specific error or notification quickly. They also make it easier to track your usage of the paid notification types (SMS and voice calls).