Posted in Monitoring, Guest Posts, Tech
Although cloud adoption is growing at an impressive rate, many CIOs still have doubts when it comes to choosing the right service provider. Making such a decision is by no means an easy task, and even if you're sure a particular company offers everything you need, you shouldn't rush into signing an SLA with them. Before signing a contract with a cloud provider and making a long-term commitment, there are several important things worth considering.
Security and authorization
If you're familiar with the way cloud technology works, you will have realized the potential risks of storing your data there. Security has always been a heated topic in the cloud, which is why you must make sure your provider applies high security standards. Enquire about the encryption strength used to protect your data, as well as about the people who are authorized to access it. In most cloud companies there are groups of employees who are allowed to access your data in case you encounter problems; what you need to establish with your provider is that nobody else will ever be able to do so.
Uptime guarantees
Maximum uptime is crucial for every cloud user, especially for businesses and commercial websites. While many providers guarantee 100% uptime, be aware that this isn't actually achievable: every service requires network maintenance every now and then, so check your provider's maintenance schedule. Additionally, make sure the uptime guarantee doesn't refer only to scheduled uptime, as providers tend to exclude so-called 'planned downtime' from this figure. Planned downtime allows the service provider to fix bugs or introduce features at a specific time of the week or month, usually a short window when network traffic is at its lightest.
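To put those guarantees in perspective, a quick back-of-envelope calculation shows how much downtime a given SLA percentage actually permits. This is an illustrative sketch, assuming a 30-day month:

```python
# Convert an advertised SLA percentage into the downtime it permits.
# A 30-day month is assumed purely for illustration.

def allowed_downtime_minutes(sla_percent: float, period_days: int = 30) -> float:
    """Minutes of downtime permitted per period under a given uptime SLA."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - sla_percent / 100.0)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime still allows "
          f"{allowed_downtime_minutes(sla):.1f} minutes of downtime per month")
```

Even 'three nines' leaves roughly 43 minutes a month, which is why it matters whether planned downtime is counted inside or outside that figure.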
Data location
As cloud providers usually spread their server farms across different states or even countries, it is useful to know where your data is actually stored. Once your data crosses national boundaries, it becomes subject to different laws. This is why many European businesses have refrained from using US-based cloud services, as doing so would most probably make their data accessible to the US government under the Patriot Act. Wherever your business is located, check this issue in order to avoid possible conflicts with local legislation.
Hidden costs
Service price was probably one of the major factors that influenced your choice of a specific cloud vendor. However, even though the contract costs are always neatly listed in the SLA, extra fees, taxes or upgrade costs may be outlined somewhere in the fine print. It is important that your provider make these things clear, so that you avoid any potential misunderstandings. Enquire about the exact situations in which you may be required to pay extra, and make sure your provider cannot charge you for anything you weren't already prepared to pay.
Obviously, there are plenty of reasons to read every single sentence in your SLA carefully, especially the ones printed in tiny type. That is where you will find out about the things mentioned above, and thus eliminate any doubt that the provider you chose will actually meet your cloud expectations.
Posted in Guest Posts
Whether you’re looking at your business computing network from the perspective of your customers or with an internal intranet system in mind, it is important that you have the necessary level of monitoring in place so that you can deliver a satisfactory experience 100% of the time. The choice faced by businesses boils down to whether they wish to come up with an in-house monitoring plan, or outsource this function to a remote company or operator.
What are the pros and cons of each option, and does one definitively beat the other in terms of usage?
In-House Monitoring
This option involves a business paying IT professionals to work within the company.
By far the biggest advantage of in-house monitoring is that a business owner will always know what’s going on from an IT perspective. There can be daily briefings with the IT team and any developments can be tracked easily. There’s also the added benefit of a business owner knowing who is accountable for their network health; if any issues do slip through the net they can easily be picked up at management level and passed on immediately.
The main disadvantage is that it can be expensive to assemble a highly qualified team of IT professionals, particularly if the business is not IT-based in the first place and is employing these people purely out of necessity. With in-house monitoring, a business will often be limited to what its IT team knows. A monitoring company with a large team and a substantial research and development budget, by contrast, will have more potential for better performance.
Remote Monitoring
Remote monitoring, where all of the network functions of a business are outsourced, is the option favoured by the majority of companies today.
The ability to enjoy the same level of expertise, but at a reduced cost in comparison to in-house monitoring, is the main motivating factor that makes it a popular choice. Many networking functions work best when they're 'out of sight, out of mind' for a company. It isn't something most businesses need to concentrate on, so why spend money on equipment and people when it can be done remotely, with business leaders only getting involved when absolutely necessary? A remote network manager will take care of everything from system backups to dealing with any issues that occur. The business contact merely receives an overview of the things they need to know.
In addition, a remote monitor will be able to implement upgrades or developments without putting any time strain on you. They will also ensure a more robust content delivery network by using the resources at their disposal to meet your needs. Contrast this with in-house monitoring, where content delivery is often compromised if you lack the resources or development capability to make improvements or provide the necessary 'juice' for your systems.
If there is a disadvantage when it comes to remote monitoring, it is that you might not know exactly how much time is being spent on your network. Yes, a monitoring company has promised you round the clock monitoring, but if they have a portfolio of dozens of clients, how sure can you be that yours is always getting the attention it needs?
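One way to address that uncertainty is to run a tiny independent probe of your own alongside whatever the provider does. The sketch below is a minimal illustration, with a placeholder endpoint and made-up thresholds: it polls a URL and only raises an alert after several consecutive failures, so a single network blip doesn't trigger a false alarm:

```python
# Minimal independent uptime probe: poll an endpoint, alert only after
# `threshold` consecutive failures. URL and thresholds are placeholders.
from urllib.request import urlopen
from urllib.error import URLError

def check_once(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx/3xx status."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

def should_alert(history: list, threshold: int = 3) -> bool:
    """Alert only when the most recent `threshold` checks all failed."""
    if len(history) < threshold:
        return False
    return not any(history[-threshold:])

# Example: two isolated failures don't alert; three in a row do.
print(should_alert([True, False, True, False]))    # False
print(should_alert([True, False, False, False]))   # True
```

A cron job calling `check_once` every few minutes and feeding the results into `should_alert` is enough to tell you, independently, whether "round the clock" is actually being delivered.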
In-House or Remote Monitoring?
Although there are clear advantages and disadvantages associated with each monitoring option, remote monitoring offers the more rounded option compared with keeping IT in-house. Both small and large businesses can save time and money by using remote monitoring services, which will strengthen their content delivery network and deliver an all-round more efficient solution.
Posted in Guest Posts
Cloud computing continues to gain momentum as technology and accessibility improve. Not only are businesses adopting new cloud-ready applications, but they are also migrating legacy systems to the cloud.
While the decision to deploy a new application in the cloud may be a no-brainer, the decision to move an existing application may not be so easy. Careful consideration of these moves is required, as it is easy to make mistakes and the results can be catastrophic.
In many cases, experts don't agree as to when it is appropriate to migrate a legacy system to the cloud. However, there are some common sense criteria that almost everyone agrees on. Here are a few to consider as a starting point in your decision making process:
- Stand-alone applications that do not interface with many other applications may be good candidates. Interfacing applications in the cloud with applications that still run locally can be problematic.
- Migrating applications that are already virtualized on a local server may be easier than migrating those still running on dedicated servers. Virtualized applications are more likely to run well on the variety of hardware systems that may be deployed in the cloud.
- Fragile applications that continue to have problems when run locally are not good candidates for migration to the cloud. The existing problems are likely to get worse in the new environment.
- Latency can be a problem when running applications in the cloud. If latency is critical for you, it's important to do some extensive evaluation and testing to ensure that the new environment will not cause problems.
- There are a variety of other architectural constraints that may affect your decision. For example, if you are planning on benefiting from scalability in the cloud, make sure that your software application is designed and configured to take advantage of it.
- If you are a government agency or a private company operating in regulated environments such as banking or healthcare, regulations may prohibit you from migrating to the cloud. If applicable, be sure to investigate these constraints before you waste your time on further analysis.
- Carefully review the existing licensing contracts on your applications before considering migration to the cloud. These licenses may limit the number of servers your application can run on or may result in higher licensing fees when hosted in the cloud.
- If you are in the middle of an upgrade or other substantial development projects related to the application, it's probably not a good idea to muddy the waters with a move to the cloud. Wait until the environment has stabilized before considering a move.
- Do the math! Be sure to carefully evaluate both the current costs of your local environment and the costs of a cloud installation. Despite the benefits, many small businesses with stable, low maintenance applications may find that the additional monthly fees associated with a cloud installation are prohibitive.
- Security has always been the elephant in the room when it comes to cloud computing. Most of these issues have been addressed in recent years and while there may still be some security risks in the cloud, there are also risks in local environments. Nevertheless, it's important to do your homework and thoroughly understand all sides of the security equation before you take the plunge.
- Plan ahead! If you plan to replace or upgrade your existing software applications in the near future, it may be best to wait and select a cloud based service when you make the change. If your servers are old and subject to failure, you may want to evaluate the benefits of a cloud based application sooner rather than later! Clearly, eliminating server replacement is one of the significant cost benefits of migrating to the cloud.
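The "do the math" advice above can be sketched as a simple cumulative-cost comparison. All figures in this example are illustrative assumptions, not real vendor pricing:

```python
# Compare the cumulative cost of owning a server (upfront hardware plus
# monthly upkeep) against a cloud subscription with no upfront cost.
# Returns the first month at which owning becomes the cheaper option
# overall, or None if the cloud stays cheaper across the horizon.

def months_until_cloud_costs_more(server_cost: float, local_monthly: float,
                                  cloud_monthly: float, horizon: int = 120):
    if cloud_monthly <= local_monthly:
        return None  # cloud is cheaper every month, so it never falls behind
    for month in range(1, horizon + 1):
        if server_cost + local_monthly * month <= cloud_monthly * month:
            return month
    return None

# Example: a $12,000 server with $200/month upkeep vs. a $700/month cloud fee.
print(months_until_cloud_costs_more(12000, 200, 700))  # -> 24
```

If your application is stable and the hardware will outlive that break-even point, the monthly cloud fees may indeed be prohibitive; if the server needs replacing before then, the cloud option wins.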
Cloud computing is not an all or nothing scenario. While the conditions listed above refer primarily to public cloud installations, some of the issues cited may be mitigated with private or hybrid clouds.
One thing is certain: the case for migrating applications to the cloud will continue to strengthen. For most companies, it's not a matter of IF they move their applications to the cloud, it's WHEN!
Posted in Guest Posts
Enterprises have started adopting mobile technologies, as desktop solutions and web applications no longer satisfy their needs. The majority of employees in large organizations are bringing their own devices to work, and IT teams are charged with finding a way to set up and implement smart BYOD standards.
Although there are several obstacles to enterprise adoption of mobile apps, the needs of employees force enterprises to opt for mobile solutions. The post-PC future envisioned by Steve Jobs is knocking at our door, and mobility solutions are the need of the hour. So, if you are a mobile developer, you will, sooner rather than later, work on mobile apps for enterprises.
Most mobile developers are used to creating apps for the app stores. But when you work on internal mobile apps for large businesses, all that you have learned earlier will not be of much use. The needs of every single enterprise will differ, and you will have to put on your thinking hat and find solutions to their specific concerns.
There are several challenges that will make developing that first app difficult:
Platform fragmentation
This should not be your headache, but it will become one at some point. While some enterprises let their employees use only one mobile platform, most use multiple platforms. If the company has implemented BYOD, you may need to create an app for a vast array of devices and mobile operating systems.
As a mobile development company providing these services, you need to analyze the options and decide on an approach: is it worth the time to create separate native versions of the app for different operating systems, or should you use HTML5 to build a single hybrid app? When you consider that most enterprise mobile apps are more complicated than the projects you usually handle, you may feel an anxiety attack coming on.
Back-end integration
Most business organizations already have a complex IT infrastructure. The mobile application you are building for them is just the icing on the cake: most of the time, your app is not replacing any back-end system – it is simply augmenting the existing infrastructure by adding mobility. This will require complex cross-platform coding and an in-depth understanding of the client's business processes.
Mobile developers who do not possess domain knowledge will face a very steep learning curve. Not only will you need expertise in a different sort of programming, but you will also have to spend more time understanding the client's existing IT infrastructure.
Tight deadlines
Developing desktop or web apps for enterprises is a long-term process: software developers take months, even years, to create one large-scale enterprise app. But if you are creating a mobile app, people expect you to get it done within a few weeks. Most mobile developers, however, will need to spend a lot of time understanding the client's requirements and conceptualizing an app that provides the right solution.
Add development and integration to it, and you will realize that you need a lot of time. Collaboration with the client's IT team and other project stakeholders may also be necessary. The only way you can cope with the workload is by using agile methodologies. And if it is your first time, you are going to have a wild time!
To wrap it up
Platform compatibility, back-end integration and difficult deadlines are only the most prominent challenges you will face. Finding ways to secure the app, helping the client deploy it and testing it will also take a lot of focus, effort and expertise. Best of luck!
Posted in Guest Posts
Organisations underestimate the value of their data security, yet the cost implications of doing so can be catastrophic. Whilst daily systems and processes may be sound from a workflow viewpoint, how secure are they? This article outlines the three key areas for consideration and specifies what you should demand from your software solution.
Safeguarding your data is vital in ensuring efficient continuity of operations.
There are many reasons why business operations fall over, not least the sheer volume of systems and processes in operation.
The technologies and systems in place may be top-notch and increase corporate efficiencies and productivity many times over. But is the data secure in transit and in storage? Corporate data is to an organisation what oxygen is to red blood cells, and the infrastructure through which it flows is critical and must not be underestimated.
When considering the security of your data, you need to address these three key areas:
- Data in situ.
- Data in transit.
- Authentication of identities and transactions.
Data in Situ
One perspective of protecting data in situ involves whole-disk encryption solutions that prevent stored data from being unlawfully accessed on computing endpoints. This can be centrally managed on premises, but increasingly organisations are seeing the financial and logistical benefits of outsourcing to cloud-based managed data encryption service providers. Solutions offering consistent protection on multi-platform devices with rapid deployment, web-based management, easy secure recovery and strong encryption remain key considerations.
Data in Transit
A primary data-in-transit application is email, where businesses need protection both from 'organised crime' inbound threats and from deliberate or 'accidental' data breaches via outgoing email. The threat management technology of choice not only has to deliver protection from spam, viruses, phishing and spear phishing, but should also enable a framework of control that identifies and then protects intellectual assets through policy-based encryption. Both structured and unstructured data must be protected. Preventing human error is crucial in this area, so choose a solution that helps avoid costly fines and ensures regulatory compliance.
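In the spirit of the policy-based approach described above, here is a toy sketch of how an outgoing message might be checked against content rules and flagged for encryption. The policy names and patterns are invented for illustration, not taken from any real product:

```python
# Toy outbound-mail policy check: messages matching a sensitive-data
# rule are flagged to be encrypted instead of sent in the clear.
# Policy names and regular expressions are illustrative only.
import re

POLICIES = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marker": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def requires_encryption(body: str) -> list:
    """Return the names of the policies an outgoing message triggers."""
    return [name for name, pattern in POLICIES.items() if pattern.search(body)]

print(requires_encryption("Quarterly figures attached - CONFIDENTIAL"))
# -> ['confidential_marker']
```

Real gateways apply far richer rules (structured document fingerprints, attachment scanning), but the shape is the same: match policy, then encrypt or block before the message leaves.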
Authentication of Identities and Transactions
This area requires the efficient management of digital certificates and cryptographic keys. The cost of failing to do so has recently been quantified by the first annual report from the Ponemon Institute, which exposed the true cost of misplaced trust resulting from mis-management in this area. Alarmingly, over 50% of respondents didn't know how many keys and certificates were in operation within their organisation.
Managing this critical area can be simplified and completely automated with software solutions that identify and record every key and certificate on a computer estate. The software creates a database to record them, establishes their respective expiry dates, monitors them and alerts in advance of expiry.
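The record-and-alert workflow just described can be illustrated in a few lines. A minimal sketch, with an invented inventory and made-up dates:

```python
# Sketch of certificate expiry tracking: record expiry dates, then
# surface any certificate due within the alert window (or already expired).
# The inventory contents are made up for illustration.
from datetime import date, timedelta

def expiring_soon(certs: dict, today: date, window_days: int = 30) -> list:
    """Names of certificates expiring within `window_days` of `today`."""
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, expiry in certs.items() if expiry <= cutoff)

inventory = {
    "www.example.com": date(2025, 7, 1),
    "mail.example.com": date(2026, 1, 15),
}
print(expiring_soon(inventory, today=date(2025, 6, 20)))
# -> ['www.example.com']
```

Commercial tools add discovery (scanning the estate for keys and certificates) and automated renewal on top, but the alerting core is exactly this comparison of expiry dates against a rolling window.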
Additionally, human error can be eliminated by automating the enrolment and replacement of expiring keys and certificates, thus closing the door to security breaches that can bring organisations to their knees. Seamless continuity in this area is achieved with software solutions that also automate the download and deployment of these keys and certificates.
Choosing a digital certificate and encryption key provider should also be seen as a vital component to ensuring continuity of operations. If you don’t have the budget available to automate the process then you need to minimise the downtime experienced if a key or certificate expires. Some providers can take up to 48 hours to deliver replacements yet vendors exist that provide replacements within 2 hours.
What are the Costs of Data Mis-Management?
The implications of mis-managing data within your organisation can result in significant unforeseen direct and indirect costs.
Direct costs include the loss of imminent sales; IT professionals diverted from other projects to resolve data compromise issues; financial penalties levied by professional, legal or government bodies; and the costs associated with complete systems overhauls and the resulting new capital purchases.
Data held within the cloud itself can also be easily encrypted, although the emphasis here should be on cloud 'access' security. Even though data stored in the cloud can be easily protected, it is the entity accessing the data that needs to be validated and its actions verified. Cloud security, by its very nature, has to make trusted entities and communication integrity its priority. Cloud data storage moves the emphasis away from the security of the data itself towards the integrity of interactions and the security processes permitting entities to access the data in the first place.
Failure to protect your data, or to protect the integrity of your cloud computing infrastructure, can indirectly erode your corporate reputation, because trust in your organisation has been compromised.
There are many potential pitfalls within the management of data security but obtaining professional advice from IT consultancy companies can steer you through. When seeking advice it is always best to use the services of IT security professionals that are security cleared.