Xand Blog


Shell Shock Bug: Tips to Protect Your Mission-Critical Systems

By Christian Lappin, Xand Sales Engineer

Posted September 30, 2014

“Shell Shock” is the first major Internet threat to emerge since the discovery in April of Heartbleed, which affected encryption software used in about two-thirds of all web servers, along with hundreds of technology products.

This bug is being compared to Heartbleed in part because the software at the heart of “Shell Shock”, known as Bash, is also widely used in web servers and other types of computer equipment.

“Shell Shock” is unlikely to affect as many systems as Heartbleed because not all computers running Bash can be exploited. Still, the new bug has the potential to cause greater issues because, unlike Heartbleed, which only allows attackers to read sensitive information from vulnerable Web servers, Shellshock potentially lets attackers take full control over exposed systems.

The Shellshock bug affects countless systems, so determining those machines and devices that are vulnerable in your environment, and then developing and deploying fixes to them is likely to take time.

Many sources have expressed concern about remote users accessing machines. Remote users do not directly use Bash, but it is a common shell for evaluating and executing commands from other programs that are often present on web servers. For example, if an application invokes Bash via HTTP or the Common Gateway Interface (CGI) in a way that allows a user to insert data, a web server can be attacked this way.
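The mechanism is easy to demonstrate locally. In a CGI setup, the web server copies request headers such as User-Agent into environment variables before invoking the script, and vulnerable versions of Bash parse any variable whose value begins with “() {” as a function definition, executing whatever follows it. The sketch below simulates that hand-off (the variable name is illustrative; never test against systems you do not own):

```shell
# Simulate a CGI server exporting a crafted User-Agent header into the
# environment before invoking bash. On an unpatched bash, the extra
# command after the function body ("echo pwned") also runs; on a
# patched bash it is ignored and only the intended command runs.
HTTP_USER_AGENT='() { :;}; echo pwned' bash -c "echo serving request"
```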

This is a serious risk to Internet infrastructure, just like the Heartbleed bug, because Linux not only runs the majority of the servers in use today but Mac OS X laptops and Android devices are running the vulnerable version of Bash software as well.

Large numbers of embedded devices, often referred to as “The Internet of Things”, can also be affected. The NIST vulnerability database has rated this vulnerability “10 out of 10” in terms of severity.

Hackers are already exploiting “Shell Shock” using worms to scan for vulnerable systems and then infect them. Russian security software maker Kaspersky Lab reported that one of these computer worms has begun infecting computers by exploiting “Shell Shock”.

The malicious software being deployed can take control of an infected machine, launch denial-of-service attacks to disrupt websites, and also scan for other vulnerable devices, including routers.

US-CERT maintains a list of operating systems that are vulnerable. Patches are available for many of them, with more being added daily, in addition to fixes for affected hardware systems and devices.

The US-CERT advisory includes a simple command-line test that Mac users can run to check for the vulnerability. To check your system from a command line, type or paste this text:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If the system is vulnerable, the output will be:

vulnerable
this is a test

An unaffected (or patched) system will output:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

Checking UNIX or Linux for a vulnerable shell

Red Hat has developed the following test: http://www.kb.cert.org/vuls/id/252743
Run the following command line in your Linux shell:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If the system is vulnerable, the output will be:

vulnerable
this is a test

What should you do now? Review your environment with your IT team and put a plan of action together. Prior to patching, it is recommended to disable CGI scripts that call the shell, but this does not fully mitigate the vulnerability. Many of the major operating system and Linux distribution vendors have released new Bash software versions, including:

  • Red Hat Enterprise Linux (versions 4 through 7) and the Fedora distribution
  • CentOS (versions 5 through 7)
  • Ubuntu 10.04 LTS, 12.04 LTS, and 14.04 LTS
  • Debian

If your system is vulnerable to this Bash bug, it is highly recommended that you upgrade your Bash software package as soon as possible.
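The exact upgrade command depends on your distribution; as a rough sketch (the package-manager commands are shown as comments because they require root privileges), upgrade Bash and then re-run the advisory's check to confirm the fix took effect:

```shell
# Debian / Ubuntu (requires root):
#   apt-get update && apt-get install --only-upgrade bash
# Red Hat Enterprise Linux / CentOS / Fedora (requires root):
#   yum update bash

# After upgrading, re-run the advisory's check; a patched bash should
# no longer print "vulnerable":
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```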

Ensure that you are obtaining patches from legitimate and credible sources, and work with your security partners to confirm that the exposure/vulnerability has been mitigated after taking corrective action.

Keep up to date with release notices from both CERT and your vendors, as some of the initial patches were incomplete and require additional patching. An entirely new NCAS vulnerability definition has been created to cover the minor issues that remain exposed, with a separate fix to be made available at a later time. It is advised, however, to apply these fixes as they become available to avoid leaving systems and devices exposed.
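As one illustration of the incomplete initial patch, a second quick check circulated for the follow-up flaw (tracked as CVE-2014-7169). A sketch of that check, run from an empty scratch directory since a vulnerable shell will create a file there:

```shell
# Check for the incomplete-patch follow-up (CVE-2014-7169).
# A still-vulnerable bash misparses the variable and redirects the
# output of `date` into a file named "echo"; a fully patched bash
# simply prints the word "date" and creates no file.
cd "$(mktemp -d)"
env X='() { (a)=>\' bash -c "echo date"
cat echo 2>/dev/null || true   # shows the date only on a vulnerable system
```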

Caution: As the extent of affected devices and systems continues to grow, be aware of non-trusted sources requesting that you run software or scripts that could further compromise already-exposed systems.

PCI DSS Version 2.0 vs. 3.0 – Important Changes for Data Security

By Paul Mazzucco, Xand Chief Security Officer

Posted September 16, 2014

With the latest round of credit card and personal data breaches in the news, the release of the new PCI DSS 3.0 Security Standard is timely indeed. The overall need of data service providers in every level of the transaction process to develop security best practices is now more important than ever.

With Version 3.0, the PCI Security Standards Council (PCI SSC) focuses on flexibility, education, awareness, and security as a shared responsibility. There are several important changes taking place in the jump from Version 2.0 to the new 3.0 framework, and IT Decision Makers will want to make sure their infrastructure and service providers are up to date to ensure maximum levels of security for their critical data.

Key drivers for PCI DSS Version 3.0 include an overall lack of education and awareness from the Council in terms of coverage responsibility, especially around emerging technologies such as Cloud and Virtualization. Weak passwords and authentication challenges, third-party security, slow self-detection of malware and other threats, and inconsistency in assessments were also factors in the update.

When surveying the PCI DSS landscape, it’s critical for those charged with protecting cardholder data to be aware of the multiple access points to their information and where responsibility falls when working with complex infrastructure systems.  The PCI Council sets various standards and benchmarks for manufacturers, developers, and providers.

For example, at Xand our data center facilities fall into the Service Provider category. This places our company under the PCI Data Security Standards (PCI DSS) umbrella.  When searching for a managed services provider, be sure that the level of PCI classification is clearly provided upfront, as this is vitally important in determining lines of demarcation in data protection responsibilities.

Lack of knowledge around payment card security and, more telling, poor implementation and maintenance of the PCI standards are huge contributing factors in why security breaches happen. In my role as Chief Security Officer, I spend each day working to make sure Xand’s systems are up to date with the latest compliances.

Although the PCI DSS standards serve as a great guide against which we test ourselves, building an overall security policy and a proper employee training program is key to make sure that the human element of our security standards remains tight. Standards of security are unfortunately always playing catch-up against the newest attack vectors and companies cannot simply allow a stamp of compliance to govern their security mandates.

Security is a dynamic field, and those who rest on their laurels often find themselves quickly exposed. When dealing with outsourced solutions providers or managed services vendors, don’t just accept a logo on their website as a rubber stamp for security. Be sure to ask:

  • What version of the compliance they adhere to,
  • When the last update was conducted, and
  • How often the organization undertakes audits

These criteria separate the wheat from the chaff in IT security.

With regard to PCI, the PCI Security Standards Council has made several important improvements to the PCI DSS certification in Version 3.0. The updated version of PCI DSS tackles the following:

  • Provide stronger focus on some of the greater risk areas in the threat environment
  • Provide increased clarity on PCI DSS & PA-DSS requirements
  • Build greater understanding on the intent of the requirements and how to apply them
  • Improve flexibility for all entities implementing, assessing, and building to the Standards
  • Drive more consistency among assessors
  • Help manage evolving risks / threats
  • Align with changes in industry best practices
  • Clarify scoping and reporting
  • Eliminate redundant sub-requirements and consolidate documentation

Ask your provider which version of PCI DSS they are certified for. Version 2.0 will be supported until December 2014 and many companies will hold off on updating until the last possible moment. With greater transparency and a more nuanced approach to Cloud, Virtualized, and Multi-Tiered environments, taking the extra steps to ensure your provider is up to date with PCI DSS Version 3.0 may save some tremendous security headaches down the road.

Updating frameworks can be a cumbersome process, but I felt it was of utmost importance to secure the latest PCI DSS update for Xand to give our clients the maximum level of protection available. Xand is privately owned and funding is in place to fully support security initiatives. However, other providers may be hampered by financial constraints, operational shortfalls, or simply a lack of expertise to keep up with the vast changes coming from PCI.

In addition to maintaining a wide scope of compliances and managing several security systems, I’m often called to take part in client meetings at Xand, where I answer questions and scope out security concerns. The point here isn’t to outline my day (busy!) or sell you on Xand (although we love new customers!) but rather to highlight the importance of having dedicated in-house security personnel. Not every Cloud or Managed Services Provider is in a position to have such dedicated security resources. Use this as another benchmark when seeking a partner for PCI DSS compliant systems.

Overall, the jump from PCI DSS Version 2.0 to 3.0 is an important one, not just for MSPs but for the industry as a whole. Even those who don’t deal directly with cardholder data would do well to seek out infrastructure solutions partners who adhere to PCI DSS mandates, as the practices set forth by the framework can do much to hedge against the risk of an unmitigated security disaster.

All Clouds are the same... RIGHT? (Part 2)

By Christian Lappin, Xand Sales Engineer

Posted September 9, 2014

Part 2 of a 2-part series

In the first part of this blog series, I offered my recommendations for five of the “Top 10” questions you should ask of, and expect answers from, solution providers who are vying for you to move your business applications to their cloud.

Below are the final five questions to ask. These recommendations are based on Xand’s history of working with businesses of all sizes and across multiple verticals to match their requirements with the most appropriate cloud-based solutions.

5 Key questions you should be asking your existing or potential Cloud Provider

  1. Do your policies, procedures and implemented technologies allow me to comply with applicable regulations (for example SOX, HIPAA, PCI and others)? Many businesses need to comply with regulatory mandates. Because the ultimate responsibility usually lies with the company that has been entrusted with the customer data, enterprises need to make sure that the cloud provider adheres to the applicable regulations with the same thoroughness and discipline. A common concern is where the data is located and where it can be moved (for example, in case of major disaster), because transferring data out of the country might be a violation of privacy laws in some cases.
  2.  How easy is it to migrate to another cloud provider? Although cloud standards are emerging, interoperability among various cloud providers is still immature. Businesses need to make sure that, when they decide to terminate the service, they will be able to migrate to another cloud provider easily rather than suffering lock-in with their current provider. Another point to investigate is how the provider guarantees that all data has actually been removed from the systems after a customer has left.
  3.  What types of monitoring and service management practices are in place? Although most companies have implemented monitoring systems and tools, and adopted ITIL oriented practices to manage their IT shop, little is known about how cloud providers manage their own IT environments. So you should investigate whether the maturity level of the provider is better or worse than your own.
  4.  How many reference customers do you have? Getting the impressions from other customers who are already running workloads on the provider’s cloud can give valuable insight about the strengths and weaknesses of the provider, and lessons learned in the migration process.
  5.  What is your financial status and profitability? A cloud provider who makes no profit with the service might be tempted to relax some costly aspects, such as provisioning enough extra capacity to allow for elastic scalability; dedicating enough skilled resources to security and monitoring; providing a comprehensive disaster recovery solution; investing in new functionalities and enhancements; and more. Thus, it is important for companies to evaluate the financial status of the provider and the outlook, even though in some cases this might be challenging because many providers are not releasing this type of information.

I hope the 10 questions and explanations provided in this 2-part blog series offer valuable insight on your journey to the cloud. As you look toward the cloud for your business requirements, it is critical to think about not only how your environment will run and be protected but also how you will get it there.

To learn how Xand answers these questions and to hear more of our success stories, please contact us.  We would be happy to share our experiences with you and your team.

All Clouds are the same... RIGHT? (Part 1)

By Christian Lappin, Xand Sales Engineer

Posted September 3, 2014

Part 1 of a 2-part series

We hear this question all the time when engaging with business leaders. Along with: Why can’t we easily compare proposals for cloud? After all, aren’t we talking about the same thing?

Before you go shopping for cloud, this 2-part blog series will help you determine the Top 10 questions to ask potential service providers before shifting your business applications to the cloud.

The first step is to review your business requirements to gain the insight that will help match the right type of cloud to those business needs. In prior blogs we have discussed the three basic, industry-agreed-upon cloud service models.

  • SaaS – Software as a Service. In a word, SaaS means applications. This includes CRM, Email, virtual desktop, communication tools, games and other hosted software.
  • PaaS – Platform as a Service. Execution runtime, databases, web servers, development tools and more.
  • IaaS – Infrastructure as a Service. Virtual machines, servers, storage, load balancers, network, security appliances, firewalls and more.

Once businesses have a better idea of what is needed to move their business forward, we hear questions like:  How do we know that we are getting a fully thought-through solution? Also how do we know that a provider has experience with our specific business challenges?

Below are five of the “Top 10” questions you should have answers to before shifting your business to a solution provider and into their “cloud”. These questions have come from many discussions across several verticals. Being completely transparent, I can tell you that fewer than half of them are asked on most engagements. So in addition to the questions you should ask, I have also explained why you should want to know the answers.

Top 5 Questions you should be asking your existing or potential Cloud Provider

  1. Which service-level agreement (SLA) do you provide? Businesses need to make sure that the SLA offered by the cloud provider meets or exceeds the service level requirements of the workload to be deployed.  For example, uptime guarantees, service performance, incident response times, and others. And SLA penalties must be analyzed because, in some cases, automatic credits or reimbursement of the charges might not be enough compensation for an enterprise in case of service outage. Finally, businesses might be interested in learning how SLA achievement status is reported and how frequently these reports are produced or updated.
  2. What are your Disaster Recovery and Business Continuity offerings? Businesses should find out how data is backed up and how quickly service would be restored after a major disaster. These questions relate to the well-known factors of RPO (recovery point objective) and RTO (recovery time objective). For example, you should investigate whether you have to take care of backup and disaster recovery yourself or if the provider is doing it for you. In the latter case, companies need to understand basic factors, such as whether incremental backups are used; how easily an image of the data can be reconstructed; how far in the past the backups will go; or how granular the backup is.
  3. How is my data isolated from other customers sharing the same infrastructure? A public cloud environment relies on sharing the same infrastructure for separate customers simultaneously while, at the same time, isolating them from each other. A common concern of companies is that competitors or malicious users can get their hands on their data. Thus, it is important to find out the measures the provider has put in place to guarantee that customers are not able to access, edit or delete other customers’ data.
  4. How do you handle customers who exhibit malicious behaviors? Customers who do not configure the security of their virtual servers properly can expose the rest of that cloud’s customers who share the same infrastructure. For example, a virtual server that is compromised because of a security fault (such as setting the password to password) and is used for criminal activities might lead to the confiscation of the whole physical server – including other legitimate customers’ data – by law enforcement authorities. Therefore, companies need to make sure that malicious behavior is quickly identified and the appropriate measures (such as account shutdown) are taken immediately.
  5. How is my data protected from external attacks? A public cloud environment is usually connected to the Internet, which means it can be subject to attack by malicious users from all over the world. Companies need to understand how external attacks are detected, logged and prevented, as well as the process to handle intrusions and security breaches. Additionally, physical security of the data center facilities must not be disregarded. Presence of surveillance equipment, 24×7 human guards, biometric authentication systems to control physical access to the facilities, and other aspects to prevent physical intrusions should be of interest to companies. 

In the second part of this blog series, I’ll reveal another five of the “Top 10” questions to ask solution providers who are vying to take your applications to their cloud.

The Data Center of 2020 May Look like the iPhone of Today

By Yatish Mishra, Xand President and CEO

Posted August 21, 2014 

What will the typical data center look like in the year 2020? Believe it or not, the answer may be in the palm of your hand. If not, it’s likely in your pocket or perhaps sitting beside you on your desk. The answer is your mobile smartphone.

Considered something of a novelty just a few years ago, smartphones, tablets and other high-performance mobile devices have exploded in popularity and essentially have become the norm in the business world and beyond.

Consider this: Every smartphone function, every app, every swipe has a data ramification. It obviously wasn’t possible to carry around floppy disks or CD-ROMs in a pocket-sized device, so what happened? Manufacturers developed an entirely new set of components to fit the world of the cell phone: flash memory, compact multi-core processors, miniaturized modems, Wi-Fi antennas and more.

The end result has been something remarkable: Touchscreen supercomputers that fit into our pockets. We’ve come a long way from IBM desktops with monochrome monitors.

For data centers, a similar sea change is underway. The industry is moving from a model based on stacks of proprietary hardware to one that’s built on largely software-defined systems and resources. What this means is that the software-as-a-service (SaaS) model is now being applied at the core infrastructure level.

A recent article in Wired highlights the trend. As cell phones have become ubiquitous, so too will Software-Defined Data Centers (SDDCs).

Read “The Data Center of Tomorrow Will Use the Same Tech Our Phones Do” on Wired.com

Looking at the hypothetical data center of 2020, I would expect to see a plethora of compute, memory, storage and security options available in both hardware and software form. Spinning up servers, switches and other infrastructure pieces virtually is something we do at Xand every day. In 2020, the volume of these activities will be greatly increased, and the capacity to handle never-ending change as part of normal operations will be a given.

Services will be crucial in successfully orchestrating the data center of 2020. With enormous opportunities presented by pools of standardized hardware and flexible virtual resources, a conductor sitting on top of the stack will be greatly needed to orchestrate infrastructure workflow and optimize performance. Monitoring, patching, updating, alerting and constant maximization of available resources will all fall under the bucket of Managed Services. These services are already in demand and will see an enormous increase in the years leading to 2020.

Technology is an amazing thing and has shaped our world in ways we could not have imagined only 10 years ago. The rapid adoption of mobile devices in all of our lives tells us something about the ability of the IT sphere to adapt to new demands and produce new models for making what seemed impossible only yesterday a reality today.

The cloud and data center industry stands directly at the intersection of the rise of mobile connectivity and the surge of Big Data. Taking cues from our iPhones and Android devices, data center and colocation service providers have a tremendous opportunity to move the needle forward for infrastructure.

2020 is only six years away. Can you imagine what innovations await us there?

Supply Chains Show Up on Cyber Attack Threat Radar

By Paul Mazzucco, Xand Chief Security Officer

Posted August 14, 2014

If you’re responsible for managing the risk factor of an organization, what topics immediately come to mind? Most likely you think of fires, floods, hurricanes, tornadoes, maybe a terrorist attack or other disturbance. Ask yourself this: Outside of the firewall sitting in front of your computer infrastructure, how much have you thought about the risk of cyber attacks throughout your business’s entire supply chain?

Don’t feel alarmed if you haven’t considered the possibility; you are far from alone. Cyber attacks against infrastructure used to be the concern of nations and states. However, a recent report by the D.C.-based Bipartisan Policy Center and the University of Pennsylvania’s Annenberg Public Policy Center, featuring members of the original 9/11 Commission, said that the growing threat of cyber attacks now extends directly to private-sector systems. According to the report:

“Denial-of-service attacks have tied up companies’ websites, inflicting serious economic losses. A Russian teenage hacker may have been behind the massive malware attack on the U.S. retailer Target, compromising the credit card data of 40 million customers.”

Echoing these statements, a recent story published by Business Insurance highlights the threat cyber attacks pose not just to internal computer systems, but to each link in the normal business supply chain. In fact, the landscape is shifting so rapidly that insurance companies are now developing and selling products to account for cyber-related risk.

Read “Supply Chains Becoming Increasingly Vulnerable to Cyber Attacks” at BusinessInsurance.com

Insurance is great, but even by the industry’s own admission it doesn’t come close to covering all the bases.  Working with a trusted managed services provider who can own and take responsibility for key segments in your infrastructure is a key first step in securing your systems. Making use of tools such as DDoS mitigation and advanced security monitoring can save your company the embarrassment and cost of a down website, application or critical resource.

The average cost of downtime for such resources is over $10,000 per MINUTE. This is not a loss most businesses can routinely absorb.

In addressing security concerns, it’s increasingly important to understand the true threat level. If a firm like Target can fall victim to infrastructure breaches and be made vulnerable by association with its supply chain vendors, any business of any size is at risk.

Contact me today to discuss your IT security concerns. I lead all security efforts here at Xand and also serve as a certified ethical hacker, studying the methods the threat perpetrators use to breach systems. If you have concerns about the state of your infrastructure, reach out to me and I will be happy to discuss a plan of action.

Automating Business Operations with Private Cloud Technology

By Christian Lappin, Xand Sales Engineer

Posted August 7, 2014

When it comes to business and technology, the main objective has long been automate, automate and automate some more. We’ve seen the advent of VMware and Microsoft Hyper-V systems drastically alter the landscape of local network management for the better, replacing cumbersome physical machine maintenance with streamlined, software-defined workflows handled virtually.

Now it’s time to take those benefits to the world of IT infrastructure. Just like replacing desktops with virtual machines, virtualizing stacks of data center gear (servers, switches, firewalls, etc.) brings innumerable benefits to both administrators and end-users.

When we’re talking cloud, we mean large public providers like Amazon and Google, right? Not so fast. There’s an entirely new middle ground arising, where companies are standing up or leasing their own cloud computing platforms. These private clouds offer the best of both worlds for infrastructure: security, control and the management accessibility of traditionally colocated systems paired with the flexibility, efficiency, and nimbleness of the cloud.

A recent InfoWorld article highlights the benefits of the private cloud phenomenon. In the article, businesses are encouraged to borrow from public cloud architecture and technologies to weave a new management layer around virtualized data center systems.

This is exactly what we do here at Xand. Our team of Engineers and Solutions Architects work hand-in-hand with our clients to scope out, design and construct scalable private cloud systems that meet operational mandates, compliance terms and the growing challenges presented by Big Data and the expanding digital world.

Read “Build Your Own Private Cloud” on InfoWorld.com

If you’re drawn to the benefits of cloud computing but skittish at the thought of turning over the keys to the castle to a large public provider like Amazon, Google or Microsoft, then you need to consider private cloud solutions. By building your own cloud architecture and housing it in a fully-redundant and secure data center facility, you can easily put the cloud to work for you without taking the risk of dumping your entire IaaS deployment into open waters.

Contact us today if you are interested in learning how private cloud solutions can help power your business. 

Cloud Security and HIPAA Compliance Top Concerns for Healthcare Providers

A recent article by Web Hosting Industry Review (http://www.whir.com) reports that security and compliance top the list of concerns for U.S. healthcare providers looking to adopt cloud computing platforms.

Click here to read “HIPAA Compliance and Security Top Cloud Adoption Concerns for U.S. Healthcare Providers”

There are many telling statistics in the article. First, a whopping 80 percent of healthcare organizations are already using the cloud in some aspect as part of their operations model. Of those who have not yet adopted cloud, two-thirds aim to move applications and resources to cloud systems in the future.

It’s clear the benefits of cloud platforms have become apparent to healthcare providers. What remains elusive is an assurance and confidence that such systems will meet the high levels of regulatory compliance the healthcare industry faces.


At Xand, we specialize in designing customized cloud systems from the ground up. Our xCloud platform provides highly-secure Private and Hybrid cloud solutions. Our xCloud clients are some of the top healthcare providers in the country, including the largest health insurer in Rhode Island and several hospitals, universities, and other providers. When it comes to vital information such as patient data, a commodity approach does not fit the bill. Security and compliance with unique regulations such as HIPAA require a customized approach and concierge services.

Watch our Webinar “Secure Your Cloud from Cyber Attacks”

If you’re a healthcare provider looking for more flexibility and security for your mission-critical applications and resources, contact us today. At Xand, we know that one size does not fit all, especially in healthcare. 


Top 5 Considerations When Moving to the Hybrid Cloud

By Christian Lappin, Xand Sales Engineer

Posted July 17, 2014


Cloud, cloud, cloud. It’s in the news everywhere.

Here at Xand, we deal in deploying practical cloud solutions that meet the needs of businesses, hospitals, universities and financial firms. Talk about mission-critical infrastructure—our clients simply can’t be mired in bureaucracy or placed in rigid frameworks and be successful in meeting their goals. Increasingly the solution to mission-critical infrastructure lies in the Hybrid Cloud, a blend of cloud platforms and services custom tailored to meet specific performance, security, and regulatory benchmarks.

If you’re a CTO or Director of IT looking to transition your infrastructure to a more elastic and scalable model, chances are you’ve bumped into Hybrid Cloud in the marketplace. You’ve scouted the benefits (reduction in capital expenditures, higher degrees of flexibility) but questions may still remain. Here’s a guide to what I see as the top five criteria when putting together your Hybrid Cloud solution plan:

1. Security

Security always needs to be at the top of the list. When it comes to protecting the data of your customers, users and internal workforce, Cloud is not the problem; it’s the solution.

A few years ago, security was the boogeyman of the Cloud world. As more businesses and technology decision makers have come to see the compelling benefits of cloud platforms, security solutions have quickly adapted. Using technologies like VPN, NAT, DDoS mitigation and attack detection, cloud systems provisioned with security in mind offer robust data protection for you and your end users. Furthermore, Cloud platforms can be customized to meet industry-specific regulations and compliance requirements, including PCI DSS, HIPAA, GLBA, SOX and more.

Security can’t be an afterthought or a last-minute add-on. Be sure to address your unique security concerns at the outset of designing your Hybrid Cloud. It will save numerous headaches (and possibly a few jobs!) down the road.

2. High Availability

What good is your nimble, fast and scalable cloud if you can’t access it? No good at all. Availability goes hand in hand with security as a chief concern when standing up your Hybrid Cloud.

When migrating core production applications and resources to a new Cloud environment, it’s vitally important that the underlying infrastructure is up to the task of supporting these mission-critical systems. Be sure to fully investigate power, cooling, and other key operational components to make sure your Cloud is backed by the redundancies you need to stay in business.

Server rooms in the basement likely won’t cut it as a logistical home for your cloud. Seek colocation partners, verify they hold SSAE 16/SOC 2 attestations, and confirm they are located safely away from floodplains and urban threat zones. Also consider the capital investment required to back your cloud with in-house generators, power feeds and distribution systems, let alone the staff needed to maintain such equipment. Do you have the available resources to be in the infrastructure business?

If the answer is no, seek a colocation partner with expertise in hosting private and hybrid cloud architecture. Ask to see their generators, their power systems, and other key components.

3. Flexibility

Massive Public Cloud vendors such as Amazon and Microsoft offer a lot of seemingly quick solutions to complex problems. However, there is a “lock-in” factor to be aware of. With critical systems on the line, it’s important to make sure all options are available at all times. There’s nothing worse in technology than the dreaded vendor lock-in, and it’s no different with Hybrid Cloud.

The key word is Hybrid. The ability to mix and match platforms and services to deliver a solution that works efficiently is what Hybrid Cloud adoption is all about. If you sign with a rigid provider offering a one-size-fits-all approach, the benefits of the Hybrid Cloud greatly diminish. Make sure your Hybrid Cloud is created in such a way that it can be put to work for you dynamically, today and tomorrow.

4. Carrier Neutrality

Yes, the network still matters. Data circuits form the connective highways that bring users and the Hybrid Cloud together. Clear access paths and multiple points of entry will significantly improve access and end-user experience when utilizing the applications and resources hosted on your Hybrid Cloud. All roads lead to the Cloud, and the network is the roadway.

Housing your cloud infrastructure in a carrier neutral environment ensures that you’ll have plenty of roads open to reach your vital systems. Carrier neutral facilities also provide far more options for cross-connecting hosted infrastructure with multi-site offices, branch locations, and even other data centers.

5. Managed Services

So you’ve followed items 1 – 4 and have architected a Hybrid Cloud system that meets your security needs, is housed in a highly-available environment, provides flexible technology options, and is able to connect to multiple high-speed networks. What’s missing? A Managed Services plan.

Often overlooked, services are a key element in making sure your Hybrid Cloud is running at optimal performance. If a blade chassis fails at 3 a.m., who has your back? Do you have staff on-call 24x7? If a disk burns out, can you be available to replace it in an acceptable amount of time? If hackers are trying to breach your security and grab data, are you monitoring and checking logs?
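To make the log-monitoring question concrete, here is a minimal sketch of the kind of check a managed services team might automate. The syslog-style “Failed password” log format and the alert threshold are illustrative assumptions, not a prescription:

```python
import re
from collections import Counter

# Hypothetical pattern for failed SSH logins; real log formats vary by system.
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def flag_suspicious_ips(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed login attempts."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= threshold]
```

A real deployment would tail the live log continuously, page someone (or feed a firewall) when the threshold trips, and keep history for forensics; the point is simply that something has to be watching at 3 a.m.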

Make the Cloud Work for You

Hybrid Cloud provides incredible benefits for those responsible for managing enterprise IT systems. By addressing the key concerns of security, availability, flexibility, carrier neutrality, and managed services, you can set your Hybrid Cloud on the rails of success.

As always, if you have any questions about Hybrid Cloud solutions, please contact the Xand team. We’re happy to help demystify any concerns around making cloud computing work for your organization.

 

Request a Quote

 

 

Can You Control Your Cloud?

As Big Data Gets Bigger, the Complexity of Managing Even ‘Simple’ IaaS Platforms Grows


By Denoid Tucker, Xand Senior Vice President of Technology

Posted July 8, 2014

 

As I meet with executives and technology decision makers across the country, I’ve noticed a growing trend among them. Many have been sold on large cloud platforms such as Amazon Web Services (AWS) as a one-size-fits-all panacea for their entire infrastructure backbone. What most didn’t anticipate, and are now facing, are the complexities and limits inherent in such platforms.

AWS is fantastic at what it does. Want to spin up a server? Mission accomplished with a few clicks of a mouse. However, as one moves beyond tactical tasks to strategic infrastructure design and implementation, managing AWS can become just as complex and time-consuming as managing in-house hardware and data center equipment. Whether physical or virtual, the question remains the same: do you have the available resources to dedicate solely to managing infrastructure? If you’re a busy service provider or application developer, chances are the answer is a resounding “no.”

Whether with AWS, an in-house private cloud, or a hybrid combination of platforms, the reality is that modern IT infrastructure still requires management, oversight and investment. The online portals and slick presentations wrapped around platforms like AWS make it seem like managing cloud infrastructure is as easy as logging into Gmail. Anyone who has dealt with being responsible for data security, application performance and resource uptime knows that dealing with truly mission-critical infrastructure is a much more demanding challenge. As Big Data balloons the footprint of infrastructure, the challenge of strong infrastructure management follows suit.

It’s also important to note that AWS and large public clouds quite often are not flexible enough to accommodate an organization’s total infrastructure footprint. For example, say Company X has 10 racks of servers and storage in colocation. Using AWS, they’re able to virtualize 80% of the infrastructure. However, Company X also has a couple of stacks of IBM gear running a legacy application that cannot simply or easily be recreated in AWS. What happens with that critical infrastructure component? There’s likely someone on staff who “owns” the management of it. Will that person now also be tasked with making sure the legacy system works seamlessly with the new (and untested) AWS platform? What happens when AWS updates its APIs or rolls out a new release? Will it still play nicely with the legacy app? Who is going to keep the entire infrastructure working in concert? How secure is the connection between the two? The questions go on and on.
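That last worry, a platform update silently breaking a legacy integration, is one a small automated smoke test can catch early. Here is a minimal sketch assuming the legacy application exposes a JSON status endpoint; the URL and field names would be whatever your own application provides, and are purely hypothetical here:

```python
# Minimal integration smoke test: confirm a legacy application's status
# endpoint still returns the fields downstream systems depend on.
import json
import urllib.request

def missing_keys(payload, expected_keys):
    """Return the expected fields absent from a decoded JSON payload."""
    return [k for k in expected_keys if k not in payload]

def check_integration(url, expected_keys, timeout=5):
    """Fetch a JSON status endpoint and report any missing fields."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        payload = json.loads(resp.read().decode("utf-8"))
    missing = missing_keys(payload, expected_keys)
    return {"ok": not missing, "missing": missing}
```

Run against a hypothetical endpoint, e.g. `check_integration("http://legacy-app.internal/status", ["status", "version"])`, after every platform change; a non-empty `missing` list flags the break before users do.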

This is where a truly flexible and responsive Managed Services Provider comes into play. There’s a new market taking shape that calls for holistic, comprehensive cloud management. Saying “we’re with Amazon” no longer means all IaaS concerns are magically solved or so wonderfully automated that resources are not needed to manage it. On the contrary, as the Big Data push puts increased pressure on IaaS systems and the risk of security breaches lingers overhead, having a trusted partner to bring all elements of the cloud together is needed more than ever.

Cloud computing has opened up a new world of flexible, software-defined solutions for IT infrastructure. The goal now is not merely to harness the benefits of virtualization, but to control and manage them efficiently. Infrastructure should power your business, not the other way around.
