SSH Key Management for Businesses
SSH is one of the most widely used protocols for connecting to UNIX and Linux systems. SSH, or Secure Shell, was developed to replace the plaintext Telnet protocol with encrypted communications between clients and host systems. You're probably familiar with the free Windows PuTTY client, which provides SSH and other types of connectivity between Windows and UNIX or Linux systems.
SSH communicates over a secure connection and encrypts traffic between the client and server. That's good, but how does SSH authenticate a legitimate client to a server without sending unencrypted passwords over the connection? SSH uses keys. Using SSH keys, you can authenticate your client to a server without sending a password at all. This method reduces the threat of brute force password attacks to near nil: guessing a private key is next to impossible, while passwords, no matter how complex, are still sometimes guessable.
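As a sketch of how this works in practice, the commands below generate a key pair and install the public half on a server. The hostname is a placeholder, and the key is written to a temporary path here; in real use you'd keep it under ~/.ssh.

```shell
# Generate an Ed25519 key pair (use -t rsa -b 4096 for older servers).
# -N '' sets an empty passphrase, which suits unattended automation;
# interactive users should choose a real passphrase instead.
keyfile=$(mktemp -d)/automation_key    # in practice: ~/.ssh/automation_key
ssh-keygen -t ed25519 -N '' -f "$keyfile" -q

# Then install the public key on the remote host (placeholder hostname);
# ssh-copy-id asks for the account password once, and afterward the key
# authenticates you without any password crossing the wire:
#   ssh-copy-id -i "$keyfile.pub" user@server.example.com
#   ssh -i "$keyfile" user@server.example.com
```

The private key never leaves the client; the server stores only the public half in the account's ~/.ssh/authorized_keys file.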
Keys are especially useful in situations where system administrators desire to use secure communications for automating tasks between remote systems.
Features of the SSH Protocol
- Strong encryption
- Strong authentication
- Communication integrity
- Tunneling and forwarding
- Authorization
Keys also help identify servers to clients. When you connect to a remote system for the first time, the host's key is stored in your personal key store. That stored key is checked each time you connect to the remote host. If the host's key changes, perhaps because someone is attempting to hijack your session, you'll receive a warning that the key has changed, and you can either proceed, if you trust the new connection information, or disconnect safely.
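On the client side, that personal key store is the ~/.ssh/known_hosts file, and ssh-keygen can manage it directly. A hedged sketch follows; the hostname is a placeholder, and -f points at a throwaway known_hosts copy so nothing real is touched (in everyday use you'd drop -f and operate on ~/.ssh/known_hosts):

```shell
# Build a throwaway known_hosts file to demonstrate against.
kh=$(mktemp -d)/known_hosts
d=$(dirname "$kh")
ssh-keygen -t ed25519 -N '' -f "$d/hostkey" -q
printf 'server.example.com %s\n' "$(cut -d' ' -f1-2 "$d/hostkey.pub")" > "$kh"

# Look up the key stored for a host:
ssh-keygen -F server.example.com -f "$kh"

# If the server was legitimately rebuilt and its key really did change,
# remove the stale entry so the next connection records the new key:
ssh-keygen -R server.example.com -f "$kh"
```

Removing a stale entry is the right response only when you can confirm the change is legitimate; otherwise the warning may be all that stands between you and a hijacked session.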
One of SSH's best features is the ability to tunnel insecure protocols over its secure channel. Tunneling occurs when you set up a secure link between systems using SSH and forward a local port to a remote system's insecure port. This lets you encrypt traffic from your local system to, say, a remote email server, so that all traffic between the two is protected. For example, you can set up ssh to listen on local port 2000 and forward that traffic to a remote system's insecure SMTP port, 25.
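A sketch of that exact setup follows; the hostname is a placeholder, and it assumes you already have SSH access to the mail server:

```shell
# Forward local port 2000 through the SSH connection to port 25 (SMTP)
# on the mail server. -N runs no remote command; -f backgrounds ssh
# after authentication.
ssh -f -N -L 2000:localhost:25 user@mail.example.com

# Point the local mail client at localhost:2000; everything it sends
# rides inside the encrypted tunnel to the server's port 25.
```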
Business Drivers
SSH keys seem like a no-lose prospect for administrators. However, there's no perfect situation or protocol. Keys must be managed. Users leave companies, sometimes on less than desirable terms; systems go through changes; and best security practices require that keys be refreshed periodically. Most organizations either don't handle key management at all or handle it ad hoc and manually. For organizations of any size, such relaxed management is not only a security risk, it also violates some regulatory standards.
The answer is to manage those keys using a software suite that provides insight into your environment, helps you stay safe and compliant, and does so at a reduced cost compared to labor-intensive manual management.
Key Management
Key management basically answers the question, “Who has access and to what do they have access?” This is not an easy question to answer without a centralized management suite. In fact, if your organization doesn’t have any key management in place, ask the question, “Who has access and to what do they have access?” And, remind the person or persons whom you’re asking that active user accounts do not correlate with actual access.
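Without a management suite, even a first-pass answer means auditing every host by hand. A minimal sketch of that manual audit for a single host follows; the audit_keys name is invented for illustration, and the /home layout is an assumption, not a standard:

```shell
# audit_keys BASE -- for each BASE/<user>/.ssh/authorized_keys, print
# the account name and a rough count of the keys that grant access to
# it (non-comment lines).
audit_keys() {
    for keyfile in "$1"/*/.ssh/authorized_keys; do
        [ -f "$keyfile" ] || continue
        user=$(basename "$(dirname "$(dirname "$keyfile")")")
        count=$(grep -cv '^#' "$keyfile")
        echo "$user: $count key(s)"
    done
    return 0
}

# Typical use, one host at a time -- which is exactly why this approach
# doesn't scale past a handful of systems:
# audit_keys /home
```

Note what this can't tell you: whose keys those are, whether the owners still work for you, or what those same keys unlock on other hosts. That cross-host view is what a centralized suite provides.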
In most environments, no one has, or takes, the time for key management. It has become an afterthought, at least until there's a security breach or a compliance audit.
A key management suite shows you all relationships between systems and user accounts. You can see, graphically, who has access and to what they have access. Key management isn't to be taken lightly. While it's almost impossible to spoof a connection or masquerade as a legitimate system, it is possible, through the use of stolen keys, to break into systems meant for simple automation or passwordless logins.
This does not imply a flaw in SSH itself; it's a vulnerability that can be mitigated through the use of a good key management suite. A simple demonstration will convince you that you need a key management program in your organization.
This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.
Linux as an Automation Host

Automation is a perennial buzzword among System Administrators (SAs) and in management circles alike. Business owners and managers demand automation in the belief that it will save "man hours" and possibly decrease the need for a full technical staff. System Administrators know better: staff reduction is neither the purpose of automation nor its inevitable result. The bad news for managers is that automation won't shrink the payroll. The good news is that there are several other reasons that make it a worthwhile pursuit.
• Reduce human error – Computers execute the same task repeatedly with no typos.
• Decrease repetitive tasks – Computers aren’t susceptible to boredom.
• Increase speed – Computers perform tasks very quickly—much faster than typing.
• Decrease low-level tasks – Free up System Administrator time for higher-level tasks.
• Present human-readable data – Automated tasks can output data into HTML.
Using Linux as an automation host makes sense for those who've embraced Linux for their businesses, but it also makes sense for those who've traditionally relied on commercial UNIX. The skills your UNIX SAs spent years acquiring on those platforms translate quite well to Linux. Linux looks and behaves like the commercial UNIX "flavors." In fact, many of the same open source tools that Linux uses are available on its commercial counterparts.
The point is that using Linux for automation won’t require a huge financial outlay for training. Your UNIX SAs have the skills they need to make it work.
Distribution Selection
There's no single correct answer to the "Which distribution should I use?" question for every company. The correct answer is to use the distribution that you're comfortable with or that you currently use. For example, if your company uses Red Hat Enterprise Linux (RHEL) for other tasks, then RHEL, its community upstream Fedora, or one of its free rebuilds (CentOS, Scientific Linux) is an appropriate choice.
If you haven’t yet selected a distribution, you should start with one of the so-called “top-level” or parent distributions such as Debian, SUSE, Red Hat, Gentoo or Slackware. Ubuntu Linux, based on Debian, is also very popular, easy to use and well supported.
Components/Tools
If you're going for the minimalist approach to automation, then a basic Linux installation gives you everything you need: the shell. The shell, commonly the Bourne-Again Shell (bash), contains all of the components necessary to automate just about any procedure performed manually at the command line. But minimal often isn't optimal, and plenty of tools exist that are both powerful and easy to use. For an SA wanting to expand a bit beyond the lowly shell, Expect is an excellent first step.
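As an illustration of what shell-only automation looks like, here's a hedged sketch of a disk-usage watchdog; the check_usage name and the 90% threshold are invented for the example:

```shell
# check_usage LIMIT -- read `df -P` output on stdin and warn about any
# filesystem at or above LIMIT percent full. A cron job could run this
# nightly and mail the output to the SA team.
check_usage() {
    awk -v limit="$1" 'NR > 1 {
        gsub(/%/, "", $5)                       # strip the % sign
        if ($5 + 0 >= limit)
            print "WARNING: " $6 " at " $5 "%"
    }'
}

# Typical use:
# df -P | check_usage 90
```

Nothing here is beyond a first-year SA, which is the point: the base system alone covers an enormous amount of routine automation.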
Expect is an automation tool that thousands of SAs consider essential. The official Expect site at http://expect.sourceforge.net offers this apt description:
“Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc. Expect really makes this stuff trivial. Expect is also useful for testing these same applications. And by adding Tk, you can also wrap interactive applications in X11 GUIs. Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool – using it, you will be able to automate tasks that you’ve never even thought of before – and you’ll be able to do this automation quickly and easily.”
Expect is a scripting language that allows you to automate keystrokes including anticipating prompts, entering passwords and automating “interactive” sessions in the shell. In addition to Expect, I suggest that anyone who doesn’t enjoy the tedium of watching and entering username, password and other shell prompts also install Autoexpect.
Autoexpect "watches" your interactive shell session as you log in, respond to various system prompts and log out, and generates an Expect script that replays the session. You'll have to clean up the generated scripts, but it beats writing them by hand.
For the non-minimalist SA, several other valuable automation tools to add to your toolbox are listed below. Each category shows equivalent options; you don't need ALL of the listed items in each category.
Scripting Languages
• PHP
• Ruby
• Python
• Perl
Web Services
• Apache
• NGINX
• Lighttpd
• thttpd
Databases
• MySQL
• PostgreSQL
• SQLite
• NoSQL
Perl is a common scripting language that is part of the base installation on most Linux distributions. It's a powerful, mature and well-supported language. The selection of tools, though, is often a matter of personal preference rather than the superiority of one technology over another.
Whatever choices you make for your automation host tools, remember to document and comment your scripts so that when you return to debug or upgrade them, you’ll understand their functions and purposes.
User Interface
How you display your data gathered by your scripts is another concern for SAs. Some scripts run and do their jobs without the need for output to the screen, to a logfile or to a text file of any kind. However, if your automation includes gathering performance data, executing commands that produce interesting or valuable output or information that requires review, then you need to think seriously about how you’ll display that data.
Plain text files can be ugly to look at, logfiles can be tedious to pore through and screen-captured text can be impossible to realign into a meaningful document. The great equalizer is HTML. Viewing output in the comfort of a web browser has its advantages. Formatting, searchability and customizability are a few HTML advantages that come to mind.
Screen output can be wrapped in HTML <pre> tags to preserve formatting so that the SA can read information exactly as the system designers originally intended. You can also present data in tables, in CSV files or in specific file formats for use with proprietary applications. A very good approach is to load gathered data into a database, extract it with a scripting language and present it via HTML.
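A minimal sketch of that <pre> technique follows; the to_html helper is invented for illustration, and a production version would also escape &, < and > in the input:

```shell
# to_html TITLE -- wrap stdin in a minimal HTML page, preserving the
# original command-line formatting inside <pre> tags.
to_html() {
    printf '<html><head><title>%s</title></head><body>\n' "$1"
    printf '<h1>%s</h1>\n<pre>\n' "$1"
    cat
    printf '</pre>\n</body></html>\n'
}

# Typical use: publish a nightly report where a browser can find it.
# df -P | to_html "Disk Usage Report" > /var/www/html/disk.html
```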
But, the user interface required for your data depends on your audience. System Administrators have different preferences and requirements than those who’d like to view data in a more formal way.
Automation Strategies
Strategies can run the gamut from automating a few tedious processes to a full-blown, 90%+ automated system. But there's one catch: you still have to have someone who knows what to do when things go wrong. A few years ago, when the big automation push first reared its head, SAs worried that they would eventually automate themselves out of a job. In practice, automation has taken away only a few of the lowest-level job duties. Notice the term "job duties," not jobs.
Automation can offer cost savings to a company in that it saves labor on lower-level tasks. Senior-level SAs shouldn't have to locate files that haven't been touched in six months and flag them for long-term storage. An automated process should take care of that and report the results to a text file, database or web page for casual review.
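That six-month example is nearly a one-liner on Linux. A sketch, where the stale_files name and paths are illustrative and -printf assumes GNU find:

```shell
# stale_files DIR DAYS -- list files under DIR not modified in DAYS
# days, one "date path" line each. An automated job could run this
# nightly and publish the output for casual review.
stale_files() {
    find "$1" -type f -mtime +"$2" -printf '%TY-%Tm-%Td %p\n'
}

# Typical use: flag files untouched for roughly six months.
# stale_files /data 180 > /var/log/stale-files.txt
```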
The automated gathering of performance data can save companies money by flagging systems that are overloaded or those that are underutilized. Automated processes can track file and filesystem changes to provide a pre-emptive notification of a system breach. Tracking user activity to measure productivity is an excellent feedback method during your annual personnel evaluation time.
All of these strategies save money. They save money because they save time and time is money. Using Linux is a part of the overall money-saving strategy. Automating processes with Linux is a step in the right direction for leveraging automation for your business.
You have to decide how much automation is correct for your business or business unit. The rule of thumb is that if a human regularly repeats a particular task, then that task should fall under serious scrutiny for automation. Tasks that require decisions, very complex tasks or tasks that require random timing to perform should remain manual.
Summary
Automation can offer significant cost savings and advantages over "old school" manual processes. The range of possibilities is wide, from simple archive scripts to elaborate content management systems to fully automated performance data gathering and reporting. It's wise to assess each process individually and strategically instead of announcing a blanket mandate to "automate everything" in the data center. Automation has its drawbacks. As long as everything works as designed, it's an excellent way to streamline processes. When something goes wrong, though, you'll need the human element to intercede, and you can't automate that.
Linux systems offer businesses a cost-effective set of tools, perfectly suited to automation, for almost any need. And, generally speaking, the automation tools listed in this article are cross-platform, which means that they work on a variety of operating systems. It also means that scripts created on one system are portable to others.

Linux has a Place in the Enterprise

From its meager beginnings as a hobby project to its extreme success among geeks, Linux has survived lawsuits, boycotts and onslaughts from every corner of the UNIX, Windows and Mac computing markets. In spite of its critics, Linux has made its way into the world's data centers. It enjoyed early success as a host platform for the Apache web server but has now blossomed into a formidable contender for rack space. As an operating system, Linux has the best mixture of vendor neutrality, open source code base, stability, reliability, scalability and affordability. It also gives the user or administrator the choice of a graphical user interface or none at all.
Linux has one very significant advantage over all other operating systems: hardware compatibility. It runs on a variety of hardware platforms from wristwatches to mainframes, although its most familiar playing field is x86 metal.
Two decades of community development and support have brought Linux into the mainstream as an enterprise-level operating system that’s competitive on every level of computing. Linux hosts workloads of all sizes and types: Web services, databases, applications, network services, file services, virtualization and cloud computing.
With the notable exceptions of Microsoft's Hyper-V and Solaris Zones, Linux-based virtualization solutions are the standard in contemporary data centers. The world's largest cloud computing vendor, Amazon, uses Xen virtualization for its services. You can also run Zones-style virtualization on Linux, though it's rare outside of Internet Service Provider (ISP) realms. Zones, containers and jails are a popular method of securely compartmentalizing applications from one another. ISPs use containers to separate users from one another on shared shell-access systems. It's an effective and secure way of leveraging inexpensive hardware across dozens of users.
Though Linux has a dedicated following, corporate buy-in and support from the world’s largest hardware and software vendors, there are still those who aren’t convinced. As late as mid-2011, I found several articles and commentary challenging the viability of Linux as a data center operating system.
The problem with Linux adoption stems from a misunderstanding of the Linux support model. Linux, as a kernel, and generally as an operating system, is free. Free means its code is free to use, change and adapt to any purpose. Some refer to this freedom as open source. Open source does not necessarily mean free. Proprietary software can be open source but it isn’t free to change, rebrand, etc. Linux is free software and it’s also open source.
It also means that Linux is free of charge. Vendors charge for media, consulting, support and a host of associated services but they usually do not charge for the Linux software itself.
The very thing that makes Linux so desirable to geeks and those knowledgeable in the ways of free software is also the aspect that makes some company executives turn away from Linux as a data center operating system. Incorrectly, they assume that since something is free and doesn’t have strings attached that there must be something wrong with it.
Dispelling myths associated with Linux use requires a lot of energy and time. But, there is one sure test for Linux data center viability: IT Services Support.
Linux has support, financial and technical, from the biggest names in the IT industry. Each of these industry giants has its own Linux distribution preference but, whichever distribution you decide to use, you can purchase full support for it. You can purchase 24x7x365 support from a variety of sources, including directly from Linux distribution vendors.
Every major IT services company supports Linux, Windows and commercial UNIX flavors as part of its portfolio. Linux is a mainstream operating system that carries workloads for companies of every size in the world. Linux is no longer cute or niche. If you use any online web hosting services, web-based CRM software, databases, virtualization or cloud services, chances are greater than 90% that you're using Linux behind the scenes for those services.
Linux supports high availability, clustering, high-performance computing and a variety of hardware platforms. It also supports industry standard LDAP (Directory) services, large databases, journaling filesystems, SMP computing and major computer languages including an implementation of Microsoft’s .NET platform.
Linux has its place in your data center doing the enterprise-level heavy lifting at a lower cost than comparable proprietary systems. The days of the monolithic, single operating system data centers are long gone. Heterogeneous networks, including Linux, are today’s standard fare.

Has the World outgrown Commercial UNIX?
When you read articles about cloud computing or Enterprise computing, you rarely see the term 'UNIX' anymore. You see plenty of rhetoric about Linux and Windows, but UNIX seems to have left the building for good. And by 'building,' I mean data center. That's not the case, however. UNIX is alive and well in the world's Enterprise data centers; it just doesn't grab headlines like it used to. Does the fact that UNIX isn't a newsworthy buzzword mean that it's on its last legs as an Enterprise operating system? Certainly not. Commercial UNIX might have lost its "coolness" but it hasn't lost its place running your business-critical applications and services.
Enterprise-level UNIX systems still rule the data center for the big workloads, the big databases and Big Data.
When selecting an operating system for your critical business needs, what do you look for? Reliability, availability, stability, versatility, virtualization, scalability, affordability, sustainability, supportability and sheer ability all come to mind as criteria on which to judge an operating system. You also need a company behind the operating system that employs experts who understand the critical nature of your business. That's the lure of commercial UNIX, and the decision point for many businesses: support.
It's fun to imagine a world where a company can throw caution to the wind and run entirely on free software. The reality is that staying operational sometimes means paying more. It's wise to be frugal, but you can't afford to gamble your business's livelihood on a whim or on an attitude founded on an ideal. Free software has its place in the data center. But are you willing to risk your company's mission-critical business on it?
So far, businesses say, “No.”
Commercial UNIX still wins in every category listed above. Yes, even affordability. Companies that use Linux rarely do so without also paying for support, and 24x7x365 support is expensive, even for Linux. Windows does some things well. Linux does some things well. Commercial UNIX does some things well. There is no single right answer to every problem. That's why few companies of any size run a single operating system or platform anymore.
Don’t misunderstand me, I love Linux and free software but you have to realize that, when it comes to risking millions or perhaps even billions of dollars on your computing infrastructure, you have to use a time-tested, battle-proven technology. That technology is commercial UNIX.
Allow me to quote some statistics on this topic gathered by Gabriel Consulting Group (GCG) in 2011 and published in their whitepaper, "Is Commercial Unix Relevant in the Midmarket?" The 300+ survey respondents represent companies of all sizes, including 44 percent from companies with more than 10,000 employees. However, the data in the report reflects the responses of the focus group (midmarket, 4,000 or fewer employees), which makes up 46 percent of those surveyed. GCG also includes some data from the large-company segment for comparison.
- More than 80% stated that their UNIX usage is increasing.
- 49% said that 75% or more of their critical applications run on UNIX.
- More than 75% reported that more than 50% of their mission-critical applications run on UNIX.
- Larger organizations stated that 75% of their critical applications run on UNIX.
- 90% said that UNIX is strategic to their business.
- 98% of large-company respondents stated that UNIX is strategic.
What are the most important factors to those who choose commercial UNIX for their mission-critical workloads?
- Availability and Stability
- Operating System Quality
- Predictable Performance
- Vendor Support
- Raw performance, Speed, Scalability
What are the less important factors for those who’ve selected commercial UNIX?
- Easy Administration and Management
- Acquisition Price
- System Familiarity
- Virtualization Capability and Tools
These two lists say a great deal about commercial UNIX buying habits. One glaring point is that price is of less consequence than reliability. The top three factors explain why commercial UNIX has many more years of life left in it for mission-critical applications and systems.
One point that's a bit unclear in the survey is scalability. Sure, Linux and Windows are both scalable, but mostly on PC-class hardware, and that includes virtualization. PC hardware can't compare to the "big iron" on which commercial UNIX runs. Of course, Linux (zLinux) does run on mainframes alongside z/OS, and that's pretty big iron, but most companies under the 4,000-employee level don't own a mainframe.
Although survey respondents said that raw performance, virtualization and scalability were not high on their most important aspects list, they’re still important. So is support. The people who write the checks still like to rely on companies that back their products. Companies have more confidence in commercial UNIX than they do in Linux, even when supported by primary vendors such as Red Hat or Novell. Although the Debian distribution is free of charge, companies would rather engage and pay for a real company’s support behind the product.
To drive home the point, the survey revealed that 41 percent of respondents feel that commercial UNIX support is superior, versus 30 percent who favor Linux vendor support. And 47 percent believe that UNIX is more available and more reliable than Linux.
Most small to medium-sized businesses (SMBs), like most Enterprises, have a heterogeneous environment. They run their mission-critical applications and services on commercial UNIX and user-oriented services (File, Web, Intranet) on Windows and Linux. Almost three-quarters of those surveyed said they would use commercial UNIX well into the future. While commercial UNIX doesn’t have that “cool” factor that Linux does, commercial UNIX still owns the mission-critical market.

Why You Need Infrastructure as a Service (IaaS)
Infrastructure as a Service is that part of cloud computing that allows you to lease and manage computing infrastructure for your business needs. Computing infrastructure includes virtual machines (VMs), operating systems, middleware, runtime components, network, storage, data and applications. Cloud computing vendors provide the necessary underlying physical hardware (servers, network, storage) that they own and manage transparently in the background. The two worlds have little crossover. The cloud vendor and customer have a non-intrusive relationship with one another just as you currently do with your web hosting provider. They’re there when you need help but their direct involvement in your business is zero.
The cloud vendor also supplies you with a management interface in which you work with your infrastructure. You’re responsible for license management for your operating systems and software. You pay for compute resources per CPU, per hour, per gigabyte of bandwidth, per gigabyte of storage or a combination.
The Three Faces of IaaS
IaaS isn't a single offering, but the boundaries between its types are well-drawn. First, there's the Private Cloud. Private IaaS is exactly what you think it is: a dedicated, private infrastructure. Think of your own data center setup as a Private Cloud IaaS. Of course, unless you have cloud infrastructure (virtualization, storage, extreme redundancy, etc.), it isn't officially a cloud, but you get the idea.
On the other end of the spectrum is the Public Cloud. A Public Cloud is a 100 percent hosted solution. You own no hardware. It is the Public Cloud that is the focus of this article.
If you combine the two cloud concepts, you have what’s known as a Hybrid Cloud. A Hybrid Cloud can be any percentage mixture of Private and Public infrastructures for your company. Most companies will evolve into this type of cloud from a traditional, private hardware infrastructure to a cloud-based one.
A Hybrid Cloud is the solution that cloud vendors recommend to their customers who’ve grown their own data centers and that are comfortable with that model. Mix your Private Cloud with the Public Cloud for a solid and complete solution for you and your customers. A Hybrid Cloud mixes the security and control of a traditional data center with hosted cloud infrastructure. Typically, companies will transition their disaster recovery efforts to the Public Cloud while retaining production operations in-house in a Private Cloud.
Industry experts view Hybrid Clouds as a transitional step to a fully Public Cloud. They see the process as a stepwise migration: as leases and service contracts expire, companies will move their computing workloads from private data center architecture and Private Clouds to virtualized architecture in the Public Cloud.
Analysts predict that by the end of 2012, as many as 20% of businesses will exist completely in the Public Cloud.
Cost Savings But Not Where You Think
Cost tops the advantages of IaaS cloud computing. To purchase the same amount of physical computing power, to manage that computing power and to house that computing power costs many times more than bulk pricing from a cloud vendor. IaaS is basically hardware outsourcing. You don’t own the hardware. You don’t manage the hardware. You use the hardware but its care and feeding are not your problem.
Put whatever moniker on cloud computing's IaaS that you want, but it's really no different from what you probably do now in your current data center. Unless you own your data center, pay its staff, maintain the facility and physically service your own hardware, you're already using hardware (infrastructure) as a service. The primary difference between a standard data center space lease and IaaS is that with IaaS you don't have to deal with any hardware at all.
IaaS frees you from purchasing or leasing hardware, having it shipped to the data center, paying someone to deploy it into a rack, paying for that rack space, managing the hardware throughout its life cycle and taking care of its disposal. That’s why traditional data center infrastructure management is expensive. You have to pay for the hardware, you have to pay for maintenance, you have to pay for management and you have to pay for the business services that required all of this expense in the first place.
On the other side of the argument, cloud opponents state that your TCO is no lower with IaaS than with traditional data center service. That might be true from a pure hardware point of view; after all, the IaaS vendor has to pay for the data center infrastructure and pass those costs on to you. The savings, however, come in the form of labor costs.
Count the number of full-time employees (FTEs) you have on hand right now to manage your infrastructure. Now count the number of FTEs you'll need once you no longer manage any hardware. Is there a significant difference? Add up the total annual cost of the FTEs you won't need anymore and multiply it by three (a standard hardware lease length). That number is your savings.
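The arithmetic is straightforward. A sketch with invented placeholder figures (your FTE counts and fully loaded costs will differ):

```shell
# Rough savings estimate: (current FTEs - FTEs still needed)
#   x fully loaded annual cost per FTE x 3-year lease term.
# All figures below are illustrative placeholders.
fte_before=6
fte_after=4
annual_cost=120000   # fully loaded cost per FTE, in dollars
years=3
savings=$(( (fte_before - fte_after) * annual_cost * years ))
echo "Estimated 3-year labor savings: \$$savings"
```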
Since your new virtual infrastructure comes with online management tools for creating new servers, installing operating systems, presenting storage and configuring network, you’ll need fewer FTEs to handle the job.
Lower Entry Barriers and Rapid Innovation
IaaS also lowers the financial and logistical barriers for startup businesses to enter the market and push their products and services to customers in a fraction of the standard timeframe. The IaaS model allows startups to start small and grow to any size on a pay-as-you-go plan, without the huge outlay of capital on hardware and FTEs that traditionally built businesses have experienced.
Another advantage of IaaS is rapid innovation. For example, if you have an idea for a new service today, you could spin up a virtual test infrastructure for a few hundred dollars, test your service, demo your service and deploy a working business model in a matter of days instead of months. In a time when windows of opportunity are often very small, IaaS makes sense for anyone who needs a quick service build-out to show investors or potential customers.
Embrace the Elastic and Mobile Cloud
IaaS also makes your company mobile, elastic and global. You can manage your systems from anywhere, you can shrink and grow your computing infrastructure as needed and you can keep your global customers happy with zero downtime. And, since you’re not tied to a server room or data center, your office location is irrelevant. You can work from home and your employees can be spread across the globe.
Have you ever had to move systems, network and storage from one location to another? If you have, you know about expense, outage and failure. Most cloud vendors maintain geographically disparate data center locations to ensure zero downtime for your infrastructure. Sure, there’s an additional cost for the service but how much is your current disaster recovery solution costing you?
Summary
You need IaaS because you need mobility, agility, stability, availability, elasticity and frugality in your business. You can save money. You can beat the competition to market. And, you can do it with the peace of mind that someone else is minding the hardware foundation under your business.
Where to Go from Here
If you’re considering moving to an IaaS solution or you’re part of a startup, contact a cloud vendor and discuss your needs. Remember that not all cloud vendors can or will give you good advice. Look for experience, longevity, availability, customer service and customer satisfaction in your quest to migrate to the cloud. Remember that your partnership with a cloud vendor is an important one. It’s more than a simple landlord-tenant relationship; it’s a cohabitation. You’re domestic partners and you have to select wisely. So, you need to find a partner who can help you make a smooth transition to your desired level of cloud adoption, since you’re going to be there a very long time.

