By Gunjan Siroya
There are several significant changes occurring in the Healthcare industry, but very few have as direct an impact on your earning potential as the transition from ICD-9 to ICD-10. While there have been many conversations about the need for the upcoming ICD-10 transition, few have analyzed the risks and the positive impacts on the business of healthcare.
If you have not started, or are in the process of making the ICD-10 transition, you need to know how to prepare your organization for this big change and assess the impacts in real business numbers.
Hello Health teamed up with Netspective to talk about ICD-10 and the Revenue Connection. Fred Pennic, a leader in ICD-10 transition, Healthcare IT consultant and a prolific writer on various topics about Healthcare IT, led this webinar.
Their shared objective was to provide advice on how to make the transition from ICD-9 to ICD-10, understand the revenue impacts from this change, and learn how this transition will help you manage business risk. This webinar reviewed important considerations for ICD-10 readiness to help your staff remain focused and avoid any revenue hiccups.
Netspective covered a lot of ground and provided numerous tips and tricks for planning or improving your ICD-10 transition strategy. The recommended approach is to focus on three things – People, Process and Technology. Change management is a big challenge in this transition, and the webinar discussed how hard it is to rise above the daily grind, how to rally the organization around this change, how to build a measurable ICD-10 transition plan, and how to deploy the change across your organization’s ecosystem.
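One concrete piece of a measurable transition plan is remapping existing ICD-9 codes to their ICD-10 equivalents and flagging the claims that cannot be remapped automatically. The sketch below illustrates the idea with a tiny GEM-style crosswalk; the sample mappings are illustrative only, and a real plan would load the full CMS General Equivalence Mappings and route ambiguous codes to coder review.

```python
# Minimal sketch of a GEM-style ICD-9 -> ICD-10 crosswalk check.
# The three sample mappings below are illustrative placeholders; a real
# transition plan would load the complete CMS General Equivalence Mappings.
CROSSWALK = {
    "250.00": ["E11.9"],           # type 2 diabetes without complications
    "401.9": ["I10"],              # essential (primary) hypertension
    "786.50": ["R07.9", "R07.89"], # chest pain: an ambiguous one-to-many case
}

def remap_claim(icd9_codes):
    """Return (mapped, needs_review) lists for a claim's ICD-9 codes."""
    mapped, needs_review = [], []
    for code in icd9_codes:
        targets = CROSSWALK.get(code)
        if targets is None or len(targets) > 1:
            # Unmapped or ambiguous codes need human coder review --
            # these are the claims most likely to cause revenue hiccups.
            needs_review.append(code)
        else:
            mapped.append(targets[0])
    return mapped, needs_review

mapped, review = remap_claim(["250.00", "786.50"])
```

Measuring the size of the `needs_review` pile across a month of historical claims is one simple way to turn transition risk into a real business number.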
The post ICD-10 Transition and its impact on revenues for healthcare providers appeared first on Netspective.
Netspective has been working on healthcare IT and medical device integration software for about a decade and has extensive experience with outsourced and offshore teams. On two recent, unrelated projects I had the opportunity to work with two different offshore teams located on the southern Indian peninsula. Both teams were well qualified in using the required development software and were able to successfully support the project needs.
In general, both teams comprised experienced leads and many hard-working individual contributors. The leads demonstrated good knowledge of software engineering principles and appeared to be experts in the technologies they were using. I noticed that if the leads did not have an existing solution, or needed to incorporate a new technology or concept, they were able to research it and come back with good, applicable solutions to meet the requirements. I also noticed that the individual contributors were very hard working, picked up new technologies quickly and adapted well to developing solutions for our clients. Given these qualities, I have found that offshore development teams never say “no” to a request made of them.
On this side of the ocean our management team is always looking to increase our effectiveness, to benefit our clients, by leveraging the skills of our offshore engineering team partners. Most of the time, the offshore development team has surprised me with their ability to deliver quickly. Since we use Agile development methodology (ref: Netspective Software Development Process Overview) on the projects we have used the following techniques to improve communication and delivery capabilities of our offshore partners:
I am interested in hearing your experiences, good or bad, working with offshore teams and your project outcomes where you have used an offshore development team.
The post 5 communications techniques to get the best out of your offshore development teams appeared first on Netspective.
The fiscal challenges confronting the healthcare industry around the world require shifting the delivery of care from expensive centralized settings to lower cost settings while seeking to improve quality and patient experience. Organizations such as hospitals, Integrated Delivery Networks (IDNs) and newly created Accountable Care Organizations (ACOs) are trying to find the right mix of technology, facilities, clinical personnel, and information sharing to address these issues.
Telehealth and “connected care” experiments have shown that many types of expensive care that had been, in the past, reserved for office visits or hospital attendance can easily be done in the home or a lower cost setting. It seems every year at the American Telehealth Association (ATA) conference we see new technologies, creative solutions, and a strong desire to have Telehealth succeed.
Unfortunately, reality has not lived up to expectations. Until now.
So what have been the challenges faced by Telehealth? For starters, everything in healthcare revolves around reimbursement. Until recently, reimbursement structures for Telehealth services have been limited and spotty at best. The biggest payer in the country, the Government, is changing that. For example, on July 6 the Centers for Medicare & Medicaid Services (CMS) issued a proposed rule that, effective January 1, would for the first time cover the following additional services when provided using telemedicine (with special qualifications):
Alcohol and/or substance (other than tobacco) abuse structured assessment and brief intervention;
Given how common the services listed above are, and how many patients don’t receive them because office visits are harder to schedule or more costly to deliver in person, many care delivery organizations can begin offering these services over telehealth solutions next year, increasing business while effectively treating patients.
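The business case above can be turned into a quick back-of-envelope estimate. All figures in this sketch are hypothetical placeholders, not actual reimbursement rates; plug in your own payer's fee schedule and realistic visit volumes.

```python
# Back-of-envelope annual revenue estimate for a new telehealth service line.
# Every number here is a hypothetical placeholder -- substitute your payer's
# actual fee schedule and your clinic's real visit volumes.
def annual_telehealth_revenue(visits_per_week, reimbursement_per_visit,
                              no_show_rate=0.10, weeks=50):
    """Completed visits per year times the per-visit reimbursement."""
    completed = visits_per_week * (1 - no_show_rate) * weeks
    return completed * reimbursement_per_visit

# e.g. 20 telehealth follow-up visits a week at an assumed $45 rate
estimate = annual_telehealth_revenue(20, 45.00)
```

Even a rough model like this makes it easier to compare a telehealth service line against the cost of the technology needed to deliver it.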
With Medicare taking the lead and other commercial payers taking notice, more changes are coming as our healthcare system shifts from a volume-based fee-for-service model to a value-based pay-for-performance model. This is evident in the shared-risk and transferred-risk models now becoming prevalent among large consolidated healthcare systems like IDNs and ACOs nationwide. More recently, both Medicare and the Veterans Administration have come out with strong, robust guidelines for the use of Telehealth in everyday care, with Medicare also providing a thorough reimbursement structure.
So is Telehealth only for rural patients who cannot get to a major medical center? Not necessarily. Yes, rural medicine will see the greatest benefit, but with easy-to-use, intelligent multimedia systems like all-in-one PCs or portable tablets like the iPad, the opportunity exists to look at your regular doctor visit in a whole new light. With built-in Bluetooth capabilities in most computers today, you can easily connect home-based medical devices that record and track your vital signs and medication regimen, and even conduct genetic or molecular testing from the comfort of your home. Telehealth can also be used to manage post-discharge care by providing timely interactions for any emergent situations that may arise.
Does this mean Telehealth is ready to be a runaway success? Not exactly. There are still issues around verification of service, the technical complexity of solutions that will be used by patients at home, and a general reconditioning of how we approach our healthcare interactions. For example, Accenture conducted a Connected Health Pulse Survey of 1,110 U.S. patients and found that 90% of patients want to self-manage their health online, but that “85 percent of respondents preferred to see doctors in person when needed rather than relying on alternatives such as telehealth consultations.”
While the 85% seems high, it means that already 15% of patients are fine with just telehealth solutions. The good news is that innovation is healthy and abounds with creative start-up companies that are taking on many of the usability and interaction challenges voiced in the Accenture survey. Enhancements in technology and patients getting accustomed to remote care will mean more patients will accept telemedicine over time, especially when they realize the immediacy of the interactions.
How Telehealth and Telemedicine should become part of your revenue streams
Having launched many healthcare technology solutions, my experience shows that the probability of success for major patient-focused transformations at care delivery organizations is usually quite low unless the organization buys into the future of healthcare.
So if you are one of these institutions that is being asked to take on more risks and provide more care with fewer dollars, Telehealth is an option you should evaluate, keeping in mind the following considerations:
1. Technology Partner – the success of any efficient clinical delivery system is based on the information management infrastructure you deploy. This ranges from servers and client computers to the software solutions you integrate into your clinical workflow. Make sure you pick a partner that has shown a consistent commitment across all of its business units to supporting new delivery concepts like Telehealth and mobile healthcare. Every technology company wants to sell into healthcare; it’s a big industry where vendors see serious dollars. Pick a vendor that has shown vision, works with startups to bring new technologies to market, and understands where healthcare is going from a real dollars and business model perspective.
2. Takes two to tango – remember that for a Telehealth solution to work, two parties have to work together toward mutual benefit: your patient and you. With the widespread acceptance of smartphones, tablets and Cloud-based solutions, your patients’ expectations of your Telehealth solution will be fairly high. So again, look for a technology partner that has demonstrated an understanding of how people like to work and interact from home. This way your hospital and/or clinic based solutions will be deployed in a manner that is seamless for your patients to use and provides a high degree of customer satisfaction.
3. Minimize change to workflow – your clinical staff has been trained to take care of patients in the safest manner possible. Don’t upset the apple cart by imposing revolutionary changes. Technology for the sake of technology is not a good thing in healthcare. Smart technology that understands how your people work and provides better clinical care and services, without overhauling how your staff works, will provide the greatest ROI because it will be adopted much more rapidly and efficiently by your staff. And remember, if your staff does not like the new solutions, they have no qualms about telling patients how the new system has made their work environment worse.
4. Educate clinicians on value of preventive care and wellness – as you take on more fiscal responsibility for your patients, promote preventive care and wellness activities. Preventive medicine can significantly lower the cost of chronic disease management and when combined with wellness care like obesity management services, can provide improved patient satisfaction and additional revenue streams. Telehealth is the most efficient way to provide these services.
Thanks to technology advances coupled with reimbursement changes, Telehealth has now become a viable option for hospitals looking to extend their business model into the patient’s home while decreasing their cost of delivering care. Just keep in mind common-sense considerations about the impact on your clinical staff, and choose a partner who has shown a commitment to the future of healthcare.
The post Telehealth means better care for patients and a new business opportunity for care delivery organizations appeared first on Netspective.
Federal agencies are facing critical needs for information technology upgrades and enhancements. Not only are many of today’s government systems antiquated, they are also expensive to maintain and manage. The core systems have undergone so many changes over the years that the source code has become virtually impenetrable. Add to this tardy application response times, clumsiness in data handling, problems with connectivity and integration, lack of flexibility to add new services and functionalities, lack of web capabilities, growing license fees and maintenance costs, and a dwindling number of resources capable of supporting these systems, and you have the perfect recipe for impending disaster. There is a general recognition that IT infrastructure modernization is necessary for meeting today’s expanded federal government needs.
A modernized IT infrastructure that is architected appropriately would be much easier for Federal agencies to maintain, and less costly to secure. Since a modernized IT infrastructure would consist of components that cost less, last longer and require less labor to operate and maintain, the total cost of ownership would also be considerably lower. In addition, modernization would improve the interoperability of government IT and provide unified real-time access to information, as well as visibility across agencies into data residing on disparate systems. This will create a collaborative environment and contribute to faster and better decision making.
Modernization often entails migration from legacy systems and determining ways to achieve greater collaboration and interagency sharing, dealing more effectively with unstructured data, and consolidating silos of information. Modernization will involve migration of large volumes of data and complex business rules to new systems. An effective migration strategy needs to be put in place for identifying master and transaction data and moving them from existing systems to new enterprise systems or custom applications. To maintain a technological edge, Federal agencies must adopt an enterprise-wide service oriented architecture that is interoperable with systems in other Federal departments and can share information with non-traditional partners. Successful enterprise-wide solutions generally drive down the total cost of ownership while offering a single source for real-time online data that is available when needed.
Most legacy environments are expensive in terms of both hardware infrastructure and software license fees. The need to reduce this expense is a significant driver for many organizations to modernize their legacy systems. CIOs have multiple options for application modernization, including redevelopment of applications, divestiture, and outsourcing. Redeveloping such complex applications to be on par with modern industry standards would be a monumental task in terms of both cost and the time it would take to complete. While divestiture may meet key business needs in many cases, it often has limitations, and here again, the cost can be prohibitive. Outsourcing may not be an option open to the Federal CIO, and even if it is, it can have serious disadvantages, including loss of quality and scheduling control. There are, however, various other options that can be examined as a means of modernizing existing technologies. These include Cloud Computing, Unified Communications, Service Oriented Architectures (SOA) and Virtualization, all of which can also contribute substantially to reducing overall costs.
By using Cloud services government agencies can gain access to powerful technology resources faster and at lower costs. Government departments can save scarce resources for mission critical programs rather than spending it on purchasing, configuring and maintaining redundant IT infrastructure. Federal departments can significantly reduce their IT costs and complexities, optimize workload and improve service delivery by adopting Cloud Computing. It provides a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel or licensing new software. Incorporating cloud computing into the data center consolidation plan can minimize the government’s carbon footprint, reduce IT fragmentation, improve resource utilization, and conserve electrical power and fuel.
Government agencies can transition to the Cloud at their own pace. Security risks are often cited as the number one concern while transitioning to Cloud Computing. Modernization offers a step-by-step approach that enables government agencies to move non-core functions to the Cloud first, and once that has been successfully accomplished, move core functionalities as well. Transitioning to a private cloud is one option – providing the same web benefits from within the boundary of an agency’s own firewall. A private cloud enables agencies to leverage benefits like pay-as-you-go licensing and elasticity – from within their own data centers, at their own pace. Another option would be to move a single application into the Cloud environment. Moving a single application will demonstrate the ease with which applications can be transitioned to a different operating environment while maintaining full agency control over the data.
New technologies like unified communications offer exciting opportunities for expanding human collaboration within organizations and hold tremendous potential for supporting business strategies that rely on increased customer self service, enhanced employee productivity and streamlined processes. Unified communications provides government workers with the flexibility to reach their colleagues and access the information they need anywhere, anytime. It enables faster, better informed, collaborative decision making, which allows governments to improve the way they serve and protect citizens. Combining unified communications with Web 2.0 technologies such as mashups and blogs can enhance service delivery to citizens. When successfully deployed, Unified Communications helps organizations reach their goals and meet deadlines by enhancing communication and access to data. It increases efficiency and reduces the time taken to share information. Because these technologies are IP-based, existing infrastructure investments can be leveraged, new features can be added as and when needed, and under-utilized network capacity can be tapped. The best way to reap the benefits of Unified Communications, without having to deal with the complexity of integrating and managing the different technologies involved, is to leave the heavy lifting to a managed services provider.
Service Oriented Architecture (SOA) is useful for all major agencies because it offers the flexibility for rapid deployment of new software applications with minimum relative cost impact. SOA Integration involves re-using existing legacy systems by wrapping them with SOA interfaces. SOA Integration provides Federal agencies with increased agility as legacy components can be used as part of a new SOA based architecture. It allows government departments to adapt new technologies while responding to changing user needs. SOA reduces system complexity and deployment risks through a shared development style, uniform standards and common interfaces. By adopting a service oriented approach agencies can achieve the following benefits:
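The legacy-wrapping idea above can be sketched in a few lines: an unchanged legacy routine is placed behind a uniform, self-describing service interface. The `legacy_lookup` function and the `records.v1` contract here are purely illustrative stand-ins, not any particular agency's system.

```python
# Sketch of SOA-style integration: a legacy routine is exposed behind a
# uniform service interface without modifying the legacy code itself.
# `legacy_lookup` is a hypothetical stand-in for any existing system call.
import json

def legacy_lookup(last4, year):
    # Pretend this is a decades-old records routine we cannot change.
    return {"id": f"{year}-{last4}", "status": "ACTIVE"}

class RecordService:
    """Service facade: a stable, self-describing contract over the legacy call."""

    def handle(self, request_json):
        # The service speaks a uniform JSON contract; only this thin
        # adapter layer knows the legacy routine's positional arguments.
        req = json.loads(request_json)
        record = legacy_lookup(req["last4"], req["year"])
        return json.dumps({"service": "records.v1", "result": record})

svc = RecordService()
response = svc.handle('{"last4": "1234", "year": 2012}')
```

Because consumers see only the `records.v1` contract, the legacy routine behind it can later be replaced with a modern implementation without touching any client.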
Virtualization abstracts software from hardware and enables greater flexibility in processing IT services on different resources, at different locations, and at lower hardware and maintenance costs. In addition, server virtualization can extend the use of existing data center space and existing power and cooling capacity while increasing operational efficiency. Using a standardized platform is another option for government agencies to cut costs and boost performance. Standardization allows service providers to deliver utility IT services to a number of clients, thus helping them achieve better economies of scale. This will enable them to provide the services at lower prices.
Government agencies can expect to realize a number of benefits through modernization of their IT environment, including the following:
Legacy systems are an organization’s biggest assets. The amount of data that these systems have accumulated over the years is invaluable and irreplaceable. Many Federal organizations depend on legacy systems for day-to-day operations. Though most of these systems have become obsolete and unwieldy, doing away with them altogether will be like throwing the baby out with the bath water. It is not an economically viable option. What is required is to leverage existing investments in IT applications so as to be able to address changing business requirements with agility. Legacy modernization using options like Cloud Computing, Unified Communications, Services Oriented Architectures and Virtualization is the answer.
The post Modernization of Administrative Process and IT Systems appeared first on Netspective.
Governments worldwide are beginning to insist that their IT departments adopt Open Standards, both because of the improved interoperability, organizational flexibility and responsiveness that such an initiative can deliver, and as a means of avoiding vendor lock-in. As technology becomes an increasingly integral part of other disciplines, this new-found preference for Open Standards is driving innovation in politics, healthcare, disaster management and countless other sectors.
In keeping with the general trend towards Open Standards, many government IT software procurement policies today specify that products and solutions should support and implement Open Standards before they can be considered. However, there are several challenges to be overcome if this is to be put into practice. The reality is that, sometimes, Open Standards may not be available or are not mature enough for a required technology. Also, in some cases, the usage of a de facto standard is so entrenched that it is not practical to ignore it.
By adopting Open Standards, Federal agencies can achieve the following:
Individual standards typically are developed in response to specific concerns and constituent issues expressed by both industry and government. U.S. industry competitiveness depends on standardization, particularly in sectors that are technology driven.
Standards seek to ensure that:
To effectively respond to the challenges posed by globalization, the emergence of new economic powers, and public concerns such as climate change, and because of the need to stay abreast of evolving technologies, standards development organizations and the standards development process itself must be flexible as well as capable of adopting the most innovative and best performing technologies available.
Open Standards enable diverse products to work together. This gives governments choice among a diversity of applications from a wide range of suppliers/vendors, and leads to innovative technological developments. In the IT industry, standards are particularly important because they allow interoperability of products, services, hardware and software from different parties. Since the specifications are known and open, it is always possible to get another party to implement the same solution adhering to the standards being followed. Interoperability allows for better coordination of government agency programs and initiatives to provide enhanced services to citizens and businesses.
If Open Standards are followed, applications are easier to port from one platform to another since the technical implementation follows known guidelines and rules, and the interfaces, both internally and externally, are known. In addition to this, the skills learned from one platform or application can be utilized with less need for re-training. It is also in the interests of national security that Open Standards are followed to guard against the possibility of over-reliance on foreign technologies/products.
An interoperability framework needs to be put in place. This can provide baseline standards, policies, guidelines, processes and measurements for governments to adopt. The framework will detail how interoperability will be achieved among agencies and across borders, allowing the exchange and management of data and functionality. Combined with baseline audits of interoperability, interoperability frameworks can help create a pathway to greater interoperability through open IT ecosystems.
Baseline audit, mapping, and selective benchmarking efforts that are guided by a clear vision and goals make later policymaking more focused, effective and user driven. These efforts, if initiated with the early involvement of relevant stakeholders, will help identify systems silos that inhibit interoperability, and define areas where Open Standards are likely to have the greatest impact. Mapping standards means identifying all standards in use within and across agencies. An early mapping effort enables agencies to focus on making legacy systems interoperate and minimizes any disagreement over definitions that may impede progress.
As with interoperability, Open Standards are the backbone of a service-based approach. In particular, a service orientation increases flexibility, modularity and choice. Open Standards ensure that criteria and decisions remain service-oriented and technology-neutral, and enable managers to combine, mix and match, and replace components without the expense and expertise of custom-coding connections between service components. Service-oriented, Open Standards based interchangeable components give government organizations choices at the component level. Changes such as replacing legacy systems can be made without degrading the functionality of other parts of the ecosystem. Services can be built with modular components on different systems using a service-oriented architecture.
By following Open Standards, governments gain new efficiencies from increased competition, access and control. Greater competition among suppliers, products and services helps governments maximize their return on investments and performance. Openness can also strengthen a buyer’s negotiating position since they have more options. This ability to choose not only lowers costs but also gives end users more latitude to set requirements and performance criteria.
The ability to see, use, implement and build from an Open Standard allows managers and users to exert more control while determining if and when they need to add functionality, swap components or fix bugs. By relying on Open Standards, managers can decide when to upgrade and who provides software support. They can replace suppliers or even implement upgrades in-house. Organizations can keep pace with changing technology, and become more efficient and effective in meeting citizen and taxpayer needs.
Having a knowledgeable citizenry is necessary if governments are to sustain the advantages of open technologies, innovate and spur a society’s social and economic development. Education, R&D and training merit attention and resources in order to strengthen a nation’s knowledge base and its ability to share in innovation.
Governments must find ways to support and extend the work of collaborative communities, and where possible, formalize their role in a consultative process. User feedback, which often highlights smaller issues, may help identify new areas of growth for standards, evolve service-oriented approaches, test new designs or produce other innovations that enhance IT ecosystems. Collaborative development processes can also broadly impact openness in government and an economy, driving efficiencies, growth and innovation, as well as contribute to a society’s sustainability.
Open Standards are important to promote the wider adoption of standards and the corresponding development of interoperable and innovative technologies. There is often a degree of openness in the processes followed in the development of standards. However, it is the openness of the legal interests in standards – namely, users’ rights to access, use and share the technology embodied by a standard and its documented specifications – that is of fundamental importance in promoting interoperability and innovation.
In moving towards Open Standards it is necessary that the legal rights and restrictions that apply to standards and standard specifications are properly managed. In particular, it is crucial that copyright and patent interests are clearly disclosed to all developers and users of standards from an early stage and that the terms upon which these interests are licensed are made clear.
The post How Adoption of Open Standards Can Benefit Federal Agencies appeared first on Netspective.
With its huge potential for saving money and improving operational efficiency, virtualization has come as a boon to cash-strapped Federal CTOs. Not only can it substantially reduce the cost of running data centers and corporate networks through more efficient use of both hardware and software, it can also significantly reduce the number of physical devices on the network, thus considerably lowering the complexity of managing the network infrastructure. Whether it’s greater performance that you seek, or reliability, availability, scalability, consolidation, agility or a unified management domain, virtualization is your best bet for supporting public sector IT modernization goals.
Virtualization will, however, result in more traffic in the consolidated area. Rather than merely adding more virtual servers, what is required is to create a converged infrastructure that allows all resources to be shared. This way storage, bandwidth, and applications can be reallocated based on current workload and organizational need, without mixing with or disrupting other partitioned resources in the system.
As virtualization efforts evolve, you run into networking challenges, and that’s the time you should begin to think about laying the groundwork for the adoption of 10 Gigabit Ethernet in the data center. The shift toward 10 Gigabit Ethernet means more than moving to a higher bandwidth. It means re-examining your entire network architecture. Higher throughput from 10 Gigabit Ethernet switches allows you to connect server racks and top-of-rack switches directly to the core network, obviating the need for an aggregation layer. Now that high-speed network technologies like 10 Gigabit Ethernet have become widely available, several new solutions have been developed to consolidate network and storage I/O onto a small number of higher bandwidth connections.
10 GbE technology is used primarily to interconnect switches and routers. By separating data and routing information, it allows you to control your own IP address routing and make the changes you want. You do not have to share your routing scheme with your service providers. And you can support both IP and non-IP based protocols. The latest 10 Gigabit Ethernet rack-top switches now support the same 48-port density as 1 Gigabit Ethernet switches, which means you do not lose valuable rack space when upgrading to 10 Gigabit Ethernet.
The primary advantage of 10 GbE technology is the sharp reduction in the number of adapters and ports required for a server. In addition to the usage of fewer physical devices, you also save valuable floor space, as well as power and cooling resources.
With applications becoming increasingly bandwidth-intensive, faster networking solutions are required to improve network connectivity while maintaining high reliability levels. 10 GbE technology provides the perfect solution to meet this requirement. In addition to increasing network speed, it also offers the following:
Upgrading to 10 Gigabit Ethernet is a great way for Federal agencies to get better results from their server virtualization and infrastructure consolidation initiatives. It provides a significant increase in bandwidth while ensuring full compatibility with existing interfaces, thus protecting your investment on cabling, equipment, processes, and Ethernet based training. It retains your existing Ethernet architecture, including the Media Access Control (MAC) protocol and the Ethernet frame format and frame size.
But the biggest benefits of 10 GbE come from having fewer servers and less storage gear plugged in. You can deploy just two 10 Gigabit Ethernet connections instead of four to eight 1 Gigabit Ethernet connections in each server, and still achieve full redundancy for availability along with additional room for expansion. In terms of scalability, a 10 Gigabit Ethernet data center can carry terabits of aggregate traffic without adding more layers to the network. Additionally, it simplifies the network design by eliminating congestion points and reduces the need for complex QoS schemes.
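The consolidation argument above can be put into rough numbers. This is a back-of-the-envelope sketch; the per-port wattage figures are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope comparison of per-server NIC configurations.
# Watts-per-port values (8 W for 1 GbE, 14 W for 10 GbE) are assumed
# for illustration only.

def nic_plan(ports, gbps_per_port, watts_per_port):
    """Summarize a per-server NIC configuration."""
    return {
        "ports": ports,
        "aggregate_gbps": ports * gbps_per_port,
        "watts": ports * watts_per_port,
        # Bandwidth still available if one port (or its switch) fails:
        "gbps_after_failure": (ports - 1) * gbps_per_port,
    }

legacy = nic_plan(ports=8, gbps_per_port=1, watts_per_port=8)      # 8 x 1 GbE
upgraded = nic_plan(ports=2, gbps_per_port=10, watts_per_port=14)  # 2 x 10 GbE

# Two 10 GbE ports deliver more aggregate bandwidth than eight 1 GbE
# ports, still survive a single link failure, and draw fewer watts.
print(legacy)
print(upgraded)
```

Even after losing one of its two links, the upgraded server retains more bandwidth than the fully intact eight-port legacy configuration.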
Here’s a rundown of the benefits of a consolidated and virtualized 10 GbE data center:
With network traffic increasing inexorably by the day, Federal data center managers need to look to faster network technologies to meet growing bandwidth demands. 10 Gigabit Ethernet offers ten times the performance of Gigabit Ethernet, allowing you to reach longer distances and support even more bandwidth-hungry applications, making it the natural choice for expanding, extending, and upgrading existing Ethernet networks. 10 Gigabit Ethernet helps you get the best out of your virtualized environment.
The post 10Gb Ethernet: Taking Virtualization to the Next Level appeared first on Netspective.
The Obama Administration has taken a hard look at the Federal IT infrastructure and has come to the conclusion that there’s way too much flab that needs to be trimmed. Federal CIO Vivek Kundra has been put on the job, and he has his work cut out for him – reduce information technology costs, lower energy consumption, minimize IT real estate utilization, bolster information security, and expand the use of cloud computing. Kundra has laid out a roadmap that proposes to:
One major area of concern has been the proliferation of resource-hungry data centers and server farms, which currently number more than 1,100. The government has been spending billions on these behemoths annually, yet many of them remain underutilized. Data center consolidation is therefore high on Kundra’s priority list. It helps contain server sprawl and simplify data center management. It reduces the load on servers during peak hours, resulting in improved performance, availability, and scalability, as well as lower maintenance costs.
Never has there been more pressure on Federal IT to deliver higher levels of service or a greater degree of availability than today. Likewise, never has it been under more pressure to cut costs than it is now. Making sure your technology environment is efficient and effectively managed has become absolutely essential. Your data center, by its very nature, is where a substantial portion of your IT resources is concentrated, and that’s where you should start if you want to improve your computing environment and cut costs at the same time.
Data center consolidation means more than merely combining servers. Detailed below are a number of other factors that go into it, all of which translate into an opportunity for reduced cost and greater manageability.
By simplifying and consolidating your data center environment, you achieve several goals including the following:
Data center consolidation can best be achieved through server virtualization. It enables your physical servers to leverage unused capacity to support multiple workloads on simultaneously running virtual machines. It can thus help you significantly reduce the number of servers in your data center, which in turn will result in less hardware, less rack space, less cabling, less cooling, and less energy being used. This translates into lower capital costs and a substantial reduction in your ongoing maintenance expenses as well.
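The consolidation math behind server virtualization can be sketched in a few lines. The utilization figures below (10% average on standalone servers, a 65% ceiling on virtualization hosts) are illustrative assumptions, not measurements.

```python
# Rough consolidation estimate for server virtualization. Assumes
# comparable hardware across servers and hosts; the 10% and 65%
# utilization figures are illustrative assumptions.
import math

def hosts_needed(physical_servers, avg_utilization, target_utilization):
    """Estimate how many virtualization hosts can absorb the total workload."""
    total_load = physical_servers * avg_utilization   # in "server units"
    return math.ceil(total_load / target_utilization)

before = 100  # standalone physical servers today
after = hosts_needed(before, avg_utilization=0.10, target_utilization=0.65)

print(f"{before} servers -> {after} hosts "
      f"({before - after} fewer boxes to power, cool, and cable)")
```

Under these assumptions, a hundred lightly loaded servers collapse onto a handful of well-utilized hosts, which is where the hardware, rack space, cabling, cooling, and energy savings come from.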
But virtualization is not merely about reducing the physical footprint of the servers in your data center and the resultant cost savings. It’s more about dynamically launching applications, reducing latency and expediting disaster recovery. It’s about reducing the number of tiers on your network, aggregating traffic and adding multiple devices that work like one — all of which will contribute to simplifying your operations and ensuring your network’s performance and latency are at acceptable levels.
Server virtualization can bring you a host of benefits, some of which are enumerated below:
Federal agencies would do well to shift focus from merely maintaining their server farms and data centers, on which about 70% of a typical IT budget is spent today, and instead focus their attention on efficiency and innovation, and on improving the availability of their IT resources and applications through virtualization. They should look at building virtualized networks with the ability to scale across hundreds of interconnected physical computers and storage devices. This will not only result in enhanced performance, security, and availability across the board, but also deliver the lowest TCO over the long term and save billions of dollars of taxpayers’ money.
The post Virtualization: Your First Step to Data Center Consolidation appeared first on Netspective.
In March 2010, the exchange of patient medical information between the VA and DOD had to be shut down because errors kept popping up. In May 2008, heavy flooding forced the evacuation of 176 patients from Mercy Medical Center in Cedar Rapids, Iowa as flood waters and sewage seeped into its basement. They barely managed to save their medical records. In May 2007, a massive storm ripped through Greensburg, Kansas and razed the Kiowa County Memorial Hospital, along with 95% of the town. The patients and staff were rescued. It was initially thought that all 17,000 patient records had been lost, but after searching through the rubble, staff were fortunate to find that most of them were secure in a file cabinet. These were close calls, but such events keep happening, and luck is not always on your side.
As with every other segment that relies heavily on information technology, ensuring the availability of data is of paramount importance in the healthcare sector, more so because people’s lives depend on it. In fact, DR ranked at the top of healthcare providers’ IT shopping lists according to a recent survey on spending priorities in the global healthcare industry, with 44 percent considering it their top IT investment priority. Moreover, Federal mandates such as the Health Insurance Portability and Accountability Act (HIPAA) and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) regulations also require healthcare providers to have data backup, DR, and emergency mode operation plans in place.
With hospitals, insurance companies, laboratories, physicians’ offices, clinics, and imaging centers continually accessing the system to add or retrieve data, ensuring information availability is a critical need. Even an hour of downtime can have severe repercussions and could negatively impact patient care, apart from causing a lot of other collateral damage. Two key metrics on which you should base your Disaster Recovery plans are the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). These metrics define the allowable downtime and the allowable data loss per application. The smaller the RTO and RPO windows, the more complex and expensive the recovery plans become. Given below are the basic steps to be followed while building your DR plan:
Listed below are some of the data backup and recovery options currently available. Choose one that is appropriate for your organization.
A robust DR plan is like a health insurance plan: it ensures that you will come out fine in the end, should calamity strike. Bad choices can fritter away scarce dollars, risk regulatory compliance, and fail to deliver access to your backups when they are most needed. Simply put, not having a good DR plan in place is courting disaster.
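The RTO/RPO trade-off described above lends itself to a simple per-application sanity check: the worst-case data loss equals the gap between backups, and recovery must finish within the allowable downtime. The application tiers and targets below are hypothetical examples, not recommendations.

```python
# Minimal RPO/RTO sanity check for a DR plan. Application names,
# backup intervals, and targets are hypothetical illustrations.
from datetime import timedelta

def meets_rpo(backup_interval, rpo):
    """Worst-case data loss equals the gap between successive backups."""
    return backup_interval <= rpo

def meets_rto(estimated_restore_time, rto):
    """Recovery must complete within the allowable downtime."""
    return estimated_restore_time <= rto

# Hypothetical tiers: (backup interval, RPO, estimated restore time, RTO)
apps = {
    "ehr":     (timedelta(minutes=15), timedelta(minutes=15),
                timedelta(hours=1),    timedelta(hours=2)),
    "imaging": (timedelta(hours=24),   timedelta(hours=4),
                timedelta(hours=8),    timedelta(hours=12)),
}

for name, (interval, rpo, restore, rto) in apps.items():
    ok = meets_rpo(interval, rpo) and meets_rto(restore, rto)
    print(f"{name}: {'OK' if ok else 'plan violates its RPO/RTO targets'}")
```

In this sketch the imaging tier fails its check: a 24-hour backup interval cannot satisfy a 4-hour RPO, so either the backup frequency or the stated objective has to change.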
The post How to plan for disaster recovery in healthcare and other data critical environments appeared first on Netspective.
Recent times have seen frequent data center outages that have seriously affected the day-to-day lives of ordinary citizens. The latest to hog the headlines was the one at the Virginia Information Technologies Agency’s (VITA) Storage Area Network (SAN), located at a large suburban Richmond computing center. The problem began on August 25, 2010 with the crash of a pair of three-year-old EMC DMX-3 memory cards. Technical glitches that grew from there soon mushroomed into a weeklong ordeal that affected 485 of the 4,800-odd servers at the center. The outage paralyzed some of the state’s core agencies and left Gov. Bob McDonnell fuming, prompting him to institute an independent inquiry into the incident. Northrop Grumman, which had won the $2.4 billion, 10-year contract to manage VITA’s data centers, had a very irate administration on its hands and plenty of explaining to do.
The VITA incident came close on the heels of another major SAN outage that hit the Kansas Department of Health and Environment crippling the functioning of initiatives like the Kansas Immunization Program; the Kansas Women, Infants and Children program; Bureau of Surveillance and Epidemiology; Health Occupations Credentialing; and the Child Care Program. Though these are isolated incidents, they have sparked serious discussion on whether SAN is really the answer for a flexible, high performance and highly scalable storage environment.
SAN is essentially a networked pool of high-speed storage devices connected to servers through special SAN switches, allowing rapid data backup and restore, as well as movement and sharing of data between multiple servers. What makes troubleshooting SAN blackouts and brown-outs a daunting task is the increasing complexity of SAN hardware, as well as the multitude of devices and services that exist between the application servers and the storage equipment today.
Servers make up a major portion of the SAN component stack, and that’s where you should start looking when troubleshooting performance issues. Current SAN architectures use multiple, independent paths to access RAID-protected data. And with the multi-pathing software hosted on the servers alongside the volume manager, the operating system, the HBA drivers, and the HBA firmware, there is a lot that can go wrong with servers. Unless you have each of these components configured as specified by the storage vendor, trouble is just waiting to happen.
Documenting the SAN topology and defining performance baselines is an area you should focus your attention on. Capacity growth over time can bring added complexity, leading to unacceptable performance reductions. You should document these changes to the SAN environment prior to implementation, and the performance baselines must be suitably updated. You should back up and store the switch configuration after every change to the SAN environment, preferably using an automated script. This will enable you to roll back quickly if a change goes wrong.
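The automated configuration backup recommended above can be sketched with the standard library. The `fetch_running_config` function below is a hypothetical stand-in for whatever your switch vendor’s CLI or management API actually provides; in practice you would pull the configuration over SSH, SNMP, or that API.

```python
# Sketch of an automated switch-configuration backup with a roll-back
# candidate. fetch_running_config is a hypothetical placeholder for a
# vendor-specific retrieval step; everything else is stdlib.
import time
from pathlib import Path

BACKUP_DIR = Path("san_switch_configs")

def fetch_running_config(switch_name):
    # Placeholder: in practice, retrieve this over SSH/SNMP or the
    # vendor's management API.
    return f"! running-config for {switch_name}\n"

def backup_config(switch_name):
    """Save a timestamped copy of the switch config; return the file path."""
    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    path = BACKUP_DIR / f"{switch_name}-{stamp}.cfg"
    path.write_text(fetch_running_config(switch_name))
    return path

def latest_backup(switch_name):
    """Most recent saved config -- the roll-back candidate after a bad change."""
    candidates = sorted(BACKUP_DIR.glob(f"{switch_name}-*.cfg"))
    return candidates[-1] if candidates else None

saved = backup_config("core-fabric-a")
print(f"Backed up to {saved}; roll-back candidate: {latest_backup('core-fabric-a')}")
```

Running this from a scheduler after every approved change gives you a dated history of configurations and an immediate roll-back candidate when a change misbehaves.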
SAN performance monitoring is a strategic component that should form an integral part of your SAN implementation initiative. Switch and storage management tools provide summary and configuration information about the SAN, but they are blind to the multiple layers of traffic on the SAN links that can impact performance or cause an outage. Your engineers should be allowed non-disruptive access to the multiple layers of traffic on SAN links to conduct real-time analysis if they are to minimize outages and slowdowns. They should be equipped with proper tools and backed by supporting processes to track down critical SAN problems such as application I/O slowdowns, fabric bottlenecks, and device failures at any layer in the protocol stack.
Metrics-based measurements are objective and repeatable. They accurately reflect network and application performance, and point the way to remedial measures that you should take to rectify problems. It is also advisable to have a metrics-based automated response system in place. Such systems trigger alarms to warn you when performance deteriorates beyond acceptable threshold limits, and will automatically shut down the device that caused the incident, while simultaneously collecting all possible data on the instance to help in the troubleshooting process.
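The metrics-based automated response described above can be illustrated with a minimal sketch. The latency threshold and the "quarantine" action standing in for an automatic device shutdown are illustrative assumptions.

```python
# Sketch of a metrics-based automated response: alarm on a threshold
# breach, take the offending device offline, and capture the reading
# for troubleshooting. Threshold and actions are illustrative.

LATENCY_THRESHOLD_MS = 20  # assumed acceptable I/O latency ceiling

def evaluate(device, latency_ms, quarantined, alarms):
    """Record a breach and quarantine the device that caused it."""
    if latency_ms > LATENCY_THRESHOLD_MS:
        alarms.append((device, latency_ms))  # evidence for root-cause analysis
        quarantined.add(device)              # automatic shutdown of the device
    return quarantined, alarms

quarantined, alarms = set(), []
samples = [("hba-3", 8), ("port-12", 45), ("hba-3", 12)]
for device, latency in samples:
    evaluate(device, latency, quarantined, alarms)

print("quarantined:", quarantined)       # only port-12 breached the threshold
print("captured readings:", alarms)
```

A production system would feed this loop from live link telemetry and wire the quarantine step to the fabric management API, but the logic is the same: objective thresholds, automatic containment, and preserved evidence.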
Suggested below are some fundamental steps that you ought to take to prevent data outages and minimize their impact in the event of occurrence:
Even with these preventive and precautionary measures in place, SAN failures can still happen, stranding data that may be lost for good. The only option you have then is to fall back on Plan B, which is to rebuild the lost data from scratch. This is not an exciting prospect, I concede, but you don’t have much choice.
One very important aspect of successful project management is the creation of a Statement of Work (SOW). A Statement of Work can be defined as a narrative description of the products and services to be provided to a client under contract. Basically the SOW tells “what” needs to be accomplished rather than “how” it is to be accomplished, and clearly defines the scope of the project. Getting everyone to agree on the scope of a project at the very outset is important because it helps in minimizing scope creep. Scope creep occurs when new functionalities or requirements not envisaged in the SOW are introduced into the project plan. Uncontrolled scope creep can result in projects overshooting budgets and schedules. Having a clear understanding of the scope of a project will also provide clarity on the expected outcome of a project, and can help in avoiding misunderstandings, disputes and rework.
A statement of work should also clearly define the roles and responsibilities of various stakeholders involved in a project. The service provider and the client should take care to ensure that the SOW accurately reflects the specific tasks and obligations each party will have to fulfill in the course of project implementation.
From a technical point of view, a statement of work should define the action items that need to be completed and the deliverables that need to be produced as they relate to technology, equipment, and systems management. It should clearly specify what exactly needs to be done, what technologies will be needed to get it done, and what type of technical support needs to be made available. The topics being addressed will vary depending on the nature of a project. The SOW for an IT infrastructure project would for instance clearly specify the individual pieces of equipment and hardware required for the project. A software application development project would specify the technology to be used, the coding standards to be followed, the development methodology to be adopted, the type of validation to be carried out and so on.
Properly developing and managing a statement of work can be challenging, but it is essential for getting any project on the right track and keeping it there. It sets the standards for effective project management and ensures that the project meets the client’s established requirements and objectives. Not having a proper SOW can result in project failures and negative financial fallouts. Taking the time upfront to develop a detailed SOW and using that to manage a project throughout its lifecycle, will go a long way in averting project failures. A typical SOW will include the following:
Project Scope - The objective of the scope document is to ensure that the vendor and the client are on the same page as far as the understanding of the project and its outcomes is concerned. The scope document places boundaries around the project, identifies a high-level schedule, and broadly outlines the roles and responsibilities of the various stakeholders in the project throughout its life cycle.
Project Approach - This section of the document describes how the vendor plans to go about executing the project, the methodology they intend to adopt, and the engagement model they intend to follow while delivering on the project. It will lay down a road map that will lead to successful completion of the project.
Resource Allocation - This section will identify the resources who will be engaged on the project, and what their designations will be. It will include brief resumes of key personnel and an organizational chart showing the reporting structure. It will identify the point of contact for interaction with the client. It will also spell out what portions of an assignment will be done onsite, and what will be performed offsite.
Roles and Responsibilities – Clarity regarding roles and responsibilities is essential for the successful completion of projects. The SOW must provide that clarity, leaving no room for passing the buck. Key areas that need to be addressed are people, technology and processes.
Implementation Steps and Effort Estimation – This section will define the specifics of the work plan to a level of detail that will help the client understand how the process will work. It will include key milestones and estimated timeframes for achieving them. It will prioritize the tasks to be completed and evaluate the effort required for each task, based on which cost allocation can be determined. Proper sequencing of tasks will help reduce unforeseen costs.
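The effort-to-cost roll-up described above can be sketched as a simple table. The task names, hour estimates, and blended hourly rate below are hypothetical, chosen only to show how task-level estimates aggregate into a cost allocation.

```python
# Illustrative effort-to-cost roll-up for a SOW work plan. Task names,
# hour estimates, and the blended rate are hypothetical assumptions.

BLENDED_RATE = 120  # dollars per hour, assumed for illustration

tasks = [
    ("Requirements workshops",  80),
    ("Environment setup",       40),
    ("Development",            400),
    ("Validation & UAT",       120),
]

total_hours = sum(hours for _, hours in tasks)
total_cost = total_hours * BLENDED_RATE

for name, hours in tasks:
    print(f"{name:<24} {hours:>4} h  ${hours * BLENDED_RATE:,}")
print(f"{'Total':<24} {total_hours:>4} h  ${total_cost:,}")
```

Keeping the estimate at this task level is what lets the client see how the work plan drives the cost figure quoted later in the SOW, and makes it obvious which tasks to re-scope if the total exceeds the budget.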
Period of Performance - The period of performance is the term of the contract. It must be realistic. The performance period is usually somewhat longer than the estimated duration of the effort. One should ensure that the period of performance is compatible with clauses used in other parts of the contract agreement.
Deliverables - This section outlines specific outcomes and projected deadlines of a project. Sufficient details should be included in this section to provide a clear picture of the deliverables.
Acceptance Criteria – This defines the parameters that will determine whether or not a product or service is acceptable. Having this consensus upfront will ensure that all parties involved in the project understand and agree to the specifics of the project.
Costs - This section will unambiguously state the agreed price of the project, and can include penalties that may be imposed for failure to reach specific milestones. Clarity regarding this will prevent misunderstandings occurring later.
Billing Rates – This section is specifically for contracts awarded on a Time & Materials basis. The SOW should clearly specify the hourly rates for all categories of resources engaged on a project.
Payment and Invoicing - This part of the SOW defines the billing cycles and specifies the mode of payment, payee-related information, and the period within which payment should be made. If there are any specific formats to be followed while raising invoices, those should also find mention here. Any tax component should be shown separately in the invoices.
Assumptions – This section describes the assumptions based on which the service provider has submitted the SOW. Project assumptions provided by the service provider should be carefully examined to ensure they are acceptable.
Following these simple guidelines while accepting a statement of work will minimize the risk of failure and help clients get the expected results from their projects.