By Gunjan Siroya

There are several significant changes occurring in the healthcare industry, but few have as direct an impact on your earning potential as the transition from ICD-9 to ICD-10. While there have been many conversations about the need for the upcoming ICD-10 transition, few have analyzed the risks and the positive impacts on the business of healthcare.

If you have not started the ICD-10 transition, or are in the middle of it, you need to know how to prepare your organization for this big change and how to assess the impact in real business numbers.

Hello Health teamed up with Netspective to talk about ICD-10 and the revenue connection. Fred Pennic, a leader in ICD-10 transitions, a healthcare IT consultant, and a prolific writer on healthcare IT topics, led this webinar.

Their shared objective was to provide advice on how to make the transition from ICD-9 to ICD-10, to explain the revenue impacts of the change, and to show how the transition can help you manage business risk. The webinar reviewed important considerations for ICD-10 readiness to help your staff remain focused and avoid any revenue hiccups.

Netspective covered a lot of ground and provided numerous tips and tricks to plan or improve your strategy for the ICD-10 transition. The recommended approach is to focus on three things: people, process, and technology. We understand that change management is a big challenge in this transition, and we discuss how hard it is to rise above the daily grind, how to rally the organization around this change, how to build a measurable ICD-10 transition plan, and how to deploy this change across your organization's ecosystem.

Please see the webinar on the ICD-10 transition, which took place on Wednesday, April 17th, here.

If you have any specific questions you’d like us to answer, send them to Fred or Gunjan and we’ll be happy to respond.

 

The post ICD-10 Transition and its impact on revenues for healthcare providers appeared first on Netspective.

Netspective has been working on healthcare IT and medical device integration software for about a decade and has extensive experience with outsourced and offshore teams. On two recent, unrelated projects I had the opportunity to work with two different offshore teams located on the southern Indian peninsula. Both teams were well qualified in the required development software and were able to support the project needs successfully.

In general, both teams were composed of experienced leads and many hard-working individual contributors. The leads demonstrated good knowledge of software engineering principles and appeared to be experts in the technologies they were using. If the leads did not have an existing solution, or needed to include a new technology or concept, they would research it and come back with applicable solutions that met the requirements. The individual contributors were very hard working, picked up new technologies quickly, and adapted well to developing solutions for our clients. Given these qualities, it is no surprise that the offshore development teams never say “no” to a request made of them.

On this side of the ocean, our management team is always looking to increase our effectiveness for the benefit of our clients by leveraging the skills of our offshore engineering partners. Most of the time, the offshore development teams have surprised me with their ability to deliver quickly. Since we use an Agile development methodology (ref: Netspective Software Development Process Overview) on these projects, we have used the following techniques to improve the communication and delivery capabilities of our offshore partners:

  1. Holding requirements review meetings and asking the development team to document their understanding of what needs to be accomplished. We ask our development partners to provide a document that compiles the requirements to be met, the modules they need to develop, and the associated test scenarios. This allows the development team to assume ownership easily, as our senior architect helps improve their understanding by providing business scenarios to clarify the requirements. [ref: Module Design Document Template]
  2. Holding daily development progress meetings to resolve any issues that the team may be running into. Agile development requires that our architects and project managers attend these meetings regularly to clear any obstacles in the development path. After experimenting with late-night meetings we are now switching to early-morning meetings in the US. At critical project milestones we meet with the development team at both ends of their workday. We use either a paid service from http://www.gotomeeting.com or a combination of free tools like http://www.skype.com for voice calls and https://join.me/ for screen sharing during these meetings. There are pros and cons to both options, which are beyond the scope of this article.
  3. Capturing minutes of all meetings so that onshore and offshore teams can stay in sync with each other without losing any decisions made in such meetings. It is important that the offshore team also takes the responsibility of documenting the minutes since you want to make sure that they are getting the message correctly and understand the actions expected of them. [ref: Daily Meeting Minutes Template]
  4. Using a project management tool to record action items as tasks and assign tickets to appropriate team members for accountability. We use ActiveCollab (http://www.activecollab.com/) to manage our projects and for document collaboration, in addition to client billing details. Make sure every task is assigned to an individual for proper accountability; attaching a developer’s name to each assignment has improved delivery on most occasions. (A minimal sketch of this kind of accountability check appears after this list.)
  5. Review, review, and review the developed system and code frequently. Our mantra is seeing is believing.
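
To make item 4 concrete, here is a minimal, hypothetical sketch of the kind of accountability check we have in mind. It is not tied to ActiveCollab's actual API; the ticket fields and names are invented for illustration. The idea is simply that every open ticket should carry an individual owner, and anything unowned gets flagged before the daily meeting.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Ticket:
    """One action item recorded in the project management tool (fields are illustrative)."""
    ticket_id: int
    title: str
    assignee: Optional[str] = None  # an individual, never a whole team
    done: bool = False


def unassigned_tickets(tickets: list[Ticket]) -> list[Ticket]:
    """Return open tickets that lack an individual owner."""
    return [t for t in tickets if not t.done and t.assignee is None]


if __name__ == "__main__":
    backlog = [
        Ticket(101, "Document module requirements", assignee="Priya"),
        Ticket(102, "Write test scenarios for the billing module"),  # no owner yet
        Ticket(103, "Fix login redirect bug", assignee="Arun", done=True),
    ]
    for t in unassigned_tickets(backlog):
        print(f"Ticket {t.ticket_id} ({t.title!r}) has no individual owner -- assign it before the daily meeting.")
```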

 

I am interested in hearing your experiences, good or bad, working with offshore development teams, and the project outcomes you have seen.

The post 5 communications techniques to get the best out of your offshore development teams appeared first on Netspective.

The fiscal challenges confronting the healthcare industry around the world require shifting the delivery of care from expensive centralized settings to lower-cost settings while seeking to improve quality and patient experience. Organizations such as hospitals, Integrated Delivery Networks (IDNs) and newly created Accountable Care Organizations (ACOs) are trying to find the right mix of technology, facilities, clinical personnel, and information sharing to address these issues.

Telehealth and “connected care” experiments have shown that many types of expensive care that had been, in the past, reserved for office visits or hospital attendance can easily be done in the home or a lower cost setting. It seems every year at the American Telehealth Association (ATA) conference we see new technologies, creative solutions, and a strong desire to have Telehealth succeed.

Unfortunately, reality has not lived up to expectations. Until now.

So what have been the challenges faced by Telehealth? For starters, everything in healthcare revolves around reimbursement. Until recently, reimbursement structures for Telehealth services have been limited and spotty at best. The biggest payer in the country, the Government, is changing that. For example, on July 6 the Centers for Medicare & Medicaid Services (CMS) issued a proposed rule which, effective January 1, would for the first time cover the following additional services when provided using telemedicine (with special qualifications):

  • Alcohol and/or substance (other than tobacco) abuse structured assessment and brief intervention;
  • Alcohol misuse screening;
  • Behavioral counseling for alcohol misuse;
  • Depression screening;
  • Behavioral counseling to prevent sexually transmitted infections;
  • Behavioral therapy for cardiovascular disease; and
  • Behavioral counseling for obesity.

Given how common the services listed above are, and how many patients don’t receive them because office visits are harder to schedule or more costly to deliver in person, many care delivery organizations will be able to offer these services over telehealth solutions next year, increasing business while treating patients effectively.

With Medicare taking the lead and commercial payers taking notice, more changes are coming as our healthcare system shifts from a volume-based fee-for-service model to a value-based pay-for-performance model. This is evident in the shared-risk and transferred-risk models that are becoming prevalent with large consolidated healthcare systems like IDNs and ACOs nationwide. More recently, both Medicare and the Veterans Administration have come out with strong, robust guidelines for the use of Telehealth in everyday care, with Medicare also providing a thorough reimbursement structure.

So is Telehealth only for rural patients who cannot get to a major medical center? Not necessarily. Rural medicine will see the greatest benefit, but with easy-to-use, intelligent multimedia systems like all-in-one PCs or portable tablets like the iPad, the opportunity exists to look at your regular doctor visit in a whole new light. With built-in Bluetooth capabilities in most computers today, you can easily connect home-based medical devices that record and track your vital signs and medication regimen, and even conduct genetic or molecular testing from the comfort of your home. Telehealth can also be used to manage post-hospital-discharge care by providing timely interactions for any emergent situations that may arise.
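
As a purely illustrative sketch of what home-based monitoring might look like in software, the data model, metric names, and thresholds below are assumptions rather than any specific device's protocol; the point is only to show how a reading collected over Bluetooth could be flagged for a telehealth follow-up.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class VitalReading:
    """One measurement captured by a paired home device (fields are illustrative)."""
    patient_id: str
    metric: str          # e.g. "systolic_bp", "heart_rate", "glucose_mg_dl"
    value: float
    taken_at: datetime


# Hypothetical alert ranges; a real program would use clinician-set thresholds.
ALERT_RANGES = {
    "systolic_bp": (90, 140),
    "heart_rate": (50, 110),
    "glucose_mg_dl": (70, 180),
}


def needs_follow_up(reading: VitalReading) -> bool:
    """Flag a reading that falls outside its illustrative normal range."""
    low, high = ALERT_RANGES.get(reading.metric, (float("-inf"), float("inf")))
    return not (low <= reading.value <= high)


if __name__ == "__main__":
    reading = VitalReading("patient-001", "systolic_bp", 152.0, datetime.now())
    if needs_follow_up(reading):
        print(f"{reading.metric}={reading.value}: schedule a telehealth check-in.")
```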

Does this mean Telehealth is ready to be a runaway success? Not exactly. There are still issues around verification of service, the technical complexity of solutions that will be used by patients at home, and a general reconditioning of how we approach our healthcare interactions. For example, Accenture conducted a Connected Health Pulse Survey of 1,110 U.S. patients and found that 90% of patients want to self-manage their health online but that “85 percent of respondents preferred to see doctors in person when needed rather than relying on alternatives such as telehealth consultations.”

While 85% seems high, it means that 15% of patients are already fine with telehealth-only solutions. The good news is that innovation abounds, with creative start-up companies taking on many of the usability and interaction challenges voiced in the Accenture survey. Enhancements in technology, and patients growing accustomed to remote care, will mean more patients accept telemedicine over time, especially once they realize the immediacy of the interactions.

How Telehealth and Telemedicine should become part of your revenue streams

Having launched many healthcare technology solutions, I have found that the probability of success for major patient-focused transformations at care delivery organizations is usually quite low unless the organization buys into the future of healthcare.

So if you are one of these institutions that is being asked to take on more risks and provide more care with fewer dollars, Telehealth is an option you should evaluate, keeping in mind the following considerations:

1. Technology Partner – the success of any efficient clinical delivery system is based on the information management infrastructure you deploy. This ranges from servers and client computers to the software solutions you integrate into your clinical workflow. Make sure you pick a partner that has shown a consistent commitment across all of its business units to supporting new delivery concepts like Telehealth and mobile healthcare. Every technology company wants to sell into healthcare; it’s a big industry where vendors see serious dollars. Pick a vendor that has shown vision, works with startups to bring new technologies to market, and understands where healthcare is going from a real-dollars and business-model perspective.

2. It takes two to tango – remember, for a Telehealth solution to succeed, two parties have to work together toward mutual benefit: your patient and you. With the widespread acceptance of smartphones, tablets, and Cloud-based solutions, your patients’ expectations of your Telehealth solution will be fairly high. So again, look for a technology partner that has demonstrated an understanding of how people like to work and interact from home. This way your hospital- and/or clinic-based solutions will be deployed in a manner that is seamless for your patients to use and provides a high degree of customer satisfaction.

3. Minimize change to workflow – your clinical staff has been trained to take care of patients in the safest manner possible. Don’t upset the cart by imposing revolutionary changes. Technology for the sake of technology is not a good thing in healthcare. Smart technology that understands how your people work and provides better clinical care and services without overhauling how your staff works will provide the greatest ROI, as it will be adopted much more rapidly and efficiently. And remember: if your staff does not like the new solutions, they will have no qualms about telling patients how the new system has made their work environment worse.

4. Educate clinicians on the value of preventive care and wellness – as you take on more fiscal responsibility for your patients, promote preventive care and wellness activities. Preventive medicine can significantly lower the cost of chronic disease management and, when combined with wellness care like obesity management services, can improve patient satisfaction and add revenue streams. Telehealth is the most efficient way to provide these services.

Thanks to technology advances coupled with reimbursement changes, Telehealth has become a viable option for hospitals looking to extend their business model into the patient’s home while decreasing the cost of delivering care. Just keep in mind the common-sense considerations about the impact on your clinical staff, and choose a partner who has shown a commitment to the future of healthcare.

 

The post Telehealth means better care for patients and a new business opportunity for care delivery organizations appeared first on Netspective.

Federal agencies are facing critical needs for information technology upgrades and enhancements. Not only are many of today’s government systems antiquated, they are also expensive to maintain and manage. The core systems have undergone so many changes over the years that the source code has become virtually impenetrable. Add to this slow application response times, clumsy data handling, problems with connectivity and integration, lack of flexibility to add new services and functionality, lack of web capabilities, growing license fees and maintenance costs, and the dwindling number of resources capable of supporting these systems, and you have the perfect recipe for impending disaster. There is a general recognition that IT infrastructure modernization is necessary for meeting today’s expanded federal government needs.

A modernized IT infrastructure that is architected appropriately would be much easier for Federal agencies to maintain and less costly to secure. Since a modernized IT infrastructure would consist of components that cost less, last longer and require less labor to operate and maintain, the total cost of ownership would also be considerably lower. In addition, modernization would improve the interoperability of government IT and provide unified real-time access to information, as well as visibility across agencies to data residing on disparate systems. This will create a collaborative environment and contribute to faster and better decision making.

What Modernization Entails

Modernization often entails migration from legacy systems and determining ways to achieve greater collaboration and interagency sharing, dealing more effectively with unstructured data, and consolidating silos of information. Modernization will involve migration of large volumes of data and complex business rules to new systems. An effective migration strategy needs to be put in place for identifying master and transaction data and moving them from existing systems to new enterprise systems or custom applications. To maintain a technological edge, Federal agencies must adopt an enterprise-wide service oriented architecture that is interoperable with systems in other Federal departments and can share information with non-traditional partners. Successful enterprise-wide solutions generally drive down the total cost of ownership while offering a single source for real-time online data that is available when needed.
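
As a rough sketch of the master-versus-transaction split described above, a single migration pass might classify legacy records before routing them to the appropriate loader in the new enterprise system. The entity types and record shapes here are invented for illustration, not drawn from any particular agency's data.

```python
# A minimal sketch of one migration pass: classify legacy records as master data
# (reference entities) or transaction data (events), then hand each group to its
# own loader. The entity types and record shapes below are assumptions.

MASTER_ENTITIES = {"citizen", "agency", "facility"}          # reference data
TRANSACTION_ENTITIES = {"claim", "payment", "case_update"}   # event data


def classify(record: dict) -> str:
    """Decide which migration path a legacy record should take."""
    entity = record.get("entity_type", "")
    if entity in MASTER_ENTITIES:
        return "master"
    if entity in TRANSACTION_ENTITIES:
        return "transaction"
    return "review"  # anything unrecognized goes to manual review


def plan_migration(legacy_records: list[dict]) -> dict[str, list[dict]]:
    """Group legacy records by how they should be migrated."""
    plan: dict[str, list[dict]] = {"master": [], "transaction": [], "review": []}
    for record in legacy_records:
        plan[classify(record)].append(record)
    return plan


if __name__ == "__main__":
    sample = [
        {"entity_type": "citizen", "id": 1},
        {"entity_type": "claim", "id": 2},
        {"entity_type": "legacy_blob", "id": 3},
    ]
    for bucket, records in plan_migration(sample).items():
        print(bucket, [r["id"] for r in records])
```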

Modernization: Available Options

Most legacy environments are expensive in terms of both hardware infrastructure and software license fees. The need to reduce this expense is a significant driver for many organizations to modernize their legacy systems. CIOs have multiple options for application modernization, including redevelopment of applications, divestiture, and outsourcing. Redevelopment of such complex applications to bring them on par with modern industry standards would be a monumental task in terms of the costs involved and the time it would take to complete development. While divestiture may meet key business needs in many cases, it often has limitations, and here again the cost can be prohibitive. Outsourcing may not be an option open to the Federal CIO, and even if it is, it can have serious disadvantages including loss of quality and scheduling control. There are various other available options, however, that can be examined as a means to modernizing existing technologies. These include Cloud Computing, Unified Communications, Service Oriented Architectures (SOA) and Virtualization, all of which can also contribute substantially to reducing overall costs.

Cloud Computing

By using Cloud services, government agencies can gain access to powerful technology resources faster and at lower cost. Government departments can save scarce resources for mission-critical programs rather than spending them on purchasing, configuring and maintaining redundant IT infrastructure. Federal departments can significantly reduce their IT costs and complexities, optimize workloads and improve service delivery by adopting Cloud Computing. It provides a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel or licensing new software. Incorporating cloud computing into the data center consolidation plan can minimize the government’s carbon footprint, reduce IT fragmentation, improve resource utilization, and conserve electrical power and fuel.

Government agencies can transition to the Cloud at their own pace. Security risks are often cited as the number one concern while transitioning to Cloud Computing. Modernization offers a step-by-step approach that enables government agencies to move non-core functions to the Cloud first, and once that has been successfully accomplished, move core functionalities as well. Transitioning to a private cloud is one option – providing the same web benefits from within the boundary of an agency’s own firewall. A private cloud enables agencies to leverage benefits like pay-as-you-go licensing and elasticity – from within their own data centers, at their own pace. Another option would be to move a single application into the Cloud environment. Moving a single application will demonstrate the ease with which applications can be transitioned to a different operating environment while maintaining full agency control over the data.

Unified Communications

New technologies like unified communications offer exciting opportunities for expanding human collaboration within organizations and hold tremendous potential for supporting business strategies that rely on increased customer self-service, enhanced employee productivity and streamlined processes. Unified communications gives government workers the flexibility to reach their colleagues and access the information they need anywhere, anytime. It enables faster, better-informed, collaborative decision making, which allows governments to improve the way they serve and protect citizens. Combining unified communications with Web 2.0 technologies such as mashups and blogs can enhance service delivery to citizens. When successfully deployed, unified communications helps organizations reach their goals and meet deadlines by enhancing communication and access to data. It increases efficiency and reduces the time taken to share information. Because these technologies are IP-based, existing infrastructure investments can be leveraged, new features can be added as needed, and under-utilized network capacity can be tapped. The best way to reap the benefits of unified communications without having to deal with the complexity of integrating and managing the different technologies involved is to leave the heavy lifting to a managed services provider.

Service Oriented Architecture

Service Oriented Architecture (SOA) is useful for all major agencies because it offers the flexibility for rapid deployment of new software applications with minimum relative cost impact. SOA Integration involves re-using existing legacy systems by wrapping them with SOA interfaces. SOA Integration provides Federal agencies with increased agility as legacy components can be used as part of a new SOA based architecture. It allows government departments to adapt new technologies while responding to changing user needs. SOA reduces system complexity and deployment risks through a shared development style, uniform standards and common interfaces. By adopting a service oriented approach agencies can achieve the following benefits:

  • Improved agility and responsiveness
  • Ability to support net-centric operations and secure information sharing
  • Ability to enhance and maintain management visibility and provide decision support
  •  Reduction in cost of application development as well as that of operating an enterprise
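
The "wrapping" idea described above is easiest to see in code. The sketch below, which uses only Python's standard library and an invented legacy routine, exposes that routine behind a small HTTP/JSON interface so that newer SOA-style consumers can call it without touching the legacy code itself; a production wrapper would of course add authentication, schemas, and error handling.

```python
# Illustrative only: expose a hypothetical legacy routine behind a minimal
# HTTP/JSON interface using just the standard library, so SOA-style consumers
# can call it without modifying the legacy code itself.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def legacy_case_lookup(case_id: str) -> dict:
    """Stand-in for an existing legacy routine we do not want to rewrite."""
    return {"case_id": case_id, "status": "OPEN"}


class LegacyServiceWrapper(BaseHTTPRequestHandler):
    def do_GET(self):
        # Assumed path shape: /cases/<case_id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "cases":
            body = json.dumps(legacy_case_lookup(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), LegacyServiceWrapper).serve_forever()
```
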
Virtualization

Virtualization abstracts software from hardware and enables greater flexibility in processing IT services on different resources, at different locations, and at lower hardware and maintenance costs. In addition, server virtualization can extend the use of existing data center space and existing power and cooling capacity while increasing operational efficiency. Using a standardized platform is another option for government agencies to cut costs and boost performance. Standardization allows service providers to deliver utility IT services to a number of clients, thus helping them achieve better economies of scale. This will enable them to provide the services at lower prices.

Benefits of Modernization

Government agencies can expect to realize a number of benefits through modernization of their IT environment, including the following:

  • Significant reduction in cost
  • Reduced dependency on legacy skill sets
  • Elimination of data silos resulting in greater flexibility
  • Extended ROI from existing systems
  • Better connectivity and integration
  • Improved application response times and data handling capabilities

Legacy systems are an organization’s biggest assets. The amount of data that these systems have accumulated over the years is invaluable and irreplaceable. Many Federal organizations depend on legacy systems for day-to-day operations. Though most of these systems have become obsolete and unwieldy, doing away with them altogether will be like throwing the baby out with the bath water. It is not an economically viable option. What is required is to leverage existing investments in IT applications so as to be able to address changing business requirements with agility. Legacy modernization using options like Cloud Computing, Unified Communications, Services Oriented Architectures and Virtualization is the answer.

 

The post Modernization of Administrative Process and IT Systems appeared first on Netspective.

Governments worldwide are beginning to insist that their IT departments adopt Open Standards because of the improved interoperability, organizational flexibility and responsiveness that such an initiative can deliver, and as a means of avoiding vendor lock-in. As technology becomes an increasingly integral part of other disciplines, this new-found preference for Open Standards is driving innovation in politics, healthcare, disaster management and countless other sectors.

In keeping with the general trend towards Open Standards, many government IT software procurement policies today specify that products and solutions should support and implement Open Standards before they can be considered. However, there are several challenges to be overcome if this is to be put into practice. The reality is that, sometimes, Open Standards may not be available or are not mature enough for a required technology. Also, in some cases, the usage of a de facto standard is so entrenched that it is not practical to ignore it.

By adopting Open Standards, Federal agencies can achieve the following:

  • Introduce interchangeable components into their IT environments
  • Increase portability and scalability of their applications
  • Lower total cost of ownership
  • Improve interoperability
  • Gain access to better software products through increased choice
  • Reduce costs for switching and transferring data to different programs
  • Gain the ability to safeguard data over a long period of time
  • Reduce potential for unfair contract terms
  • Reduce lock-in to one system or one vendor

Standards Development

Individual standards typically are developed in response to specific concerns and constituent issues expressed by both industry and government. U.S. industry competitiveness depends on standardization, particularly in sectors that are technology driven.

Standards seek to ensure that:

  • Systems can be harmonized within and between organizations and across borders;
  • Different parties or entities can produce technologies that work together in order to foster mass adoption of those technologies by the community and to promote competition;
  • New players can more easily enter existing markets and manufacture new technologies or products that work with existing technologies and products; and
  • Consumers and users can be instantly familiar and comfortable with new systems, products and emerging technologies.

To respond effectively to the challenges posed by globalization, the emergence of new economic powers, and public concerns such as climate change, and to stay abreast of evolving technologies, standards development organizations and the standards development process itself must be flexible and capable of adopting the most innovative and best-performing technologies available.

Interoperability

Open Standards enable diverse products to work together. This gives governments choice among a diversity of applications from a wide range of suppliers/vendors, and leads to innovative technological developments. In the IT industry, standards are particularly important because they allow interoperability of products, services, hardware and software from different parties. Since the specifications are known and open, it is always possible to get another party to implement the same solution adhering to the standards being followed. Interoperability allows for better coordination of government agency programs and initiatives to provide enhanced services to citizens and businesses.

If Open Standards are followed, applications are easier to port from one platform to another since the technical implementation follows known guidelines and rules, and the interfaces, both internally and externally, are known. In addition to this, the skills learned from one platform or application can be utilized with less need for re-training. It is also in the interests of national security that Open Standards are followed to guard against the possibility of over-reliance on foreign technologies/products.

An interoperability framework needs to be put in place. This can provide baseline standards, policies, guidelines, processes and measurements for governments to adopt. The framework will detail how interoperability will be achieved among agencies and across borders, allowing the exchange and management of data and functionality. Combined with baseline audits of interoperability, interoperability frameworks can help create a pathway to greater interoperability through open IT ecosystems.

Baseline audit, mapping, and selective benchmarking efforts that are guided by a clear vision and goals make later policymaking more focused, effective and user driven. These efforts, if initiated with the early involvement of relevant stakeholders, will help identify systems silos that inhibit interoperability, and define areas where Open Standards are likely to have the greatest impact. Mapping standards means identifying all standards in use within and across agencies. An early mapping effort enables agencies to focus on making legacy systems interoperate and minimizes any disagreement over definitions that may impede progress.
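
As one way to picture such a mapping exercise, the sketch below reduces a hypothetical inventory of agencies and the standards they use to a simple cross-reference, from which shared standards and single-agency silos fall out directly. The agency names and the inventory itself are invented for illustration.

```python
from collections import defaultdict

# Hypothetical inventory: agency -> standards in use (entries are illustrative).
inventory = {
    "Agency A": {"HL7 v2", "PDF/A", "SAML 2.0"},
    "Agency B": {"HL7 v2", "ODF", "SAML 2.0"},
    "Agency C": {"ProprietaryFormatX", "SAML 2.0"},
}

# Invert the mapping: standard -> set of agencies using it.
usage = defaultdict(set)
for agency, standards in inventory.items():
    for standard in standards:
        usage[standard].add(agency)

shared = sorted(s for s, agencies in usage.items() if len(agencies) > 1)
silos = sorted(s for s, agencies in usage.items() if len(agencies) == 1)

print("Shared standards (interoperability candidates):", shared)
print("Single-agency usage (potential silos):", silos)
```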

Service Oriented Architecture (SOA)

As with interoperability, Open Standards are the backbone of a service-based approach. A service orientation increases flexibility, modularity and choice. Open Standards keep criteria and decisions service-oriented and technology-neutral, and they enable managers to combine, mix and match, and replace components without the expense and expertise of custom-coding connections between service components. Service-oriented, Open Standards-based interchangeable components give government organizations choices at the component level. Changes such as replacing legacy systems can be made without degrading the functionality of other parts of the ecosystem. Services can be built with modular components on different systems using a service-oriented architecture.

Improved Flexibility

By following Open Standards, governments gain new efficiencies from increased competition, access and control. Greater competition among suppliers, products and services helps governments maximize their return on investments and performance. Openness can also strengthen a buyer’s negotiating position since they have more options. This ability to choose not only lowers costs but also gives end users more latitude to set requirements and performance criteria.

The ability to see, use, implement and build from an Open Standard allows managers and users to exert more control while determining if and when they need to add functionality, swap components or fix bugs. By relying on Open Standards, managers can decide when to upgrade and who provides software support. They can replace suppliers or even implement upgrades in-house. Organizations can keep pace with changing technology, and become more efficient and effective in meeting citizen and taxpayer needs.

Other Benefits

  • Open Standards offer a balance of private and public interests that can protect IP with fairness, ensure clarity in disclosure policies, and promote reasonable and nondiscriminatory licensing.
  • Using Open Standards will also offer better protection of the data files created by an application against obsolescence of the application.
  • Open Standards make it easier and, in some cases, possible at all for local companies to participate as major players in supplying services and solutions to the government. The government can leverage Open Standards to mix and match solutions from different suppliers in order to give local suppliers a chance.
  • Governments also benefit from the greater transparency that Open Standards bring to the IT ecosystem. This transparency enables organizations to determine the best balance between aspects such as protection, control, risk and cost. Open Standards allow government agencies to build on existing protocols and procedures, and to innovate on top of them.
  • As needs change or services expand, Open Standards can enable the evolution of a business case by allowing the future addition of components and functionality.

The Need for Building Awareness

Having a knowledgeable citizenry is necessary if governments are to sustain the advantages of open technologies, innovate and spur a society’s social and economic development. Education, R&D and training merit attention and resources in order to strengthen a nation’s knowledge base and its ability to share in innovation.

Governments must find ways to support and extend the work of collaborative communities, and where possible, formalize their role in a consultative process. User feedback, which often highlights smaller issues, may help identify new areas of growth for standards, evolve service-oriented approaches, test new designs or produce other innovations that enhance IT ecosystems. Collaborative development processes can also broadly impact openness in government and an economy, driving efficiencies, growth and innovation, as well as contribute to a society’s sustainability.

Open Standards are important to promote the wider adoption of standards and the corresponding development of interoperable and innovative technologies. There is often a degree of openness in the processes followed in the development of standards. However, it is the openness of the legal interests in standards – namely, users’ rights to access, use and share the technology embodied by a standard and its documented specifications – that is of fundamental importance in promoting interoperability and innovation.

In moving towards Open Standards it is necessary that the legal rights and restrictions that apply to standards and standard specifications are properly managed. In particular, it is crucial that copyright and patent interests are clearly disclosed to all developers and users of standards from an early stage and that the terms upon which these interests are licensed are made clear.

The post How Adoption of Open Standards Can Benefit Federal Agencies appeared first on Netspective.

With its huge potential for saving money and improving operational efficiency, virtualization has come as a boon to cash-strapped Federal CTOs. Not only can it substantially reduce the cost of running data centers and corporate networks through more efficient use of both hardware and software, it can also significantly reduce the number of physical devices on the network, thus considerably lowering the complexity of managing the network infrastructure. Whether it’s greater performance that you seek, or reliability, availability, scalability, consolidation, agility or a unified management domain, virtualization is your best bet for supporting public sector IT modernization goals.

Virtualization will, however, result in more traffic in the consolidated area. Rather than merely adding more virtual servers, what is required is to create a converged infrastructure that allows all resources to be shared. This way storage, bandwidth, and applications can be reallocated based on current workload and organizational need, without mixing with or disrupting other partitioned resources in the system.

10 GbE: The New Thing in Networking

As virtualization efforts evolve, you run into networking challenges, and that is when you should begin laying the groundwork for adopting 10 Gigabit Ethernet in the data center. The shift toward 10 Gigabit Ethernet means more than moving to higher bandwidth. It means re-examining your entire network architecture. Higher throughput from 10 Gigabit Ethernet switches allows you to connect server racks and top-of-rack switches directly to the core network, obviating the need for an aggregation layer. Now that high-speed network technologies like 10 Gigabit Ethernet have become widely available, several new solutions have been developed to consolidate network and storage I/O into smaller numbers of higher-bandwidth connections.

How 10GbE Works

10 GbE technology is used primarily to interconnect switches and routers. By separating data and routing information, it allows you to control your own IP address routing and make the changes you want. You do not have to share your routing scheme with your service providers. And you can support both IP and non-IP based protocols. The latest 10 Gigabit Ethernet rack-top switches now support the same 48-port density as 1 Gigabit Ethernet switches, which means you do not lose valuable rack space when upgrading to 10 Gigabit Ethernet.

The primary advantage of 10 GbE technology is the sharp reduction in the number of adapters and ports required for a server. In addition to the usage of fewer physical devices, you also save valuable floor space, as well as power and cooling resources.

The Benefits of 10 Gigabit Ethernet

With applications becoming increasingly bandwidth-intensive, faster networking solutions are required to improve network connectivity while maintaining high reliability levels. 10 GbE technology provides the perfect solution to meet this requirement. In addition to increasing network speed, it also offers the following:

  •  Potentially lowest total cost-of-ownership in terms of expenditure on infrastructure, maintenance costs and the involvement of human capital
  • Quick and easy migration to higher performance levels
  • Proven plug and play integration capability using your existing infrastructure
  • Familiar network management feature set that involves hardly any learning curve

Upgrading to 10 Gigabit Ethernet is a great way for Federal agencies to get better results from their server virtualization and infrastructure consolidation initiatives. It provides a significant increase in bandwidth while ensuring full compatibility with existing interfaces, thus protecting your investment on cabling, equipment, processes, and Ethernet based training. It retains your existing Ethernet architecture, including the Media Access Control (MAC) protocol and the Ethernet frame format and frame size.

But the biggest benefits of 10GbE come from having fewer servers and less storage gear plugged in. You can deploy just two 10 Gigabit Ethernet connections instead of four to eight 1 Gigabit Ethernet connections in each server, and still achieve full redundancy for availability with additional room for expansion. In terms of scalability, a 10 Gigabit data center enables terabits of aggregate traffic without adding more layers to the network. Additionally, it simplifies the network design by eliminating congestion points and reduces the need for complex QoS schemes.
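
The arithmetic behind that consolidation claim is straightforward. The sketch below compares the two illustrative per-server configurations mentioned above; the link counts come from the text, and everything else is a simplifying assumption.

```python
# Compare two illustrative per-server network builds: 8 x 1 GbE links versus
# 2 x 10 GbE links (link counts from the text; everything else is simplified).

def summarize(label: str, links: int, gbps_per_link: int) -> None:
    print(f"{label}: {links} links/cables, {links * gbps_per_link} Gb/s aggregate")

if __name__ == "__main__":
    summarize("Legacy 1 GbE build", 8, 1)
    summarize("10 GbE build      ", 2, 10)
    # Two 10 GbE links still leave redundancy (either link can carry the full
    # load) while cutting adapters, cables, and switch ports by a factor of four.
```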

Here’s a rundown of the benefits of a consolidated and virtualized 10 GbE data center:

  • Enables flexible, dynamic and scalable network infrastructure
  • Reduces overall physical connection count
  • Provides high availability and redundancy
  • Improves server utilization and application efficiency
  • Reduces power consumption
  • Offers significant cost benefits

With network traffic increasing inexorably by the day, Federal data center managers need to look to faster network technologies to meet increased bandwidth demands. 10 Gigabit Ethernet offers ten times the performance of Gigabit Ethernet, allowing you to reach longer distances and support even more bandwidth-hungry applications, making it the natural choice for expanding, extending, and upgrading existing Ethernet networks. 10 Gigabit Ethernet helps you get the best out of your virtualized environment.

The post 10Gb Ethernet: Taking Virtualization to the Next Level appeared first on Netspective.

The Obama Administration has taken a hard look at the Federal IT infrastructure and has come to the conclusion that there is far too much flab that needs to be trimmed. Federal CIO Vivek Kundra has been put on the job, and he has his work cut out for him: reduce information technology costs, lower energy consumption, minimize IT real estate utilization, bolster information security, and expand the use of cloud computing. Kundra has laid out a roadmap that proposes to:

  •  Promote the use of Green IT by reducing the overall energy and real estate footprint of government data centers;
  •  Reduce the cost of data center hardware, software and operations;
  • Increase the overall IT security posture of the government; and,
  • Shift IT investments to more efficient computing platforms and technologies.

One major area of concern has been the proliferation of resource-hungry data centers and server farms, which currently number more than 1,100. The government has been spending billions on these behemoths annually, yet many of them remain underutilized. Data center consolidation is therefore high on Kundra’s priority list. It helps contain server sprawl and simplify data center management. It reduces the load on servers during peak hours, resulting in improved performance, availability and scalability as well as lower maintenance costs.

Data Center Consolidation

Never has there been more pressure on Federal IT to deliver higher levels of service and greater availability than today. Likewise, never has it been under more pressure to cut costs than it is now. Making sure your technology environment is efficient and effectively managed has become absolutely essential. Your data center, by its very nature, is where a substantial portion of your IT resources are concentrated, and that’s where you should start if you want to improve your computing environment and cut costs at the same time.

Data center consolidation means more than merely combining servers. Detailed below are a number of other factors that go into it, all of which translate into an opportunity for reduced cost and greater manageability.

  •  The problem of multiple physical locations: By reducing the number of physical locations of your data centers and combining their operations, you can substantially reduce cost overheads. While multiple data center locations used to be a way to meet the need for disaster recovery and business continuity, there are better ways to address those concerns now.
  •  Server consolidation: This implies not only utilizing unused server capacity, but also analyzing the historical CPU and memory usage patterns of the applications running on the servers and consolidating workloads in an intelligent way (a simplified packing sketch follows this list).
  • Infrastructure management: This includes creating more efficient networks and better storage management. It also includes utilizing shared services to the extent possible.
  • Optimizing space and power utilization: This will result in a greener computing environment as well as in substantial cost savings.
  • Managing people and processes: This will perhaps be your biggest challenge, because while it is a fairly straightforward matter to switch servers, bringing your personnel up to speed on those changes can be more taxing.
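
To make the "intelligent consolidation" point above concrete, here is a deliberately simplified first-fit-decreasing sketch that packs workloads onto as few hosts as their observed peak CPU and memory allow. Host capacities and workload figures are invented; real consolidation planning would also weigh affinity rules, failure domains, and headroom policies.

```python
from dataclasses import dataclass, field


@dataclass
class Workload:
    name: str
    peak_cpu: float      # observed peak, in vCPUs
    peak_mem_gb: float   # observed peak, in GB


@dataclass
class Host:
    cpu_capacity: float
    mem_capacity_gb: float
    placed: list = field(default_factory=list)

    def fits(self, w: Workload) -> bool:
        used_cpu = sum(x.peak_cpu for x in self.placed)
        used_mem = sum(x.peak_mem_gb for x in self.placed)
        return (used_cpu + w.peak_cpu <= self.cpu_capacity
                and used_mem + w.peak_mem_gb <= self.mem_capacity_gb)


def consolidate(workloads, cpu=16.0, mem_gb=64.0):
    """First-fit-decreasing packing by peak CPU; returns the hosts used."""
    hosts: list[Host] = []
    for w in sorted(workloads, key=lambda x: x.peak_cpu, reverse=True):
        target = next((h for h in hosts if h.fits(w)), None)
        if target is None:
            target = Host(cpu, mem_gb)
            hosts.append(target)
        target.placed.append(w)
    return hosts


if __name__ == "__main__":
    demo = [Workload(f"app-{i}", c, m)
            for i, (c, m) in enumerate([(4, 12), (2, 8), (6, 20), (1, 4), (3, 16)])]
    for n, host in enumerate(consolidate(demo), start=1):
        print(f"host {n}: {[w.name for w in host.placed]}")
```
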
The Benefits of Data Center Consolidation

By simplifying and consolidating your data center environment, you achieve several goals including the following:

  • Better manageability
  • Reduced costs in the areas of human resources and infrastructure facilities
  • Improved service levels and higher availability
  • Minimization of impact from external factors

Server Virtualization: The Best Way to Consolidate Your Data Center

Data center consolidation can best be achieved through server virtualization. It enables your physical servers to leverage unused capacity to support multiple workloads on simultaneously running virtual machines. It can thus help you significantly reduce the number of servers in your data center, which in turn will result in less hardware, less rack space, less cabling, less cooling, and less energy being used. This translates into lower capital costs and a substantial reduction in your ongoing maintenance expenses as well.

But virtualization is not merely about reducing the physical footprint of the servers in your data center and the resultant cost savings. It’s more about dynamically launching applications, reducing latency and expediting disaster recovery. It’s about reducing the number of tiers on your network, aggregating traffic and adding multiple devices that work like one — all of which will contribute to simplifying your operations and ensuring your network’s performance and latency are at acceptable levels.

The Benefits of Virtualization

Server virtualization can bring you a host of benefits, some of which are enumerated below:

  •  Virtualization can help you maximize network asset utilization.
  • Fewer physical servers result in reduced capital and operational expenses.
  • Fewer servers also mean reduced energy requirements and a lower carbon footprint.
  •  Virtualization can substantially reduce your rack space requirements and capital expenditure on real estate.
  • By having each application deployed within its own virtual machine, you can prevent one application from impacting another when upgrades or changes are made.
  • Virtualization allows you to develop a standard virtual server build that can easily be duplicated, thus speeding up server deployment.
  • Virtualization allows you to deploy multiple operating systems on a single hardware platform.
  • Virtualization allows you to deliver computing resources on demand, and seamlessly move data to any location on the fly.

Federal agencies would do well to shift focus from merely maintaining their server farms and data centers on which about 70% of a typical IT budget is being spent today, and instead focus their attention on efficiency and innovation, and on improving the availability of their IT resources and applications through virtualization. They should look at building virtualized networks with the ability to scale across hundreds of interconnected physical computers and storage devices. This will not only result in enhanced performance, security and availability across the board, but also in the lowest TCO over the long term, and in saving billions of dollars of taxpayers’ money.

The post Virtualization: Your First Step to Data Center Consolidation appeared first on Netspective.

In March 2010, the VA-DOD exchange of patient medical information had to be shut down because errors kept popping up. In May 2008, heavy flooding forced the evacuation of 176 patients from Mercy Medical Center in Cedar Rapids, Iowa as flood waters and sewage seeped into its basement. They barely managed to save their medical records. In May 2007, a massive storm ripped through Greensburg, Kansas and razed the Kiowa County Memorial Hospital, along with 95% of the town. The patients and staff were rescued. It was initially thought that all 17,000 patient records had been lost; after searching through the rubble, the staff were fortunate to find that most of the records were secure in a file cabinet. These were close calls, but such events keep happening, and luck is not always on your side.

Like every other segment that relies heavily on information technology, the healthcare sector considers the availability of data to be of paramount importance, all the more so because people’s lives depend on it. In fact, DR ranked at the top of healthcare providers’ IT shopping lists according to a recent survey on spending priorities in the global healthcare industry, with 44 percent considering it their top IT investment priority. Moreover, Federal mandates such as the Health Insurance Portability and Accountability Act (HIPAA) and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) regulations also require healthcare providers to have data backup, DR and emergency mode operation plans in place.

Building a Disaster Recovery Plan:

With hospitals, insurance companies, laboratories, physicians’ offices, clinics and imaging centers continually accessing the system to add or recover data, ensuring information availability is a very critical need. Even an hour of downtime can have severe repercussions and could negatively impact patient care, apart from causing a lot of other collateral damage. Two key metrics on which you should base your Disaster Recovery plans are the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). These metrics define the allowable downtime and the allowable data loss per application. The smaller the RTO and RPO windows are, the more complex and expensive the recovery plans become. Given below are the basic steps to follow while building your DR plan:
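
One quick way to make RTO and RPO actionable is to check each application's backup interval and estimated restore time against its targets. The application names and figures in the sketch below are purely illustrative.

```python
# Check per-application backup settings against RTO/RPO targets.
# Application names, targets, and measurements below are illustrative.

apps = [
    # name, RPO target (h), RTO target (h), backup interval (h), est. restore (h)
    ("EHR",      1,  4,  0.5,  3),
    ("Imaging",  4,  8,  6.0,  5),    # backup interval exceeds RPO -> gap
    ("Billing", 24, 24, 24.0, 30),    # estimated restore exceeds RTO -> gap
]

for name, rpo, rto, interval, restore in apps:
    gaps = []
    if interval > rpo:
        gaps.append(f"backups every {interval}h exceed the {rpo}h RPO")
    if restore > rto:
        gaps.append(f"estimated {restore}h restore exceeds the {rto}h RTO")
    print(f"{name}: {'; '.join(gaps) if gaps else 'meets targets'}")
```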

  1. Secure commitment from top management for allocating adequate time and resources to develop an effective DR plan.
  2. Establish a planning committee that includes representatives from all functional areas of your organization to oversee the development and implementation of the plan.
  3. Perform a risk and business impact analysis taking into consideration all possible disaster scenarios, including those arising from natural, technical and human factors, and rate the probability of their occurrence on a scale of 1 to 5 (a simple scoring sketch follows this list).
  4. Prioritize critical functional areas that are required to continue operations in the event of disaster.
  5. Evaluate various backup options available and identify one that is practical and economically viable for your organization.
  6. Collate essential information such as software / hardware inventory, storage location inventory, backup position, notification checklist, and communication protocol including contact numbers. Using pre-formatted forms helps you do this faster.
  7. Document the plan incorporating step-by-step procedures to be followed and assign responsibilities to appropriate team members for key functional areas such as administrative actions, facilities control, logistics management, customer support, system backup, and restoration.
  8. Test the plan to evaluate the reliability of backup facilities and procedures and identify areas that need modification, if any.
  9. Secure approval of the plan from top management.
  10. Train all personnel involved in backup and recovery procedures. Modify the plan from time to time to take changed scenarios into account.
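
Step 3 above is often implemented as a simple probability-times-impact matrix. The scenarios and ratings below are placeholders meant only to show the calculation, not an actual assessment.

```python
# Rank disaster scenarios by risk score = probability x impact, each rated 1-5.
# The scenarios and ratings below are placeholders, not an actual assessment.

scenarios = [
    ("Regional flooding",      3, 5),
    ("SAN controller failure", 2, 4),
    ("Ransomware infection",   4, 5),
    ("Extended power outage",  3, 3),
]

for name, probability, impact in sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True):
    print(f"{name:24s} probability={probability} impact={impact} risk={probability * impact}")
```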

Currently available DR options:

Listed below are some of the data backup and recovery options currently available. Choose one that is appropriate for your organization.

  • Tape backups – can serve as a second backup source stored in offsite locations, but are risky, slow and unreliable.
  • Disk-to-Disk technology – more secure and convenient and can automatically back up from different locations. However, on the downside, disk is more expensive as a storage medium than tapes.
  • Vaulting – automatically backs up select files at scheduled intervals.
  • Mirroring / Replication – this disk-to-disk process creates a copy of the data on a second disk platform. When trouble strikes the original, data can be restored from the replicated version or possibly even accessed directly from there, restoring a crashed computer system within hours instead of days.
  • Storage Area Network (SAN) – usually found to be reliable, but troubleshooting can be a Herculean task, as proved as recently as last month by the massive outage at the Virginia Information Technologies Agency’s Storage Area Network in Richmond.
  • WAN Optimization – bandwidth friendly, this technology can move tons of data and meet strict recovery requirements.
  • De-duplication – often called ‘intelligent compression’ or ‘single-instance storage’, it is designed to minimize the use of storage space and looks for repeating patterns of data at the block and bit levels. After an initial backup, only changed blocks are written to disk during subsequent jobs, consuming significantly less storage space (see the hashing sketch after this list).
  • Continuous Data Protection (CDP) – continuously captures data modifications and stores changes independently of the primary data, enabling recovery points from any point in the past. Offers fine granularities of restorable objects to infinitely variable recovery points.
  • Snapshot – with snapshot technology, data recovery takes place as fast as the backup process. It creates incremental and differential images with almost zero latency, so there is no need to shut down system applications.
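
The de-duplication entry above is easy to demonstrate: split a stream into blocks, hash each block, and store only blocks whose hash has not been seen before. The sketch below uses fixed 4 KB blocks and SHA-256 purely as an illustration; real products typically use variable-length chunking and far more sophisticated indexes.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real products often use variable-length chunking


def dedup_store(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Split data into blocks, keep only unseen blocks, return the block recipe."""
    recipe = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:        # only new/changed blocks consume space
            store[digest] = block
        recipe.append(digest)
    return recipe


if __name__ == "__main__":
    store: dict[str, bytes] = {}
    original = b"A" * 8192 + b"B" * 4096
    modified = b"A" * 8192 + b"C" * 4096   # only the final block changed
    recipe1 = dedup_store(original, store)
    recipe2 = dedup_store(modified, store)
    print(f"blocks referenced: {len(recipe1) + len(recipe2)}, unique blocks stored: {len(store)}")
```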

Practical tips to improve data availability:

  • Store the data in an all-disk environment, with the newest or regularly accessed data stored in higher-end disk storage devices, and unused or less critical data stored and archived in lower cost disk storage devices.
  •  To ensure business continuity, replicate the data across different data centers so that if one data center goes down, the other data center kicks in with zero disruption.
  • Protect your network from infiltration by subscribing to a network security service. Choose from the numerous security seal services available online to perform standard and advanced audits of your system and keep it secure against hacking threats.
  • Ensure 24×7 monitoring and management as well as onsite / remote troubleshooting.
  • Conduct regular disaster recovery drills and periodically test your disaster recovery plan.
  • Establish downtime procedures and have carefully drawn out medical treatment protocols in place to switch to during automation failures.
  • Have subject matter experts review networking design, technologies, components and requirements. They can help support Information Availability goals during network outages, especially prolonged ones.

A robust DR plan is like a good healthcare plan: it helps ensure that you will come out fine in the end, should calamity strike. Bad choices can fritter away scarce dollars, put regulatory compliance at risk, and fail to deliver access to your backups when they are most needed. Simply put, not having a good DR plan in place is courting disaster.

The post How to plan for disaster recovery in healthcare and other data critical environments appeared first on Netspective.
