Memories of the Computer Anatomy

Hard drives are the memories of our computers. They store documents, data, voice recordings and even entire movies. Because hard drives are so spacious and efficient these days, we can start to believe that they offer permanent and secure storage for our data. Unfortunately, that is not the case.

For such hard-working devices, hard drives can be remarkably fragile. They store data on stacks of rotating metallic platters. Magnetic heads ‘float’ between the platters, moving information back and forth without making physical contact. The information that looks so real on a monitor is, in fact, a pattern of delicate magnetic traces on a metal platter.

Once people in an organization know how hard drives work, they understand how easy it is for data to be lost. As hard drives become smaller, so does their ‘tolerance’, the distance between the platter and the heads that read and write data. Bumping into a computer while the hard drive is running can make the head actually touch the platter and literally ‘rub out’ the data there. Contamination, like dust or moisture, or a slight change in power can also cause damaging head contact.

That’s why it is absolutely vital to switch off the hard drive at the first sign of any unusual noise, like grinding, scraping or chattering. If nothing’s wrong, nothing has been lost. But if there is physical damage taking place inside the drive, prompt action can keep it to an absolute minimum and more data will be available for recovery.


The New Media for Small and Medium-Sized Storage Solutions

When small to medium-sized users need more storage capacity and faster backups than 8mm or DDS formats can deliver, there are two new formats to choose from.

Digital Linear Tape (DLT) systems have been available since 1985, but recent increases in both speed and capacity have given the technology a new lease on life. In fact, for small to medium-sized systems it has been the leading technology for the last several years. DDS or DAT tapes were the only competitors for DLT in that market, but their tape heads had a tendency to ‘drift’, which meant technicians had to monitor them to make sure data was actually being stored. DLT reliability is based on a ‘straight up and down’ recording mode.

Earlier this year, the introduction of Super DLT brought a tremendous boost in performance. Super DLT can store as much as 110 gigabytes on one cartridge, at a speed of 10 megabytes per second. With the speed of backup doubled, and capacity more than doubled, the technology can now reach ‘up’ to systems and networks that DLT previously couldn’t handle.
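
As a rough, back-of-the-envelope illustration of what those figures mean in practice (using the quoted 110 gigabytes and 10 megabytes per second, and ignoring compression and tape-handling overhead), filling one cartridge works out to roughly three hours:

    # Rough backup-time estimate for one Super DLT cartridge,
    # based on the capacity and speed quoted above (no compression assumed).
    capacity_gb = 110            # gigabytes per cartridge
    speed_mb_per_s = 10          # megabytes per second

    seconds = capacity_gb * 1000 / speed_mb_per_s   # 11,000 seconds
    hours = seconds / 3600
    print(f"Filling one cartridge takes about {hours:.1f} hours")   # ~3.1 hours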

Competing technologies can offer very fast backups, but the tapes themselves hold very little data – hundreds of megabytes as opposed to the hundreds of gigabytes that DLT offers.

Another technology has recently emerged that is comparable to DLT. That is LTO or Linear Tape Open, a consortium product from Seagate, IBM and Hewlett-Packard. LTO can put 100 gigabytes on a cartridge at up to 15 megabytes per second.

For cautious system administrators who don’t wish to try LTO, one technician said DLT is a more than acceptable choice: “Thirty million cartridges and a million tape drives can’t be wrong.”

Of course, Super DLT incorporates a good deal of new technology as well, so even though LTO is completely new technology, it “has a nice road map in front of it.” Super DLT uses a new recording format, but it maintains a limited form of backwards compatibility with previous generations of DLT: it can read older tapes, although it cannot write to them. That makes it most useful for organizations that want to keep their present archives in a usable form. Where users have thousands of tapes in their libraries, there can be a considerable saving in time and money if older tapes don’t have to be re-recorded onto newer ones. For users who are moving from 8mm or DDS systems and are committed to re-recording all their data, there may be little to choose between LTO and Super DLT systems.

Today’s demands for storage capacity are increasing, and if anything there is going to be more pressure on our ability to back up, store, protect and retrieve data. Small to medium-sized users now have a choice: Super DLT, based on generations of iterative development and refinement, or LTO, new technology from a high-powered and stable group of technology companies.



Virus Protection Key to Healthy Computing

Computer viruses are proving to be highly complex, but preventing them from infecting your computer systems is simple. Use two well-known brands of anti-virus software and keep them as current as possible.

Beyond that, there are some simple, common-sense procedures that everyone should use, whether at work or at home. Never open a file whose origins are unknown. In a simpler day, that wisdom applied only to executable files, files that actually did something: they carry the suffixes .exe, .com and .bat, and each can start a program on your computer. Early viruses spread through games downloaded from the Internet, on borrowed diskettes and through the old ‘bulletin board’ services.

Today, unfortunately, a whole new wave of viruses has been unleashed on unsuspecting computer users because software manufacturers introduced feature-rich new programs without considering how vulnerable they are to viruses. Now, almost any document and many email messages can carry and spread ‘macro’ viruses at lightning speed. That’s why it is so important never to open messages or documents from unknown sources. Viruses can delete data, change file names or even damage the physical media where the data is stored.

How important is virus protection?
If your data is critical to your business operations, there is nothing more important. Even though about 75 per cent of all data loss incidents are caused by human error or system malfunctions, a virus attack can still cripple your data center. A combination of regular, verified backups and constantly updated virus protection is absolutely essential to protect your data – and your organization.


Top 10 Data Recovery Bloopers

1. People Are the Problem, Not Technology
Disk drives today are typically reliable – human beings aren’t. A recent study found that approximately 15 percent of all unplanned downtime occurs because of human error.

2. When Worlds Collide
The company’s high-level IT executives purchased a “Cadillac” system, without knowing much about it. System implementation was left to a young and inexperienced IT team. When the crisis came, neither group could talk to the other about the system.

3. An Almost Perfect Plan
The company purchased and configured a high-end, expensive, and full-featured library for the company’s system backups. Unfortunately, the backup library was placed right beside the primary system. When the primary system got fried, so too did the backup library.

4. When the Crisis Deepens, People Do Sillier Things
When the office of a civil engineering firm was devastated by floods, its owners sent 17 soaked disks from three RAID arrays to a data recovery lab in plastic bags. For some reason, someone had frozen the bags before shipping them. As the disks thawed, even more damage was done.

5. It’s the Simple Things That Matter
The client, a successful business organization, purchased a “killer” UNIX network system, and put 300+ workers in place to manage it. Backups were done daily. Unfortunately, no one thought to put in place a system to restore the data to.

6. Buy Cheap, Pay Dearly
The organization bought an IBM system – but not from IBM. Then the system manager decided to configure the system uniquely, rather than following set procedures. When things went wrong with the system, it was next to impossible to recreate the configuration.

7. Lights Are On, But No One’s Home
A region-wide ambulance monitoring system suffered a serious disk failure, only for staff to discover that its automated backup hadn’t run for fourteen months. A tape had jammed in the drive, but no one had noticed.

8. Hit Restore and All Will Be Well
After September’s WTC attacks, the company’s IT staff went across town to their backup system. They invoked Restore and proceeded to overwrite their backups with data from the destroyed main system. Of course, all previous backups were lost.

9. In a Crisis, People Do Silly Things
The prime server in a large urban hospital’s system crashed. When minor errors started occurring, system operators, instead of gathering data about the errors, tried anything and everything, including repeatedly invoking a controller function which erased the entire RAID array data.

10. The Truth, and Nothing But the Truth
After a data loss crisis, the company CEO and the IT staffer met with the data recovery team. No progress was made until the CEO was persuaded to leave the room. Then the IT staffer opened up, and solutions were developed.


Top 10 Data Recovery Companies

Secure Data Recovery (Recommended, USA)
Data recovery service provider with data recovery lab locations throughout the United States. Our experience in the data recovery industry is unmatched. We have been operating since 1997 and offer world-class service and support. Our team of data recovery professionals consists of experts in advanced data recovery solutions. Our network of data recovery specialists provides fast, friendly, accurate and reliable service.
http://www.securedatarecovery.com/

1. Kroll Ontrack®
Kroll Ontrack provides technology-driven services and software to help recover, search, analyze and produce data efficiently and cost-effectively. Commonly bridging the gap between technical and business professionals, Kroll Ontrack services a variety of customers in the legal, government, corporate and financial markets around the world.
http://www.ontrackdatarecovery.com/

2. R-Tools Technology Inc.
The leading provider of powerful data recovery, undelete, drive image, data security and PC privacy utilities for the Windows OS family.
http://www.r-tt.com/

3. DTIData
DTI DATA is the industry’s premier data recovery service and recovery software company for both physical and logical hard drive recovery.
http://www.dtidata.com/

4. SalvageData
SalvageData is the first and only US based ISO 9001:2000 certified data salvaging & recovery service lab in North America specializing in advanced data salvaging and recovery from all digital media storage types and formats.
http://salvagedata.com/

5. DriveSavers
DriveSavers is the worldwide leader in data recovery services and provides the fastest, most secure and reliable data recovery service available.
http://www.drivesavers.com/

6. Stellar Information Systems Limited
Stellar Information Systems Limited is an ISO 9001-2000 certified company specializing in data recovery and data protection services and solutions.
http://www.stellarinfo.com/

7. Data Clinic Ltd
Data Clinic Ltd provide you with a professional, cost-effective and prompt data recovery service from crashed hard disks and other computer-based media.
http://www.dataclinic.co.uk/

8. First Advantage Data Recovery Services (DRS)
With more than 25 years of involvement in hard drive data recovery, Data Recovery Services has led, and will continue to lead, the industry.
http://www.datarecovery.net/

9. CBL
CBL provide data recovery for failed hard drives in laptops, desktop computers, data servers, RAID arrays, tapes and all other data storage media.
http://www.cbltech.com/

10. Adroit Data Recovery Centre (ADRC) Pte Ltd
Adroit Data Recovery Centre (ADRC) Pte Ltd is a data recovery specialist established in 1998.
http://www.adrc.net



Why Does Data Loss Occur?

Physical damage

A wide variety of failures can cause physical damage to storage media. CD-ROMs can have their metallic substrate or dye layer scratched off; hard disks can suffer any of several mechanical failures, such as head crashes and failed motors; and tapes can simply break. Physical damage always causes at least some data loss, and in many cases the logical structures of the file system are damaged as well. This causes logical damage that must be dealt with before any files can be recovered.

Most physical damage cannot be repaired by end users. For example, opening a hard disk in a normal environment can allow dust to settle on the surface, causing further damage to the platters. Furthermore, end users generally do not have the hardware or technical expertise required to make these sorts of repairs; therefore, data recovery companies are consulted. These firms use Class 100 clean room facilities to protect the media while repairs are made, and tools such as magnetometers to manually read the bits off failed magnetic media. The extracted raw bits can be used to reconstruct a disk image, which can then be mounted to have its logical damage repaired. Once that is complete, the files can be extracted from the image.

Logical damage

Far more common than physical damage is logical damage to a file system. Logical damage is primarily caused by power outages that prevent file system structures from being completely written to the storage medium, but problems with hardware (especially RAID controllers) and drivers, as well as system crashes, can have the same effect. The result is that the file system is left in an inconsistent state. This can cause a variety of problems, such as strange behavior (e.g., infinitely recursing directories, drives reporting negative amounts of free space), system crashes, or an actual loss of data. Various programs exist to correct these inconsistencies, and most operating systems come with at least a rudimentary repair tool for their native file systems. Linux, for instance, comes with the fsck utility, and Microsoft Windows provides chkdsk. Third-party utilities are also available, and some can produce superior results by recovering data even when the disk cannot be recognized by the operating system’s repair utility.

Two main techniques are used by these repair programs. The first, consistency checking, involves scanning the logical structure of the disk and checking to make sure that it is consistent with its specification. For instance, in most file systems, a directory must have at least two entries: a dot (.) entry that points to itself, and a dot-dot (..) entry that points to its parent. A file system repair program can read each directory and make sure that these entries exist and point to the correct directories. If they do not, an error message can be printed and the problem corrected. Both chkdsk and fsck work in this fashion. This strategy suffers from a major problem, however; if the file system is sufficiently damaged, the consistency check can fail completely. In this case, the repair program may crash trying to deal with the mangled input, or it may not recognize the drive as having a valid file system at all.
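
As a minimal sketch of the consistency-checking idea (not how fsck or chkdsk are actually implemented), the Python fragment below walks a directory tree and confirms that each directory's '.' and '..' entries resolve to the directory itself and to its parent; the starting path is a placeholder:

    import os

    def check_dot_entries(root):
        """Sketch of a consistency check: confirm that '.' and '..' in each
        directory point to the directory itself and to its parent."""
        problems = []
        for dirpath, _, _ in os.walk(root):
            here = os.stat(dirpath)
            dot = os.stat(os.path.join(dirpath, "."))
            dotdot = os.stat(os.path.join(dirpath, ".."))
            parent = os.stat(os.path.dirname(dirpath) or dirpath)
            if (dot.st_ino, dot.st_dev) != (here.st_ino, here.st_dev):
                problems.append(f"'.' entry wrong in {dirpath}")
            if (dotdot.st_ino, dotdot.st_dev) != (parent.st_ino, parent.st_dev):
                problems.append(f"'..' entry wrong in {dirpath}")
        return problems

    # Example: report any inconsistencies under /mnt/recovered (placeholder path)
    for message in check_dot_entries("/mnt/recovered"):
        print(message)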

The second technique for file system repair is to assume very little about the state of the file system to be analyzed and to rebuild it from scratch, using whatever hints any undamaged file system structures might provide. This strategy involves scanning the entire drive, making note of all file system structures and possible file boundaries, and then trying to match what was found to the specification of a working file system. Some third-party programs use this technique, which is notably slower than consistency checking. It can, however, recover data even when the logical structures are almost completely destroyed. This technique generally does not repair the underlying file system, but merely allows data to be extracted from it to another storage device.
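
A minimal sketch of this rebuild-from-scratch approach, assuming a raw image file of the damaged disk and using JPEG signatures as the only example file type (real tools know many formats and cope with fragmentation), scans for known headers and carves out candidate files:

    # Sketch: scan a raw disk image for JPEG signatures and carve candidates.
    JPEG_START = b"\xff\xd8\xff"
    JPEG_END = b"\xff\xd9"

    def carve_jpegs(image_path, out_prefix="carved"):
        data = open(image_path, "rb").read()   # fine for small images; stream large ones
        count, pos = 0, 0
        while True:
            start = data.find(JPEG_START, pos)
            if start == -1:
                break
            end = data.find(JPEG_END, start)
            if end == -1:
                break
            with open(f"{out_prefix}_{count}.jpg", "wb") as out:
                out.write(data[start:end + 2])
            count += 1
            pos = end + 2
        return count

    print(carve_jpegs("disk.img"), "candidate files carved")   # 'disk.img' is a placeholder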

While most logical damage can be either repaired or worked around using these two techniques, data recovery software can never guarantee that no data loss will occur. For instance, in the FAT file system, when two files claim to share the same allocation unit (“cross-linked”), data loss for one of the files is essentially guaranteed.

The increased use of journaling file systems, such as NTFS 5.0, ext3, and xfs, is likely to reduce the incidence of logical damage. These file systems can always be “rolled back” to a consistent state, which means that the only data likely to be lost is what was in the drive’s cache at the time of the system failure. However, regular system maintenance should still include the use of a consistency checker, in case the file system software has an error that causes data corruption. Also, in certain situations even journaling file systems cannot guarantee consistency. For instance, if the physical disk delays writing data back or reorders the writes in ways invisible to the file system (some disks claim that changes have been flushed when they actually haven’t), a power loss may still cause such errors to occur (note that this is usually not a problem when the delay or reordering is done by the file system software’s own caching mechanisms). The solution is to use hardware that doesn’t report data as written until it actually is, or disk controllers equipped with a battery backup so that waiting data can be written when power is restored. Alternatively, the entire system can be equipped with a battery backup (UPS), which may make it possible to keep the system running in such situations, or at least give it time to shut down properly.
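
The flushing issue mentioned above has a counterpart at the application level, where a program that wants to be sure its own writes have reached the disk must ask for them to be flushed explicitly. A minimal sketch (this only guarantees that the operating system has handed the data to the drive; a drive that misreports its cache can still defeat it):

    import os

    def write_durably(path, data):
        """Write data and ask the OS to push it to stable storage before returning."""
        with open(path, "wb") as f:
            f.write(data)
            f.flush()                # move data from the program's buffer to the OS
            os.fsync(f.fileno())     # ask the OS to flush it through to the device

    write_durably("journal_entry.bin", b"example record")   # placeholder file name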

Of course, backing up your data is a good way to protect it.

But backup technology and practices do not always protect data adequately. Most computer users rely on backups and redundant storage technologies as their safety net in the event of data loss. For many users, these backups and storage strategies work as planned. Others, however, are not so lucky. Many people back up their data, only to find their backups useless in that crucial moment when they need to restore from them. These systems are designed for and rely upon a combination of technology and human intervention for success. For example, backup systems assume that the hardware is in working order. They assume that the user has the time and the technical expertise necessary to perform the backup properly. They also assume that the backup tape or CD-RW is in working order, and that the backup software is not corrupted. In reality, hardware can fail. Tapes and CD-RWs do not always work properly. Backup software can become corrupted. Users accidentally back up corrupted or incorrect information. Backups are not infallible and should not be relied upon absolutely.
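
One way to avoid discovering a bad backup at the worst possible moment is to verify every backup after it is made. A minimal sketch, assuming the backup is simply a directory copy (commercial backup software does this through its own catalogues and verification passes), compares checksums of the source and the backup; the paths are placeholders:

    import hashlib, os

    def file_hash(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_backup(source_dir, backup_dir):
        """Report files missing from the backup or differing from the source."""
        problems = []
        for dirpath, _, filenames in os.walk(source_dir):
            for name in filenames:
                src = os.path.join(dirpath, name)
                rel = os.path.relpath(src, source_dir)
                dst = os.path.join(backup_dir, rel)
                if not os.path.exists(dst):
                    problems.append(f"missing from backup: {rel}")
                elif file_hash(src) != file_hash(dst):
                    problems.append(f"contents differ: {rel}")
        return problems

    for problem in verify_backup("/data", "/mnt/backup/data"):
        print(problem)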


3D Data Recovery process

Data recovery firms are missing out on data they could retrieve with the complete 3D Data Recovery process. Proper data recovery involves three phases: drive restoration, disk imaging, and data retrieval. But data recovery professionals can face frustrating problems when imaging a damaged disk. The drive may repeatedly stop responding in the middle of copying data. The drive may fail completely because of the stress caused by intensive read processes. Significant portions of data may be left behind in bad sectors.

These issues plague firms that use traditional disk imaging methods. Read instability makes it difficult to obtain consistent data quickly, and system software is not equipped to read bad sectors. However, these problems can be solved with imaging tools that address disk-level issues.

Imaging software bypasses system software and ignores error correction code (ECC), processing each byte of data in bad sectors. Inconsistent data is evaluated statistically to determine the most likely correct value. Faster transfer methods speed up the process, and customizable algorithms allow the data recovery professional to fine-tune each pass. Imaging software provides feedback on the data recovered while imaging is still underway.
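
The statistical evaluation mentioned above can be pictured as several reads of the same unstable sector followed by a byte-wise vote. A minimal sketch, assuming the raw reads have already been captured (in practice this is done by specialized imaging hardware with firmware-level access, not by ordinary application code):

    from collections import Counter

    def vote_sector(reads):
        """Given several attempted reads of one sector (equal-length byte strings),
        return the most likely value of each byte by majority vote."""
        assert reads and len(set(map(len, reads))) == 1
        voted = bytearray()
        for column in zip(*reads):
            value, _ = Counter(column).most_common(1)[0]
            voted.append(value)
        return bytes(voted)

    # Example with three noisy reads of a four-byte sector:
    reads = [b"\x10\x20\x30\x40", b"\x10\x21\x30\x40", b"\x10\x20\x30\x41"]
    print(vote_sector(reads).hex())   # prints 10203040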

Imaging hardware can reset the drive when it stops responding, which minimizes damage from head-clicks and allows the process to run safely without supervision.

1. Drive Restoration: Damage to the hard disk drive (also referred to as HDD) is diagnosed and repaired as necessary. There are three main types of damage:

  • Physical/mechanical damage: Failed heads and other physical problems are often repaired by replacing the damaged hardware with a donor part.
  • Electronic problems: Failed printed circuit boards (PCBs) are replaced with donor PCBs, and the contents of the failed PCB’s read-only memory (ROM) are copied to the donor.
  • Firmware failure: Firmware failures are diagnosed and fixed at the drive level.

2. Disk Imaging: The contents of the repaired drive are read and copied to another disk. Disk imaging prevents further data loss caused by working with an unstable drive during the subsequent data retrieval phase. Drives presented for recovery often have relatively minor physical degradation due to wear from normal use. The wear is severe enough for the drive to stop working in its native system, but imaging software can still work with such slightly degraded drives, so part replacement is often not required. In these cases, the data recovery process can skip drive restoration and start with disk imaging.

3. Data Retrieval: The original files that were copied onto the image drive are retrieved. Data retrieval can involve these tasks:

  • File system recovery: The recreation of file system structures corrupted by the data loss, such as a damaged directory structure or boot sector.
  • File verification: Recovered files are tested for potential corruption.
  • File repair: If necessary, corrupted files are repaired. Files might be corrupt because data could not be fully restored in previous phases, in which case disk imaging is repeated to retrieve more sectors. File repair is completed, where possible, using vendor-specific tools.

Drive restoration and data retrieval, the first and last phases, are well served by the data recovery industry. Many data recovery companies have the necessary software, hardware, knowledge, and skilled labor to complete these phases. However, the technology for effective disk imaging has been relatively neglected because of its challenges, making it a weak link in the data recovery process. Data recovery firms that skim the surface with traditional imaging methods often miss out on potential revenue.
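
As a simplified picture of what fault-tolerant imaging involves (real imaging tools work below the operating system, control read timeouts and can power-cycle the drive), the sketch below copies a source file sector by sector, retries unstable sectors, and zero-fills unreadable ones so the pass can continue; the file names are placeholders:

    import os

    SECTOR = 512

    def image_drive(src_path, dst_path, retries=3):
        """Sketch: copy a source sector by sector, retrying unstable sectors and
        zero-filling unreadable ones so imaging can continue."""
        bad = []
        size = os.path.getsize(src_path)   # for a raw device, query its size another way
        with open(src_path, "rb", buffering=0) as src, open(dst_path, "wb") as dst:
            for offset in range(0, size, SECTOR):
                data = None
                for _ in range(retries):
                    try:
                        src.seek(offset)
                        data = src.read(SECTOR)
                        break
                    except OSError:
                        continue            # unstable sector: try again
                if not data:
                    bad.append(offset)
                    data = b"\x00" * SECTOR   # placeholder so later passes can return here
                dst.write(data)
        return bad

    bad_sectors = image_drive("failed_drive.img", "working_image.img")
    print(len(bad_sectors), "sectors could not be read on this pass")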

How to select a data recovery provider

Scan the web for data recovery providers, and you’ll find hundreds of companies promoting data recovery capabilities. Choosing the right provider can be a deciding factor in whether you will get your lost data back – and if so – how long you will have to wait. The information below will help you identify misleading sales tactics and select the provider that offers the highest level of professional service and overall best value.

Step 1: Identify companies that have the technology and resources to solve a wide array of data loss challenges.

  • How long has the provider been in the data recovery service business?
  • Does the provider have a clean-room laboratory to safely open, repair and recover data from media storage devices?
  • How many recovery labs with clean rooms does the provider operate? Does the provider have global coverage?
  • Does the provider have a sufficient number of engineers to handle large and complex recovery jobs and handle peak-demand seasons (e.g. hurricane season)?
  • Can the provider recover data from systems that are proprietary to their clients? Does the provider have the technology and resources to develop customized data recovery tools if required?
  • Does the provider have the expertise to recover data from virtually any type of platform, storage device, database or operating system?
  • Does the provider have the resources to perform emergency and/or on-site data recoveries?

Step 2: Identify companies that provide a range of data recovery solutions to fit your specific needs.

  • Does the provider have service and/or do-it-yourself software options to fit your budget?
  • Does the provider offer a fast enough level of recovery service to address the most urgent data loss situations?
  • Can the provider offer a secure remote data recovery service for data loss situations where no mechanical damage has occurred to the storage device?
  • What is the standard turnaround time for desktop and laptop data recoveries? For more complex systems?
  • Does the provider retain your recovered data for a period of time after your recovery is complete?

Step 3: Identify providers that will provide you with the information required to make an educated purchase decision.

  • Will the provider give you a file listing report showing the recoverability of your files before you commit to recovery fees?  Is this included in their evaluation service?
  • If the provider offers a file listing report, how long will it take them to deliver the report?
  • Will the provider commit to quoted price ranges in writing to ensure the services fit your budget?

Step 4: Identify companies that offer professional customer service whenever and wherever you need it.

  • Does the provider offer 24/7/365 customer service?
  • In which languages does the provider offer customer support?
  • Will the provider offer you free, no-obligation consultation and present you with a range of recovery options?
  • Will the provider allow you to speak one-to-one with a data recovery engineer to discuss your options?
  • Does the provider have a technical support team on staff to offer pre- and post-recovery support?
  • Does the provider have online customer portals to allow you to track the progress of your data recovery from start to finish?

Step 5: Identify companies that have well documented and established procedures for maintaining the security and confidentiality of your data.

  • Is the provider authorized by private and government entities to handle highly sensitive data?
  • Does the provider have the expertise to properly document chain of custody if the storage media is likely to be involved in an investigation or court case?
  • Does the provider have the ability to recover encrypted data?
  • Does the provider have the ability to return your data in an encrypted form?
  • Does the provider perform employee background checks for anyone that may come into contact with your data?
  • Does the provider participate in the U.S. GSA (General Services Administration) Program?
  • Do the provider’s facilities meet all U.S. Department of Defense specifications?

Step 6: Avoid the gimmicks! Select a data recovery provider you can trust by eliminating those who use questionable sales tactics.

  • If the provider offers “free evaluations”, what is included in the free service? Will you receive a file listing report showing which files can be recovered before you are required to approve additional charges?
  • If the provider offers “no data, no charge”, what will they charge you if they recover data, but not the data that you need?
  • If the provider quotes a price range for their recovery service, will they put it in writing?
  • Does the provider charge you for parts or are there other hidden fees?
  • Does the provider use outside or third parties to perform the data recovery service?

Reasons and Costs of Data Loss

Computer data may be one of your company’s most valuable and vulnerable assets. In our experience, the primary threats to your data include:

  • Hardware or System Problems
  • Human Error
  • Software Corruption or Program Problems
  • Computer Viruses
  • Natural Disasters

These five major threats to your computer data share two things in common: they are unpredictable and, in many cases, uncontrollable. Therefore, the precautions taken by IT professionals to safeguard company data cannot always prevent a data loss.

Computer users and many experts often consider lost data permanently destroyed, with no hope of recovery. Information about lost data can be complex, inconsistent or inaccurate, so it’s not surprising that data loss and data recovery are some of the most confusing and misunderstood concepts.

In addition to being a vulnerable asset, computer data is also a valuable asset.

It is easy to see how significant the costs of lost or inaccessible data can be: industry figures summarize the average hourly impact of lost data on a selection of different businesses.

[Table: Costs of Data Loss, showing the average hourly impact of lost data by type of business.]

When time is crucial and data is mission-critical, data recovery may be the most practical option available. Data recovery professionals recover data from the damaged media itself, providing several advantages over alternative methods of data retrieval.

1) Complete – Data recovery professionals can safely enter the system or media to achieve a comprehensive data recovery.

2) Current – Although many people revert to backups following a data loss, those backups typically contain outdated information or could be corrupt themselves. Data recovery can help you access the most recent version of the lost data.

3) Fast – Every second that passes following a data disaster means time and money lost to your company. Data recovery reduces this downtime by quickly recovering and returning your data.

4) Cost-effective – The expense in time, money, and effort of rebuilding or re-keying lost data can be overwhelming to your company. Professional data recovery can provide the quickest and most complete restoration possible.
