Glossary of Hard Disk Drive Terminology (Letter A)

Access
Retrieval of data from or transfer of data into a storage device or area such as RAM or a register.

Access Time
The amount of time, including seek time, latency and controller time, needed for a storage device to retrieve information.

Active Partition
The partition of the drive that contains the operating system. If the drive has multiple partitions, only the primary partition can be made active. A hard drive can have only one active partition.

Active Termination
One or more voltage regulators that produce termination voltage. The voltage regulator(s) drive a constant voltage along the bus to ensure that the data signal stays constant and strong over the entire length of the bus. The result is increased data integrity and reliability.

Actuator
A mechanical assembly that positions the read/write head over the appropriate track.

Actuator Arm
The part of the actuator assembly that includes the positioning arm and the read/write heads.

Adaptive Caching
A feature of Western Digital drives that enables them to improve performance and throughput by adapting to the application being run.

Address
In the hard drive industry, there are several types of addresses; an address may refer to that of a drive, called a unit address; radial position, called a cylinder address; or circumferential position, referred to as a sector address.

AFR
Annualized Failure Rate.

Allocation
The method DOS uses to assign a specific area of the hard drive to a given file. (See also cluster.)

American National Standards Institute (ANSI)
A private, non-profit organization responsible for approving US standards in many areas, including computers and communications. ANSI is a member of the International Organization for Standardization (ISO).

Arbitrated Loop
A Fibre Channel topology in which two or more ports can be interconnected, but only two ports can communicate at the same time.

Arbitration
The act of determining which command, device, or communication protocol controls the operating environment.

Areal Density
The number of bits of data that can be recorded onto the surface of a disk or platter, usually measured in bits per square inch. Areal density is calculated by multiplying the bit density (BPI – bits per inch) by the track density (TPI – tracks per inch).
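As a quick illustration of the formula, here is a sketch with hypothetical BPI and TPI figures (not taken from any specific drive):

```python
# Areal density = bit density (BPI) x track density (TPI).
# The figures below are illustrative assumptions only.
bpi = 500_000   # bits per inch along a track
tpi = 100_000   # tracks per inch radially

areal_density = bpi * tpi  # bits per square inch
print(f"{areal_density:,} bits per square inch")
print(f"{areal_density / 8 / 1e9:.2f} GB per square inch")
```

The same arithmetic scales directly: doubling either BPI or TPI doubles the areal density.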

ASP
Average selling price.

Asynchronous Transmission
A transmission mode in which each byte of information is synchronized individually through the use of request and acknowledge signals.

AT Bus Attachment (ATA-4)
The interface defined by IBM for the original AT disk controller. Western Digital designed the WD Caviar® drives to be fully ATA-4 compatible.

Auto Defect Retirement
If the drive finds defective sectors during reads or writes, they are automatically mapped out and relocated.

Auto Park
Turning off the drive power causes the drive to move the read/write heads to a safe non-data landing zone and lock them in place.

Average Access Time
The average length of time a drive takes to perform seeks, usually measured over one-third of a full stroke.

Average Seek Time
The average length of time it takes the drive to move the read/write heads from one randomly selected track to another.


The Hidden Costs of Increasing Data Storage

Large-scale IT environments have the resources to manage all aspects of a network expansion, including the initial analysis, equipment installation and wiring, and proper user access management. In smaller environments the planning may not go beyond the immediate reaction to the user’s needs—that is, “we’re out of space!” While the size of the environment may determine how storage needs are addressed and managed, such things as proper equipment cooling, storage management software that allows for scalable growth (SRM), disaster recovery (including backup contingencies), and data recovery concerns apply to IT environments of every size.

In one scenario, picture a small business with five desktop machines. Despite following careful data compression procedures and rigorous archiving of old files, their system is running out of space. They have a small file server sitting near the users’ desks. Can the business owner upgrade the file server with a bigger hard drive or should he add a separate rack of inexpensive drives? How much space will they need? Will a terabyte be enough? What if they need to upgrade in the future? How hard will it be? What other hidden costs are they going to run into?

In another scenario, a business that uses 30-40 desktop machines has a file server located in a separate room with adequate cooling, user access management, and a solid network infrastructure. But they too are running out of space. When they plan for an expansion, what hidden costs will they need to consider?

In addition to the equipment investment, there are many hidden costs to consider when determining storage needs and managing them over time. The following are some of the hidden costs associated with storage.

Storage management software
How can you get the most out of existing storage space and keep it from filling up so quickly? And how do you prevent your storage from running out before its full life expectancy is realized? This is where storage management software, such as SRM and ILM, enters the picture. Storage Resource Management (SRM) software provides storage administrators with the tools to manage space effectively. Information Lifecycle Management (ILM) software helps manage data through its lifecycle.

While viable solutions, SRM and ILM software may not cover all the needs of a business environment. SRM and ILM software are designed to manage files and storage effectively, with a level of automation. Beyond this is where good old-fashioned space management comes in. Remember the days when space was at a premium and there were all sorts of methods to make sure that inactive files were stored somewhere else—like on floppies? Remember when file compression utilities came out and we were squeezing every duplicate byte out of files? Those techniques are not outdated just because the cost per MB has dropped or tools exist to help us manage data storage. Prudent storage practices never go out of style.

Power consumption
Manufacturers are working hard to optimize the performance of their machines, yet server power consumption remains on the increase. What will be the power requirement of your company’s new storage solution? Luiz André Barroso at Google reports that if performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin.

Power consumption can be a hidden recurring cost that may not have been anticipated when expanding storage. Especially when you consider the fluctuating cost of energy, unanticipated increases in power usage can be an expensive budget buster affecting the entire enterprise.
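To see how this cost adds up, here is a back-of-the-envelope sketch; the wattage and electricity rate below are illustrative assumptions, not measurements of any particular storage server:

```python
# Rough annual power cost for always-on storage hardware.
# draw_watts and rate_per_kwh are assumed example figures.
draw_watts = 400          # average draw of one storage server
rate_per_kwh = 0.12       # electricity price in dollars per kWh
hours_per_year = 24 * 365

kwh_per_year = draw_watts / 1000 * hours_per_year
annual_cost = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> ${annual_cost:.2f}/year")
```

Multiply by the number of machines (and again for the cooling they require) and it becomes clear why power can rival hardware cost over a machine's lifetime.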

Cooling requirements
Closely related to power consumption is the need to keep the more powerful processors found in the latest machines cool. Both the performance and the life expectancy of the equipment are related to component temperature. Ever since the Pentium II processor in 1997, proper heat dissipation using heat sinks and cooling fans has been standard for computer equipment. Today’s high performance processors, main boards, video cards, and hard drives require reliable temperature management in order to work effectively and efficiently day in, day out.

If your or your client’s storage requirements grow, proper ambient server room temperatures are going to be required. Adding such a room or creating the necessary environment may add build-out costs, not to mention increase the power consumption and energy costs mentioned earlier.

Noise
With proper heat dissipation and cooling comes noise. All those extra fans and cooling compressors can generate a noticeable number of decibels. A large-scale IT environment has the luxury of keeping its noisy machines away from users. In a smaller-scale business or home business, however, some have found the sound levels generated by their storage equipment to be intolerable, or at minimum concentration-breaking. Such noise makes surrounding areas non-conducive to work and productivity, hindering employees’ ability to simply think. When increasing your data storage, make sure the resulting noise is tolerable. Be sure, too, that noise suppression efforts don’t interfere with or defeat heat dissipation and cooling solutions.

Administrative cost
The equipment investment for the expansion may be significant, but how does the increased storage affect administrative needs? Should management hire a network consultant to assess user needs, then install, set up, and test the new equipment? Or can the company’s in-house network administrator do the work? A small company runs a risk here: it might not be able to afford a professional assessment and installation, but with an inexpensive solution it may learn the old adage of “you get what you pay for” the hard way.

A non-professional might misdiagnose storage usage needs, set up the equipment incorrectly, or buy equipment that isn’t a good fit for the environment. Such unintentional blunders are why there are certifications for network professionals. Storage management is not as simple as adding more space when needed; it is a complicated, multi-layered endeavor affecting every aspect and employee of a business.

Although using the skills of a professional greatly increases the odds of a successful storage expansion, it will raise the final cost. When considering the monetary expense, businesses must also consider how many other ‘costs’ – overall risk, loss of data availability, system downtime if the implemented solution fails – they can afford.

Backup management
How does your business currently manage backup cycles and corresponding storage needs? Do you store your backups on-site, or do you have a safe alternate location at which to store this precious data? Natural disasters such as fires and floods, and extreme disasters like Hurricane Katrina, are wake-up calls to many who are resistant to the idea of offsite data storage. Offsite data storage may be as simple as storing backup tapes off site or archiving data with data farms for a monthly space rental fee, or as complex as having a mirrored site housing a direct copy of all your data (effective but costly).

Whatever backup management and storage process is utilized, the backups created should be tested, as should the backup system with the expanded storage, to make sure everything is actually being backed up. There is nothing worse than relying on a backup that doesn’t work, was improperly created, or doesn’t contain the vital data your business needs.

Database storage
The databases created as a result of daily business activities can be staggering in size (as referenced in the earlier example of one large retail corporation’s generation of a billion rows of sales data daily). One way to optimize database performance is to separate the database files and store them in three different locations: data files in one location, transaction files or logs in a second, and backups in a third. This not only makes data processing more efficient but also prevents an “all the eggs in one basket” scenario, which is beneficial during a process disruption such as an equipment failure.

Undertaking this type of database optimization involves the aforementioned planning and equipment costs. But keep in mind how database information has reached into all areas of the business – customer information, billing information, and inventory management information – and how vital it is that this information be protected. Hidden costs associated with protecting database information can escalate quickly.
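One practical wrinkle in the three-way split above is confirming the locations really live on separate devices. A minimal sketch, assuming Unix-style mount points (the paths below are hypothetical):

```python
import os

# Hypothetical mount points for the data/logs/backups split.
locations = {
    "data":    "/mnt/data/db",
    "logs":    "/mnt/logs/db",
    "backups": "/mnt/backup/db",
}

def on_distinct_devices(paths):
    """Return True if every path sits on a different device.

    os.stat().st_dev identifies the device a file lives on, so two
    paths with the same st_dev share a volume (and its failure modes).
    """
    devices = [os.stat(p).st_dev for p in paths]
    return len(set(devices)) == len(devices)

# Usage sketch: warn if two roles share a volume.
# if not on_distinct_devices(locations.values()):
#     print("warning: data, logs, and backups share a device")
```

Separate directories on the same physical drive give the efficiency benefit but not the "eggs in one basket" protection, which is exactly what a check like this catches.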

Installation and cabling
The old trend was a standalone unit where the processor and storage were one system. Now the trend is to build a separate networked storage system that can be accessed by many users and servers. In general, there are two types of separate storage systems: the storage area network (SAN) and network attached storage (NAS).

The separate storage system offers a number of advantages, including easier expansion. The consideration, however, is that you will need the network infrastructure to support it. In other words, if your storage system is in a separate building, you will need fast network connectivity to avoid a communication “bottleneck” between the server and the storage device.

Disaster recovery
A disaster recovery plan encompasses everything that could happen if there is a system failure due to destruction, natural disaster, fire, theft or equipment failure. Part of a good disaster recovery plan is a business continuation plan, that is, how to keep the business operating despite the disaster. When planning for a data storage expansion, the disaster recovery plan should be reviewed to make sure the company’s data will be accessible in the event of a contingency, and it should be closely aligned with business continuity planning and efforts.

Data recovery
Data recovery can become a hidden cost if not planned for. Every business continuity plan and disaster plan should include professional data recovery services as part of their overall solution.

As you can see, there is much more to scalable growth than just adding more storage space. Even when prudent planning and every precaution have gone into implementing an effective storage management solution, failures and unforeseen circumstances can and do occur. Simply put, despite the best preparation, disasters happen.


Avoid the Backup Tape Graveyard

Many businesses could find that when in need of a data recovery, their data is not retrievable because it is stored on old tape formats.

Many organizations have electronic data dating back decades, and the chances of it becoming unretrievable or inaccessible over time are fairly high. Furthermore, retrieving data that is stored on out-of-date tapes can be costly and may require special equipment.

It is important for a company to look at its past as well as its future. Information that may need to be accessed must be transferred to modern media formats in order to be compliant with current legislation and recoverable in the event of data loss. By maintaining up-to-date records and data on modern media formats, extraction can be quick and painless. Furthermore, storage costs will decrease and the organization will be better aligned with compliance regulations.

In addition to ensuring backups are stored on modern media formats, the following tips may help minimize the chance of backup failure/data disaster.

1. Verify your backups. Backups, regardless of age, can fail or be incomplete, and this often goes undiscovered until they are needed.

2. Store backup tapes off site. This will ensure your files are preserved if your site experiences a fire, flood or other disaster.

3. Create a safe “home” for your backup tapes. Keep backup tapes stored in a stable environment, free of extreme temperatures, humidity and electromagnetism.

4. Track the “expiration date.” Backup tapes are typically rated to be used from 5,000 to 500,000 times, depending on the type of tape. Tape backup software will typically keep track of the tapes, regardless of the rotation system.

5. Maintain your equipment. Clean your tape backup drive periodically, following the directions in its manual regarding frequency. Most businesses simply send the drive back to the manufacturer when it begins to have problems, but if a drive has problems, so can the backup tapes.
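Tip 4's "expiration date" tracking can be sketched as a small use-count ledger. The rated limit and tape labels below are assumptions for illustration; real tape backup software tracks this for you:

```python
# Minimal tape-rotation ledger: retire tapes nearing their rated
# use count. 5,000 is the conservative end of the rated range.
RATED_USES = 5_000

use_counts = {}  # tape label -> times used

def record_use(label):
    """Increment the use count each time a tape is written."""
    use_counts[label] = use_counts.get(label, 0) + 1

def needs_retirement(label, margin=0.9):
    """Flag a tape once it passes 90% of its rated uses."""
    return use_counts.get(label, 0) >= RATED_USES * margin

record_use("MON-01")
print(needs_retirement("MON-01"))  # False: far below the limit
```

The margin matters: you want to rotate a tape out before it fails, not after.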


Lost Data: To Recover or Not to Recover

A data loss has occurred – now what? Deciding whether to recover lost data can be difficult. There are several things to take into consideration when determining if data recovery is required.

Backup, Backup, Backup
Everyone knows the importance of a good backup system, so your first step should be to determine if the data is actually backed up. Often, lost data is stored on a backup tape, a backup hard drive, the network, or various other locations throughout an organization.

Unfortunately, locating and reloading the lost information can be time consuming and deplete resources. If a backup is located, it is important to check that the most recent copy of the data is available. Backups often occur on a set schedule, and if modifications to the data were saved after the backup occurred, those changes will not be accessible.

Re-Creation
Another important option to consider is whether the data can or should be re-created. Two items to take into account when considering this option are the type of data lost and the amount lost:

  • Type of Data – Different data may have different perceived value. Recovering a customer database is (probably) more important than recovering a file containing possible names for a pet goldfish. Is the missing data a high-volume transaction database, such as a banking record? It would be nearly impossible to recreate the thousands of transactions that were happening in real time. Other types of data, such as digital photos, may not be re-creatable at all. Understanding the type of data that was lost is imperative to determining your next steps.
  • Amount of Data – Understanding how much data was lost can help you understand how much time and resources would be required to re-create the data. The more data lost, the more time and resources required to re-create it – if re-creation is even possible.

An additional point to consider is that with strict regulatory and legal requirements, many companies need access to their lost data in order to comply with these requirements. Accessibility to data and the legal requirements surrounding that data are essential to understand when considering if data recovery is necessary or not.

Data recovery costs can be difficult to plan for because they are unexpected. No one wants to lose data, just as no one wants their car to break down or to have to call a plumber for a broken pipe. To put it into perspective with other business-related costs: vending services and that morning cup of coffee can run between $500 and $1,000 every month for a small business office, while an average recovery fee for a typical desktop, Windows-based system is around $1,000. Comparing those figures, the true value of data recovery becomes clear.


Protecting Data from Severe Weather

You can protect your data by following some simple precautions. With that said, even the most well-protected hard drives can crash, fail, quit, click, die… you get the picture. So here are a few tips for how to respond when extreme weather does damage your computer equipment.

1. Summer heat can be a significant problem, as overheating can lead to drive failures. Keep your computer in a cool, dry area to prevent overheating.

2. Make sure your servers have adequate air conditioning. Increases in computer processor speed have resulted in more power requirements, which in turn require better cooling – especially important during the summer months.

3. To prevent damage caused by lightning strikes, install a surge protector between the power source and the computer’s power cable to handle any power spikes or surges.

4. Invest in some form of Uninterruptible Power Supply (UPS), which uses batteries to keep computers running during power outages. UPS systems also help manage an orderly shutdown of the computer – unexpected shutdowns from power surge problems can cause data loss.

5. Check protection devices regularly: At least once a year you should inspect your power protection devices to make sure that they are functioning properly.

Responding to Data Loss Caused by Severe Weather

1. Do not attempt to operate visibly damaged computers or hard drives.

2. Do not shake, disassemble or attempt to clean any hard drive or server that has been damaged – improper handling can make recovery operations more difficult, which can lead to valuable information being lost.

3. Never attempt to dry water-damaged media by opening it or exposing it to heat – such as that from a hairdryer. In fact, keeping a water-damaged drive damp can improve your chances for recovery.

4. Do not use data recovery software to attempt recovery on a physically damaged hard drive. Data recovery software is only designed for use on a drive that is fully functioning mechanically.

5. Choose a professional data recovery company to help you.


Computer Virus

How do you protect your computer from getting a virus?
In today’s world, having anti-virus software is not optional. A good anti-virus program will perform real-time and on-demand virus checks on your system and warn you if it detects a virus. The program should also provide a way for you to update its virus definitions, or signatures, so that your virus protection stays current (new viruses are discovered all the time). It is important to keep your virus definitions as current as possible.

Once you have purchased an anti-virus program, use it to scan new programs before you execute or install them and new diskettes (even if you think they are blank) before you use them.

You can also take the following precautions to protect your computer from getting a virus:

  • Always be very careful about opening attachments you receive in an email — particularly if the mail comes from someone you do not know. Avoid accepting programs (EXE or COM files) from USENET news group postings. Be careful about running programs that come from unfamiliar sources or have come to you unrequested. Be careful about using Microsoft Word or Excel files that originate from an unknown or insecure source.
  • Avoid booting off a diskette by never leaving a floppy disk in your system when you turn it off.
  • Write protect all your system and software diskettes when you obtain them. This will stop a computer virus from spreading to them if your system becomes infected.
  • Change your system’s CMOS Setup configuration to prevent it from booting from the diskette drive. If you do this a boot sector virus will be unable to infect your computer during an accidental or deliberate reboot while an infected floppy is in the drive. If you ever need to boot off your Rescue Disk, remember to change the CMOS back to allow you to boot from diskette!
  • Configure Microsoft Word and Excel to warn you whenever you open a document or spreadsheet that contains a macro (in Microsoft Word check the appropriate box in the Tools | Options | General tab).
  • Write-protect your system’s NORMAL.DOT file. By making this file read-only, you will hopefully notice if a macro virus attempts to write to it.
  • When you need to distribute a Microsoft Word file to someone, send an RTF (Rich Text Format) file instead. RTF files do not support macros, so you can ensure that you won’t inadvertently send a macro-infected file.
  • Rename your C:\AUTOEXEC.BAT file to C:\AUTO.BAT. Then create a new C:\AUTOEXEC.BAT containing the following single line: auto. By doing this you can easily notice any viruses or trojans that try to add to, or replace, your AUTOEXEC.BAT file. Additionally, if a virus attempts to add code to the bottom of the file, it will not be executed.
  • Finally, always make regular backups of your computer files. That way, if your computer becomes infected, you can be confident of having a clean backup to help you recover from the attack.

What types of files should be scanned and set for auto-protection?
Here’s a list of file extensions that you should make sure your anti-virus software scans and auto protects:

386, ADT, BIN, CBT, CLA, COM, CPL, CSC, DLL, DOC, DOT, DRV, EXE, HTM, HTT, JS, MDB, MSO, OV?, POT, PPT, RTF, SCR, SHS, SYS, VBS, XL?
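In the list above, `?` is a single-character wildcard (so OV? covers OVL and OVR, and XL? covers XLS and XLT). A small sketch of how a scanner might match against this list, using Python's `fnmatch` for the wildcards:

```python
from fnmatch import fnmatch

# Extension patterns from the list above; "?" matches one character.
SCAN_PATTERNS = [
    "386", "ADT", "BIN", "CBT", "CLA", "COM", "CPL", "CSC", "DLL",
    "DOC", "DOT", "DRV", "EXE", "HTM", "HTT", "JS", "MDB", "MSO",
    "OV?", "POT", "PPT", "RTF", "SCR", "SHS", "SYS", "VBS", "XL?",
]

def should_scan(filename):
    """Decide whether a file's extension is on the scan list."""
    ext = filename.rsplit(".", 1)[-1].upper()
    return any(fnmatch(ext, pattern) for pattern in SCAN_PATTERNS)

print(should_scan("report.xls"))  # True (matches XL?)
print(should_scan("notes.txt"))   # False
```

Real anti-virus products use their own matching engines, of course; this only illustrates how the `?` entries in the list behave.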

What are some good indications that the computer has a virus?
A very good indicator is having anti-virus software tell you that it found several files on a disk infected with the same virus (sometimes if the software reports just one file is infected, or if the file is not a program file — an EXE or COM file — it is a false report).

Another good indicator is if the reported virus was found in an EXE or COM file or in a boot sector on the disk.

If Windows cannot start in 32-bit disk or file access mode, your computer may have a virus.

If several executable files (EXE and COM) on your system are suddenly and mysteriously larger than they were previously, you may have a virus.
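Watching for that kind of mysterious growth can be automated by recording a baseline of file sizes and comparing later. A minimal sketch (the paths in the usage comment are hypothetical):

```python
import os

def snapshot_sizes(paths):
    """Record a baseline of file sizes for later comparison."""
    return {path: os.path.getsize(path) for path in paths}

def grown_files(baseline):
    """Return files now larger than their recorded size -- a
    classic symptom of a file-infecting virus appending itself."""
    return [path for path, size in baseline.items()
            if os.path.getsize(path) > size]

# Usage sketch (hypothetical paths):
# baseline = snapshot_sizes(["C:/DOS/EDIT.EXE", "C:/DOS/FORMAT.COM"])
# ... some time later ...
# print(grown_files(baseline))
```

A real integrity checker would compare hashes rather than sizes (some viruses pad files to keep the size constant), but the principle is the same.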

If you get a warning that a Microsoft Word document or Excel spreadsheet contains a macro but you know that it should not have a macro (you must first have the auto-warn feature activated in Word/Excel).

What are the most common ways to get a virus?
One of the most common ways to get a computer virus is by booting from an infected diskette.  Another way is to receive an infected file (such as an EXE or COM file, or a Microsoft Word document or Excel spreadsheet) through file sharing, by downloading it off the Internet, or as an attachment in an email message.

What should you do when you get a virus?
First, don’t panic! Resist the urge to reformat or erase everything in sight. Write down everything you do in the order that you do it.  This will help you to be thorough and not duplicate your efforts.  Your main actions will be to contain the virus, so it does not spread elsewhere, and then to eradicate it.

If you work in a networked environment, where you share information and resources with others, do not be silent.  If you have a system administrator, tell her what has happened.  It is possible that the virus has infected more than one machine in your workgroup or organization.  If you are on a local area network, remove yourself physically from it immediately.

Once you have contained the virus, you will need to disinfect your system, and then work carefully outwards to deal with any problems beyond your system itself (for example, you should meticulously and methodically look at your system backups and any removable media that you use).  If you are on a network, any networked computers and servers will also need to be checked.

Any good anti-virus software will help you to identify the virus and then remove it from your system.  Viruses are designed to spread, so don’t stop at the first one you find; continue looking until you are sure you’ve checked every possible source.  It is entirely possible that you could find several hundred copies of the virus throughout your system and media!

To disinfect your system, shut down all applications and shut down your computer right away.  Then, if you have Fix-It Utilities 99, boot off your System Rescue Disk.  Use the virus scanner on this rescue disk to scan your system for viruses.  Because the virus definitions on your Rescue Disk may be out of date and not as comprehensive as the full Virus Scanner in Fix-It, once it has cleared your system of known viruses, boot into Windows and use the full Virus Scanner to do an “On Demand” scan set to scan all files.  If you haven’t run Easy Update recently to get the most current virus definition files, do so now.

If the virus scanner can remove the virus from an infected file, go ahead and clean the file.  If the cleaning operation fails, or the virus software cannot remove it, either delete the file or isolate it.  The best way to isolate such a file is to put it on a clearly marked floppy disk and then delete it from your system.

Once you have dealt with your system, you will need to look beyond it at things like floppy disks, backups and removable media.  This way you can make sure that you won’t accidentally re-infect your computer.  Check all of the diskettes, zip disks, and CD-ROMs that may have been used on the system.

Finally, ask yourself who has used the computer in the last few weeks.  If there are others, they may have inadvertently carried the infection to their computer, and be in need of help.  Viruses can also infect other computers through files you may have shared with other people.  Ask yourself if you have sent any files as email attachments, or copied any files from your machine to a server, web site or FTP site recently.  If so, scan them to see if they are infected, and if they are, inform other people who may now have a copy of the infected file on their machine.


Remove Password Protection from Microsoft Word Document

Nearly all Microsoft programs have an option for setting up different levels of passwords. These passwords can be used for specific actions, such as preventing reading, accessing or modifying a particular file. We have all used the feature at some point, and it is a great tool to have at your disposal to protect the privacy of your work. The only problem is: what do you do if you forget the password yourself? Is the file forever inaccessible? Can you recover the lost password? There was a time when a forgotten password was gone for good, but there have since been huge advances in technology, and software is now available to help you remove password protection from Microsoft Word and other MS Office programs. So if you wish to recover your lost password and access your important file again, continue reading to find out how.

Microsoft Office 2007 and onwards came with significantly improved security features. Passwords used to open Word documents are extremely hard to break, but they can nevertheless be cracked. Two basic methods of password breaking can be used: dictionary search and brute force attack. The quicker option is the dictionary search; however, this won’t be of much help if the password was generated artificially. The other option, the brute force attack, tries to track the password down by searching all possible combinations of the specified symbols, starting from very short sequences, so a password can be recovered no matter how long or complex it might be. If the forgotten password contained fewer than six symbols, it can be recovered quickly. Longer passwords take a while to recover, but with the aid of good software they can be retrieved fairly easily.
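The brute force idea, enumerating combinations shortest-first, can be sketched in a few lines. The `check` function here is a toy stand-in for a real password test (e.g. attempting to open the document), and the secret is an assumption for the demo:

```python
from itertools import product

def brute_force(check, symbols, max_len):
    """Try every combination of `symbols`, shortest first,
    until `check` accepts one. Returns the match or None."""
    for length in range(1, max_len + 1):
        for combo in product(symbols, repeat=length):
            candidate = "".join(combo)
            if check(candidate):
                return candidate
    return None

# Toy demonstration with a known "secret".
secret = "cab"
found = brute_force(lambda s: s == secret, "abc", max_len=4)
print(found)  # "cab"
```

The search space grows exponentially with length (with 26 letters there are 26^n candidates of length n), which is exactly why short passwords fall quickly and long ones take so much longer.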

So if you have accidentally lost or forgotten an important password to a Microsoft Office (Word) document, don’t agonize over it; you can still recover the lost password and remove the password protection. All you need to do is get your hands on good password recovery software and you should be able to access your restricted files in a matter of minutes. Most password removal tools offer a free download these days, so you can try the program risk free to see if your passwords are recoverable.


Dealing with the Complexity of Storage Systems

In fact, even with all the advancements in storage technology, only about 20%* of back-up jobs are successful (*according to Enterprise Strategy Group).

Each year hundreds of new data storage products and technologies meant to make the job faster and easier are introduced, but with so many categories and options to consider, the complexity of storage instead causes confusion – which ultimately leads to lost time and the loss of the very data such new enhancements are meant to avoid.

Hence the question for most IT professionals who have invested hundreds of thousands of dollars in state-of-the-art storage technology remains, “How can data loss still happen and what am I supposed to do about it?”

Why Backups Still Fail
In a perfect world, a company would build its storage infrastructure from scratch using any of the new storage solutions and standardize on certain vendors or options. If everything remained unchanged, some incredibly powerful, rock-solid results could be achieved.

However, in the real world storage is messy. Nothing remains constant – newly created data is added at an unyielding pace while new regulations, such as Sarbanes-Oxley, mandate changes in data retention procedure. Since companies can rarely justify starting over from scratch, most tend to add storage in incremental stages – introducing new elements from different vendors at different times – hence the complexity of storage.

All this complexity can lead to a variety of backup failures that catch companies unprepared for the ramifications of data loss. One reason backups fail is bad media: if a company's backup tapes have been sitting on a shelf for years, they can become damaged and unreadable, a common occurrence when tapes are not stored properly. Another reason is that companies lose track of the software with which the backups were created; for a restore to succeed, most software packages require that the exact environment still be available. Finally, backups fail due to corruption in the backup process: companies often change their data footprint but not their backup procedure, so they are not backing up what they think they are. Without regular testing, all of these are likely sources of failure.
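
One form the "regular testing" mentioned above can take is routine checksum verification: hash every file in the source and compare against the backup copy, so silent corruption or a drifting data footprint is caught before a restore is ever needed. This is a minimal sketch, assuming plain directory-to-directory copies (the paths and function names are illustrative, not any vendor's API):

```python
# Compare a source tree against its backup copy by SHA-256 digest.
import hashlib
import os

def checksums(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    sums = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                sums[rel] = hashlib.sha256(f.read()).hexdigest()
    return sums

def verify_backup(source_root, backup_root):
    """Return (missing, corrupt): files absent from the backup, and
    files present in both trees whose contents differ."""
    src, bak = checksums(source_root), checksums(backup_root)
    missing = sorted(set(src) - set(bak))
    corrupt = sorted(p for p in src if p in bak and src[p] != bak[p])
    return missing, corrupt
```

Run on a schedule, a report of non-empty `missing` or `corrupt` lists is exactly the early warning that the backup procedure has fallen behind the data footprint.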

What to Do When Your Backup Fails
No matter how much a company tries to speed operations and guard against problems with new products and technology, the threat of data loss remains, and backup and storage techniques do not always provide the necessary recovery. When an hour of downtime can result in millions of dollars lost, including data recovery in your overall disaster plan is critical and may be the only way to restore business continuity quickly and efficiently. When a data loss situation occurs, time is the most critical component: decisions about the most prudent course of action must be made quickly, which is why administrators must understand when to repair, when to restore, and when to recover data.

When to Repair
This is as simple as running file repair tools such as fsck or CHKDSK (tools that attempt to repair broken links in the file system through very specific knowledge of how that file system is supposed to look) in read-only mode first, since running an actual repair on a system with many errors could overwrite data and make the problem worse. Depending on the results of the read-only diagnosis, the administrator can make an informed decision to repair or recover. If the diagnosis finds a limited number of errors, it is probably fine to go ahead and fix them, as the repair tool will yield good results.
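
The "diagnose before you repair" rule can be wrapped in a small script. This is a sketch assuming a Unix-like host with fsck on PATH and an unmounted device; the device path and the `fsck_cmd` parameter are illustrative (the parameter also lets you target a specific checker such as `fsck.ext4`):

```python
# Run fsck in no-change mode first: -n answers "no" to every repair
# prompt, so the file system is inspected but never modified.
import subprocess

def readonly_check(device, fsck_cmd="fsck"):
    """Return (exit_code, report) from a read-only fsck pass."""
    result = subprocess.run(
        [fsck_cmd, "-n", device],
        capture_output=True, text=True,
    )
    # fsck exit status is a bit mask: 0 = clean, 4 = errors left
    # uncorrected; review the report before deciding what to do next.
    return result.returncode, result.stdout

# Only after reviewing the read-only report (and only if the error
# count is small) would you run the real repair with fsck -y.
```

If the read-only pass reports widespread damage, that is the signal to stop and consider recovery rather than letting the tool rewrite a badly broken file system.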

Note: if your hard drive makes strange noises at any point, immediately skip to the recovery option.

When to Restore
The first question an admin should ask is how fresh the last backup is and whether a restore will get them to the point where they can effectively continue normal operations. There can be a significant difference between data from the last backup and data at the point of failure, so it is important to make that distinction right away; if critical data was never backed up, only a recovery can help. Another important question is how long the restore will take to complete; if the necessary time is too long, other options may be needed. A final consideration is how much data is being restored: several terabytes, for example, will take a long time to restore from tape backups.
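
The restore-time question lends itself to a back-of-the-envelope calculation. The throughput figure below is illustrative only; sustained rates vary widely by tape generation, network path, and verify passes:

```python
# Rough restore-time estimate from data size and sustained throughput.
def restore_hours(data_tb, throughput_mb_s):
    """Hours to stream data_tb terabytes at a sustained MB/s rate."""
    total_mb = data_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal units)
    return total_mb / throughput_mb_s / 3600

# e.g. 5 TB at a sustained 120 MB/s:
print(f"{restore_hours(5, 120):.1f} hours")  # 11.6 hours
```

Even this optimistic streaming estimate, with no tape changes or retries, can exceed an acceptable outage window, which is exactly the point at which other options must be weighed.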

When to Recover
The decision to recover comes down to whether or not a company’s data loss situation is critical and how much downtime they can afford. If they don’t have enough time to schedule the restore process, it is probably best to move forward with recovery. Recovery is also the best method if backups turn out to be too old or there is some type of corruption. The bottom line is, if other options are attempted and those options fail, it is best to contact a recovery company immediately. Some administrators will try multiple restores or repairs before trying recovery and will actually cause more damage to the data.

Why you Should Have a Disaster Recovery Plan in Place

Disasters are inevitable. You never know when an entire system is going to crash or another disaster will strike, so you have to be prepared. If you're not, everything descends into chaos: no one knows what to do, everyone runs around asking each other, "What do we do now?" and no one has an answer.

What is a disaster recovery plan?
A disaster recovery plan is the protocol your employees follow when a disaster strikes. You have to evaluate everything that could go wrong within your business and have a recovery plan for each of those situations. Since no two situations are the same, there has to be a protocol for each. From there, your employees have to study the plan and know what to do immediately; that means memorizing it, because many disasters leave no time to pull out a manual and read what needs to happen. They have to act immediately.

But why have a disaster recovery plan in place?
You should have one in place because you need to conduct business in the best manner possible for your customers. Your customers expect seamless service no matter what, so you have to try to make things as convenient for them as possible. If you don’t, then you risk losing their business.

Your disaster recovery plan will include dealing with data loss during a natural disaster, system meltdowns, power surges, and much more. What kind of plans you use depends on the sort of business you are in. Just make sure that you cover all of your bases and that you also have a master plan for anything that does not have its own plan. You just never know what could happen.

Statistics
Statistics have shown that businesses with a disaster recovery plan are among those that recover best. Companies that experience a disaster lasting more than 10 days seldom recover financially, and 50% of those without a disaster recovery plan spend so much time making up for lost cash that they are most likely out of business within 5 years. That is not something you want to deal with. The cost of an outage that lasts only a few days is already bad enough: contracts can be broken, credibility can be lost, and future customers may never be acquired. These are extreme losses.

So take these statistics to heart so that you know why you need a disaster recovery plan. No business should fold because of an outage, so you need to stay on top of things. Realize that anything that prevents you from carrying out your business can do irreparable damage. Your customers expect you to be there for them whenever they need you, and nothing is more frustrating to them than an issue you can't resolve because of an outage. If their request is not fulfilled, they may suddenly become your competitor's newest customer.

Data Protection Schemes for Storage Systems

Storage system manufacturers are pursuing unique ways of processing large amounts of data while still providing redundancy in case of disaster. Some large SAN units incorporate intricate device block-level organization, essentially creating a low-level file system from the RAID perspective. Other SAN units have an internal block-level transaction log in place, so that the SAN's control processor tracks all of the block-level writes to the individual disks. Using this transaction log, the SAN unit can recover from unexpected power failures or shutdowns.
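
The transaction-log idea is the classic write-ahead journaling pattern. Here is a toy illustration of it (not any vendor's actual implementation): the controller records each intended block write before applying it, so after a power failure it can replay whatever was in flight:

```python
# Toy block device with a write-ahead transaction log. In-memory only;
# a real controller would persist the log to NVRAM or a journal region.
class BlockDevice:
    def __init__(self, nblocks, block_size=16):
        self.blocks = [b"\x00" * block_size] * nblocks
        self.log = []  # pending (block_number, data) journal entries

    def write(self, blockno, data):
        self.log.append((blockno, data))   # 1. log the intent
        self.blocks[blockno] = data        # 2. apply to the disk
        self.log.pop()                     # 3. retire the log entry

    def recover(self):
        """After an unexpected shutdown, any entry still in the log is a
        write of unknown on-disk state: re-apply it to reach a
        consistent result."""
        while self.log:
            blockno, data = self.log.pop(0)
            self.blocks[blockno] = data
```

A crash between steps 1 and 2 leaves the entry in the log; `recover()` replays it, which is exactly how the SAN's control processor can bring the disks back to a known-good state after a power failure.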

Some computer scientists specializing in storage systems propose adding more intelligence to the RAID array controller card so that it is 'file system aware.' This technology would provide more recoverability in case disaster strikes, the goal being a storage array that is more self-healing.

Other ideas along these lines involve a heterogeneous storage pool where multiple computers can access information without being dependent on a specific system's file system. In organizations with multiple hardware and system platforms, a transparent file system provides access to data regardless of which system wrote it.

Other computer scientists are approaching the redundancy of the storage array quite differently. The RAID concept is in use on a vast number of systems, yet computer scientists and engineers are looking for new ways to provide better data protection in case of failure. The goals that drive this type of RAID development are data protection and redundancy without sacrificing performance.

The University of California, Berkeley report on the amount of digital information produced in 2003 is staggering. Your site or your client's site may not hold terabytes or petabytes of information, yet during a data disaster, every file is critically important.

Avoiding Storage System Failures
There are many ways to reduce or eliminate the impact of storage system failures. You may not be able to prevent a disaster from happening, but you may be able to minimize the disruption of service to your clients.

There are many ways to add redundancy to primary storage systems. Some of the options can be quite costly and only large business organizations can afford the investment. These options include duplicate storage systems or identical servers, known as ‘mirror sites’. Additionally, elaborate backup processes or file-system ‘snapshots’ that always have a checkpoint to restore to, provide another level of data protection.

Experience has shown there are usually multiple or rolling failures that happen when an organization has a data disaster. Therefore, to rely on just one restoration protocol is shortsighted. A successful storage organization will have multiple layers of restoration pathways.

Here are several risk mitigation policies that storage administrators can adopt that will help minimize data loss when a disaster happens:

Offline storage system — Avoid forcing an array or drive back on-line. There is usually a valid reason for a controller card to disable a drive or array, and forcing the array back on-line may expose the volume to file system corruption.

Rebuilding a failed drive — When rebuilding a single failed drive, it is important to allow the controller card to finish the process. If a second drive fails or goes off-line during the rebuild, stop and get professional data recovery services involved. During a rebuild, replacing a second failed drive will change the data on the other drives.

Storage system architecture — Plan the storage system’s configuration carefully. We have seen many cases with multiple configurations used on a single storage array. For example, three RAID 5 arrays (each holding six drives) are striped in a RAID 0 configuration and then spanned. Keep a simple storage configuration and document each aspect of it.

During an outage — If the problem escalates up to the OEM technical support, always ask “Is the data integrity at risk?” or, “Will this damage my data in any way?” If the technician says that there may be a risk to the data, stop and get professional data recovery services involved.
