Diffstat (limited to 'en/using-d-i/modules/partman-md.xml')
-rw-r--r--  en/using-d-i/modules/partman-md.xml | 300
1 file changed, 300 insertions(+), 0 deletions(-)
diff --git a/en/using-d-i/modules/partman-md.xml b/en/using-d-i/modules/partman-md.xml
new file mode 100644
index 000000000..9dffad9f8
--- /dev/null
+++ b/en/using-d-i/modules/partman-md.xml
@@ -0,0 +1,300 @@
+<!-- retain these comments for translator revision tracking -->
+<!-- $Id$ -->
+
+ <sect3 id="partman-md">
+ <title>Configuring Multidisk Devices (Software RAID)</title>
+<para>
+
+If you have more than one hard drive<footnote><para>
+
+To be honest, you can construct an MD device even from partitions
+residing on a single physical drive, but that won't bring any benefit.
+
+</para></footnote> in your computer, you can use
+<command>partman-md</command> to set up your drives for increased
+performance and/or better reliability of your data. The result is
+called a <firstterm>Multidisk Device</firstterm> (or, after its most
+famous variant, <firstterm>software RAID</firstterm>).
+
+</para><para>
+
+MD is basically a set of partitions located on different disks,
+combined to form a <emphasis>logical</emphasis> device. This
+device can then be used like an ordinary partition (i.e. in
+<command>partman</command> you can format it, assign a mountpoint,
+etc.).
+
+</para><para>
+
+What benefits this brings depends on the type of MD device you are
+creating. The currently supported types are:
+
+<variablelist>
+<varlistentry>
+
+<term>RAID0</term><listitem><para>
+
+Is mainly aimed at performance. RAID0 splits all incoming data into
+<firstterm>stripes</firstterm> and distributes them equally over each
+disk in the array. This can increase the speed of read/write
+operations, but when one of the disks fails, you will lose
+<emphasis>everything</emphasis> (part of the information is still on
+the healthy disk(s), the other part <emphasis>was</emphasis> on the
+failed disk).
+
+</para><para>
+
+The typical use for RAID0 is a partition for video editing.
+
+</para></listitem>
+</varlistentry>
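The striping idea behind RAID0 can be sketched in a few lines of Python. This is a toy model only (the chunk size and helper name below are illustrative, not part of the Linux md driver): data is cut into fixed-size chunks that are dealt round-robin across the member disks, so losing any one disk loses a share of every stripe.

```python
# Toy model of RAID0 striping; CHUNK and stripe_raid0 are illustrative
# names, not md driver code (real chunk sizes are e.g. 512 KiB).
CHUNK = 4  # bytes per chunk in this toy example


def stripe_raid0(data: bytes, n_disks: int) -> list:
    """Deal fixed-size chunks of data round-robin over n_disks."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), CHUNK):
        disks[(i // CHUNK) % n_disks].extend(data[i:i + CHUNK])
    return disks


disks = stripe_raid0(b"ABCDEFGHIJKLMNOP", 2)
# Disk 0 receives chunks 0 and 2, disk 1 receives chunks 1 and 3:
# bytes(disks[0]) == b"ABCDIJKL", bytes(disks[1]) == b"EFGHMNOP"
```

If either disk fails, half of every stripe is gone, which is why RAID0 offers no redundancy at all.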
+<varlistentry>
+
+<term>RAID1</term><listitem><para>
+
+Is suitable for setups where reliability is the first concern. It
+consists of several (usually two) equally-sized partitions where every
+partition contains exactly the same data. This essentially means three
+things. First, if one of your disks fails, you still have the data
+mirrored on the remaining disks. Second, you can use only a fraction
+of the available capacity (more precisely, it is the size of the
+smallest partition in the RAID). Third, file-reads are load-balanced among
+the disks, which can improve performance on a server, such as a file
+server, that tends to be loaded with more disk reads than writes.
+
+</para><para>
+
+Optionally you can have a spare disk in the array, which will take the
+place of a disk that fails.
+
+</para></listitem>
+</varlistentry>
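What mirroring means can be shown with a minimal sketch, assuming a simplified model in which each write is duplicated verbatim to every member (the function name is illustrative, not installer code):

```python
# Toy model of RAID1 mirroring: every write goes to all members, so the
# usable capacity equals the smallest member, yet any single member
# still holds a complete copy. raid1_write is an illustrative name.
def raid1_write(disks, offset, data):
    for disk in disks:
        disk[offset:offset + len(data)] = data


disks = [bytearray(8), bytearray(8)]
raid1_write(disks, 0, b"data")
assert disks[0] == disks[1]     # identical copies on both members
assert disks[0][:4] == b"data"  # either disk alone has the data
```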
+<varlistentry>
+
+<term>RAID5</term><listitem><para>
+
+Is a good compromise between speed, reliability and data redundancy.
+RAID5 splits all incoming data into stripes and distributes them
+equally on all but one disk (similar to RAID0). Unlike RAID0, RAID5
+also computes <firstterm>parity</firstterm> information, which gets
+written on the remaining disk. The parity disk is not static (that
+would be called RAID4), but changes periodically, so the parity
+information is distributed equally over all disks. When one of the
+disks fails, the missing part of the information can be computed from
+the remaining data and its parity. RAID5 must consist of at least three
+active partitions. Optionally you can have a spare disk in the array,
+which will take the place of a disk that fails.
+
+</para><para>
+
+As you can see, RAID5 has a degree of reliability similar to RAID1,
+while using less storage for redundancy. On the other hand, it might
+be a bit slower on write operations than RAID0 due to the computation
+of parity information.
+
+</para></listitem>
+</varlistentry>
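The parity computation can be illustrated with XOR, which is what per-stripe RAID5 parity amounts to. The sketch below (an illustrative helper, simplified to a single stripe) shows why a lost chunk is recoverable: XOR-ing the surviving chunks with the parity yields the missing one.

```python
# Toy model of RAID5 parity for a single stripe; xor_parity is an
# illustrative helper, not md driver code.
def xor_parity(chunks):
    """XOR all chunks together byte by byte."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)


data = [b"\x0f\x0f", b"\xf0\xf0", b"\x33\x33"]  # chunks on 3 data disks
p = xor_parity(data)                            # parity chunk on a 4th disk

# Simulate losing disk 1: XOR the survivors with the parity chunk.
recovered = xor_parity([data[0], data[2], p])
assert recovered == data[1]
```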
+<varlistentry>
+
+<term>RAID6</term><listitem><para>
+
+Is similar to RAID5 except that it uses two parity devices instead of
+one.
+
+</para><para>
+
+A RAID6 array can survive up to two disk failures.
+
+</para></listitem>
+</varlistentry>
+<varlistentry>
+
+<term>RAID10</term><listitem><para>
+
+RAID10 combines striping (as in RAID0) and mirroring (as in RAID1).
+It creates <replaceable>n</replaceable> copies of incoming data and
+distributes them across the partitions so that none of the copies of
+the same data are on the same device.
+The default value of <replaceable>n</replaceable> is 2, but it can be
+set to something else in expert mode. The number of partitions used
+must be at least <replaceable>n</replaceable>.
+RAID10 offers different layouts for distributing the copies. The default
+is near copies. With near copies, all copies of a chunk are at about the
+same offset on the different disks. With far copies, the copies are at
+different offsets on the disks. Offset copies duplicate whole stripes,
+rather than individual chunks.
+
+</para><para>
+
+RAID10 can be used to achieve reliability and redundancy without the
+drawback of having to calculate parity.
+
+</para></listitem>
+</varlistentry>
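The default "near" layout can be sketched as follows, assuming a simplified model (illustrative names; the real md layouts also manage offsets within each device): every chunk is written <replaceable>n</replaceable> times to consecutive devices, so no two copies of the same chunk share a device as long as there are at least <replaceable>n</replaceable> devices.

```python
# Toy model of the RAID10 "near" layout with n copies; raid10_near is
# an illustrative name, not md driver code.
def raid10_near(chunks, n_disks, n_copies=2):
    disks = [[] for _ in range(n_disks)]
    dev = 0
    for chunk in chunks:
        for _ in range(n_copies):                # write each chunk n times
            disks[dev % n_disks].append(chunk)   # to consecutive devices
            dev += 1
    return disks


layout = raid10_near(["A", "B", "C", "D"], n_disks=4)
# The near layout (n2) over 4 disks pairs the devices:
# [['A', 'C'], ['A', 'C'], ['B', 'D'], ['B', 'D']]
```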
+</variablelist>
+
+To sum it up:
+
+<informaltable>
+<tgroup cols="5">
+<thead>
+<row>
+ <entry>Type</entry>
+ <entry>Minimum Devices</entry>
+ <entry>Spare Device</entry>
+ <entry>Survives disk failure?</entry>
+ <entry>Available Space</entry>
+</row>
+</thead>
+
+<tbody>
+<row>
+ <entry>RAID0</entry>
+ <entry>2</entry>
+ <entry>no</entry>
+ <entry>no</entry>
+ <entry>Size of the smallest partition multiplied by number of devices in RAID</entry>
+</row>
+
+<row>
+ <entry>RAID1</entry>
+ <entry>2</entry>
+ <entry>optional</entry>
+ <entry>yes</entry>
+ <entry>Size of the smallest partition in RAID</entry>
+</row>
+
+<row>
+ <entry>RAID5</entry>
+ <entry>3</entry>
+ <entry>optional</entry>
+ <entry>yes</entry>
+ <entry>
+ Size of the smallest partition multiplied by (number of devices in
+ RAID minus one)
+ </entry>
+</row>
+
+<row>
+ <entry>RAID6</entry>
+ <entry>4</entry>
+ <entry>optional</entry>
+ <entry>yes</entry>
+ <entry>
+ Size of the smallest partition multiplied by (number of devices in
+ RAID minus two)
+ </entry>
+</row>
+
+<row>
+ <entry>RAID10</entry>
+ <entry>2</entry>
+ <entry>optional</entry>
+ <entry>yes</entry>
+ <entry>
+ Total of all partitions divided by the number of chunk copies (defaults to two)
+ </entry>
+</row>
+
+</tbody></tgroup></informaltable>
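The <quote>Available Space</quote> column can also be written out as formulas. A small sketch (the function name is ours, sizes given in GB):

```python
# Usable capacity per the table above; `capacity` is an illustrative
# helper, with `copies` being the RAID10 copy count (default 2).
def capacity(raid_type, sizes, copies=2):
    n, smallest = len(sizes), min(sizes)
    return {
        "RAID0": smallest * n,         # striped: all space, no redundancy
        "RAID1": smallest,             # mirrored: one disk's worth
        "RAID5": smallest * (n - 1),   # one disk's worth of parity
        "RAID6": smallest * (n - 2),   # two disks' worth of parity
        "RAID10": sum(sizes) // copies,
    }[raid_type]


assert capacity("RAID0", [100, 100, 100]) == 300
assert capacity("RAID5", [100, 100, 100]) == 200
```

These formulas match the worked example later in this section: three 100 GB partitions give a 300 GB RAID0, while RAID1 over 100 GB partitions yields 100 GB of usable space.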
+
+</para><para>
+
+If you want to know more about Software RAID, have a look
+at the <ulink url="&url-software-raid-howto;">Software RAID HOWTO</ulink>.
+
+</para><para>
+
+To create an MD device, the partitions it is to consist of must first
+be marked for use in a RAID. (This is done in
+<command>partman</command> in the <guimenu>Partition
+settings</guimenu> menu, where you should select <menuchoice>
+<guimenu>Use as:</guimenu> <guimenuitem>physical volume for
+RAID</guimenuitem> </menuchoice>.)
+
+</para><note><para>
+
+Make sure that the system can be booted with the partitioning scheme
+you are planning. In general it will be necessary to create a separate
+file system for <filename>/boot</filename> when using RAID for the root
+(<filename>/</filename>) file system.
+Most boot loaders <phrase arch="x86">(including lilo and grub)</phrase>
+do support mirrored (not striped!) RAID1, so using, for example, RAID5
+for <filename>/</filename> and RAID1 for <filename>/boot</filename> can
+be an option.
+
+</para></note><para>
+
+Next, you should choose <guimenuitem>Configure software
+RAID</guimenuitem> from the main <command>partman</command> menu.
+(The menu will only appear after you mark at least one partition for
+use as <guimenuitem>physical volume for RAID</guimenuitem>.)
+On the first screen of <command>partman-md</command> simply select
+<guimenuitem>Create MD device</guimenuitem>. You will be presented with
+a list of supported types of MD devices, from which you should choose
+one (e.g. RAID1). What follows depends on the type of MD you selected.
+</para>
+
+<itemizedlist>
+<listitem><para>
+
+RAID0 is simple &mdash; you will be shown a list of available
+RAID partitions, and your only task is to select the partitions that
+will form the MD.
+
+</para></listitem>
+<listitem><para>
+
+RAID1 is a bit more tricky. First, you will be asked to enter the
+number of active devices and the number of spare devices which will
+form the MD. Next, you need to select from the list of available RAID
+partitions those that will be active and then those that will be
+spare. The number of selected partitions must equal the number
+provided earlier. Don't worry: if you make a mistake and
+select a different number of partitions, &d-i; won't let you
+continue until you correct the issue.
+
+</para></listitem>
+<listitem><para>
+
+RAID5 has a setup procedure similar to RAID1 with the exception that you
+need to use at least <emphasis>three</emphasis> active partitions.
+
+</para></listitem>
+<listitem><para>
+
+RAID6 also has a setup procedure similar to RAID1 except that at least
+<emphasis>four</emphasis> active partitions are required.
+
+</para></listitem>
+<listitem><para>
+
+RAID10 again has a setup procedure similar to RAID1 except in expert
+mode. In expert mode, &d-i; will ask you for the layout.
+The layout has two parts. The first part is the layout type. It is either
+<literal>n</literal> (for near copies), <literal>f</literal> (for far
+copies), or <literal>o</literal> (for offset copies). The second part is
+the number of copies to make of the data. There must be at least that
+many active devices so that all of the copies can be distributed onto
+different disks.
+
+</para></listitem>
+</itemizedlist>
+
+<para>
+
+It is perfectly possible to have several types of MD at once. For
+example, if you have three 200 GB hard drives dedicated to MD, each
+containing two 100 GB partitions, you can combine the first partitions on
+all three disks into a RAID0 device (a fast 300 GB partition for video
+editing) and use the other three partitions (2 active and 1 spare) for
+RAID1 (a quite reliable 100 GB partition for <filename>/home</filename>).
+
+</para><para>
+
+After you have set up the MD devices to your liking, you can select
+<guimenuitem>Finish</guimenuitem> in <command>partman-md</command> to
+return to <command>partman</command>, create file systems on your
+new MD devices, and assign them the usual attributes like mount points.
+
+</para>
+ </sect3>