
Bacula 1.32 User's Guide
Chapter 5.1
Configuring the Director
Of all the configuration files needed to run Bacula, the
Director's is the most complicated, and the one that you
will need to modify the most often as you add clients or modify
the FileSets.
For a general discussion of configuration files and resources,
including the data types recognized by Bacula, please
see the Configuration chapter of this
manual.
Director Resource Types
A Director resource type may be one of the following:
Job, Client, Storage, Catalog, Schedule, FileSet, Pool, Director,
or Messages.
We present them here in the most logical order
for defining them:
- Director -- to
define the Director's name and its access password used for
authenticating the Console program. Only a single
Director resource definition may appear in the Director's
configuration file.
- Job -- to define the backup/restore Jobs
and to tie together the Client, FileSet and Schedule resources to
be used for each Job.
- Schedule -- to define when a Job is to
be automatically run by Bacula's internal scheduler.
- FileSet -- to define the set of files
to be backed up for each Client.
- Client -- to define what Client is
to be backed up.
- Storage -- to define on what
physical device the Volumes should be mounted.
- Pool -- to define
the pool of Volumes that can be used for a particular Job.
- Catalog -- to define in what database to
keep the list of files and the Volume names where they are backed up.
- Messages -- to define where error
and information messages are to be sent or logged.
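Laid out in the order above, the resources form a bacula-dir.conf skeleton roughly like the following (a sketch only; every resource also takes the additional records described in the sections below):

```
Director  { Name = ... }    # one and only one
Job       { Name = ... }    # one per backup/restore Job
Schedule  { Name = ... }    # when Jobs run
FileSet   { Name = ... }    # which files to back up
Client    { Name = ... }    # which machine to back up
Storage   { Name = ... }    # where Volumes are mounted
Pool      { Name = ... }    # which Volumes a Job may use
Catalog   { Name = ... }    # which database to use
Messages  { Name = ... }    # where messages go
```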
The Director Resource
The Director resource defines the attributes of the Directors running on
the network. In the current implementation, there is only a single Director
resource, but the final design will contain multiple Directors to maintain
index and media database redundancy.
- Director
- Start of the Director records. One and only one
Director resource must be supplied.
- Name = <name>
- The director name used by the system
administrator. This record is required.
- Description = <text>
- The text field contains a
description of the Director that will be displayed in the
graphical user interface. This record is optional.
- Password = <UA-password>
- Specifies the password that
must be supplied for a Bacula Console to be authorized. The same
password must appear in the Director resource of the Console
configuration file. For added security, the password is never
actually passed across the network but rather a challenge response
hash code created with the password. This record is required.
- Messages = <Messages-resource-name>
- The messages resource
specifies where to deliver Director messages that are not associated
with a specific Job. Most messages are specific to a job and will
be directed to the Messages resource specified by the job. However,
there are a few messages that can occur when no job is running.
This record is required.
- Working Directory = <Directory>
- This directive
is mandatory and specifies a directory in which the Director
may put its status files. This directory should be used only
by Bacula but may be shared by other Bacula daemons.
Standard shell expansion of the Directory
is done when the configuration file is read so that values such
as $HOME will be properly expanded. This record is required.
- Pid Directory = <Directory>
- This directive
is mandatory and specifies a directory in which the Director
may put its process Id file. The process Id file is used to
shut down Bacula and to prevent multiple copies of
Bacula from running simultaneously.
Standard shell expansion of the Directory
is done when the configuration file is read so that values such
as $HOME will be properly expanded.
Typically on Linux systems, you will set this to:
/var/run. If you are not installing Bacula in the
system directories, you can use the Working Directory as
defined above.
This record is required.
- QueryFile = <Path>
- This directive
is mandatory and specifies a directory and file in which the Director
can find the canned SQL statements for the Query command of
the Console. Standard shell expansion of the Path is done
when the configuration file is read so that values such as
$HOME will be properly expanded. This record is required.
- Maximum Concurrent Jobs = <number>
- where <number>
is the maximum number of total Director Jobs that should run concurrently. The
default is set to 1, but you may set it to a larger number. Note
however, at this time (Bacula version 1.30), multiple simultaneous
jobs have not been heavily tested.
Because this feature is not yet well tested, we recommend that you
either set it to 1 or make careful tests to ensure that everything
you want works, and at a minimum keep all your Storage maximum
simultaneous job limits to 1 (as discussed below).
The Volume format becomes much more complicated with
multiple simultaneous jobs, and not all the utility programs (e.g.
bextract, ...) have been properly updated to deal with more
than one Job at a time on the same Volume. BE WARNED!
At the current time, there is no configuration parameter to
set or limit the number of console connections. A maximum of five
simultaneous console connections is permitted.
Note that Maximum Concurrent Jobs is implemented in the
Director, Job, Client, and Storage resources. Each one is independent
of the others, and all limits from each of those resources must
be met before a Job can run. There should be no problems increasing
the Maximum Concurrent Jobs in the Director, Client, and
Job resources. However, we strongly recommend that you always
set Maximum Concurrent Jobs = 1 in each Storage
definition. This will ensure that only one job is writing to any
single Volume at one time. By setting the Storage Maximum
Concurrent Jobs to one, and the Director's limit greater than
one, you can safely run multiple simultaneous jobs with each writing to a
different Volume providing you have multiple Storage definitions --
that is either multiple tape drives, or you are writing to separate
file Volumes.
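The recommended pattern above can be sketched as follows (the resource names are illustrative, and the other required records are omitted):

```
Director {
  Name = HeadMan
  Maximum Concurrent Jobs = 3   # the Director may run several jobs at once
  ...
}
Storage {
  Name = DLTDrive
  Maximum Concurrent Jobs = 1   # but only one job writes to this device
  ...
}
```

With two such Storage definitions (for example, a second tape drive or a file device), two jobs can run simultaneously, each writing to a different Volume.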
- FD Connect Timeout = <time>
- where time
is the time that the Director should continue attempting
to contact the File daemon to start a job, and after which the
Director will cancel the job. The default is 30 minutes.
- SD Connect Timeout = <time>
- where time
is the time that the Director should continue attempting
to contact the Storage daemon to start a job, and after which the
Director will cancel the job. The default is 30 minutes.
- DIRport = <port-number>
- Specify the port (a positive
integer) on which the
Director daemon will listen for Bacula Console connections.
This same port number must be specified in the Director resource
of the Console configuration file. The default is 9101, so
normally this record need not be specified.
- DirAddress = <IP-Address>
- This record is optional,
and if it is specified, it will cause the Director server (for
the Console program) to bind to the specified IP-Address,
which is either a domain name or an IP address specified as a
dotted quadruple in string or quoted string format.
If this record is not specified, the Director
will bind to any available address (the default).
The following is an example of a valid Director resource definition:
Director {
  Name = HeadMan
  WorkingDirectory = "$HOME/bacula/bin/working"
  Password = UA_password
  PidDirectory = "$HOME/bacula/bin/working"
  QueryFile = "$HOME/bacula/bin/query.sql"
  Messages = Standard
}
The Job Resource
The Job resource defines a Job (Backup, Restore, ...) that Bacula must
perform. Each Job resource definition contains the names
of the Clients and their FileSets to backup or restore,
the Schedule for the Job, where the data are to be stored,
and what media Pool can be used. In effect, each Job resource
must specify What, Where, How, and When or FileSet, Storage,
Backup/Restore/Level, and Schedule respectively.
Only a single type (Backup, Restore, ...) can be
specified for any job. If you want to backup multiple FileSets on
the same Client or multiple Clients, you must define a Job for
each one.
- Job
- Start of the Job records. At least one Job
resource is required.
- Name = <name>
- The Job name. This name can be specified
on the Run command in the console program to start a job. If the
name contains spaces, it must be specified between quotes. It is
generally a good idea to give your job the same name as the Client
that it will backup. This permits easy identification of jobs.
When the job actually runs, the unique Job Name will consist
of the name you specify here followed by the date and time the
job was scheduled for execution. This record is required.
- Type = <job-type>
- The Type record specifies
the Job type, which may be one of the following: Backup,
Restore, Verify, or Admin. This record
is required.
- Backup
- Run a backup Job. Normally you will
have at least one Backup job for each client you want
to save. Normally, unless you turn off cataloging,
most of the important statistics and data concerning
files backed up will be placed in the catalog.
- Restore
- Run a restore Job. Normally, you will
specify only one Restore job which acts as a sort
of prototype that you will modify using the console
program in order to perform restores. Although certain
basic information from a Restore job is saved in the
catalog, it is very minimal compared to the information
stored for a Backup job -- for example, no File records
are generated since no Files are saved.
- Verify
- Run a verify Job. In general, verify
jobs permit you to compare the contents of the catalog
to the file system, or to what was backed up. In addition
to verifying that a tape that was written can be read,
you can also use verify as a sort of tripwire
intrusion detection.
- Admin
- Run an Admin Job. An Admin job can
be used to periodically run catalog pruning, if you
do not want to do it at the end of each Backup
Job. Although an Admin job is recorded in the
catalog, very little data is saved.
- Level = <job-level>
- The Level record specifies
the default Job level to be run. The Level is normally overridden
by a different value that is specified in the Schedule
resource. This record is not required, but must be specified either
by a Level record or as an override specified in the
Schedule resource.
For a Backup Job, the Level may be one of the
following:
- Full
- is all files in the FileSet whether or not they
have changed.
- Incremental
- is all files that have changed since the
last successful backup of the specified FileSet.
If the Director cannot find a previous Full, Differential,
or Incremental backup, then the job will be
upgraded into a Full backup. When the Director looks for
a "suitable" backup record in the catalog
database, it looks for a previous Job with:
- The same Job name.
- The same Client name.
- The same FileSet (any change to the definition of
the FileSet, such as adding or deleting a file in the
Include or Exclude sections, constitutes a different FileSet).
- The Job was a Full, Differential, or Incremental backup.
- The Job terminated normally (i.e. did not fail or was not
canceled).
If all the above conditions do not hold, the Director will upgrade
the Incremental to a Full save. Otherwise, the Incremental
backup will be performed as requested.
The File daemon (Client) decides which files to back up for an
Incremental backup by comparing the start time of the prior Job
(Full, Differential, or Incremental) against
the time each file was last "modified" (st_mtime) and
the time it was last "changed" (st_ctime). If
the file was modified or changed after this start time,
it will then be backed up. You must ensure that the clock
on the client is the same as the one on the Director's machine.
If the times are not synchronized (or close), some files
that have been changed may not be backed up.
- Differential
- is all files that have changed since the
last successful Full backup of the specified FileSet.
If the Director cannot find a suitable previous
Full backup, then the Differential job will be
upgraded into a Full backup. When the Director looks for
a "suitable" Full backup record in the catalog
database, it looks for a previous Job with:
- The same Job name.
- The same Client name.
- The same FileSet (any change to the definition of
the FileSet, such as adding or deleting a file in the
Include or Exclude sections, constitutes a different FileSet).
- The Job was a Full backup.
- The Job terminated normally (i.e. did not fail or was not
canceled).
If all the above conditions do not hold, the Director will
upgrade the Differential to a Full save. Otherwise, the
Differential backup will be performed as requested.
The File daemon (Client) decides which files to back up for a
Differential backup by comparing the start time of the prior
Full backup Job against the time each file was last
"modified" (st_mtime) and the time it was last
"changed" (st_ctime). If the file was modified or
changed after this start time, it will then be backed up. The
start time used is displayed after the Since on the Job
report. In rare cases, using the start time of the prior
backup may cause some files to be backed up twice, but it
ensures that no change is missed. As with the Incremental
option, you must ensure that the clocks on your server and
client are synchronized or as close as possible to avoid
the possibility of a file being skipped. For more details,
please see the discussion under the Incremental option
above.
For a Restore Job, no level need be specified.
For a Verify Job, the Level may be one of the
following:
- InitCatalog
- does a scan of the specified FileSet
and stores the file attributes in the Catalog database.
Since no file data is saved, you might ask why you would want to
do this. It turns out to be a very simple and easy way to have
a Tripwire-like feature using Bacula. In other
words, it allows you to save the state of a set of files defined
by the FileSet and later check to see if those files have
been modified or deleted and if any new files have been added.
This can be used to detect system intrusion. Typically you
would specify a FileSet that contains the set of system
files that should not change (e.g. /sbin, /boot, /lib, /bin,
...). Normally, you run the InitCatalog level verify one
time when your system is first set up, and then once again after
each modification (upgrade) to your system. Thereafter, when
you want to check the state of your system files, you use
a Verify level = Catalog. This compares the results of
your InitCatalog with the current state of the files.
- Catalog
- Compares the current state of the files against
the state previously saved during an InitCatalog. Any
discrepancies are reported. The items reported are determined
by the verify options specified on the Include
directive in the specified FileSet (see the
FileSet resource below for more details). Typically this
command will be run once a day (or night) to check for any
changes to your system files.
Please note! If you run two Verify Catalog jobs on
the same client at the same time, the results will
certainly be incorrect. This is because Verify Catalog
modifies the Catalog database while running in order to track new
files.
- VolumeToCatalog
- This level causes Bacula to read
the file attribute data written to the Volume from the last Job.
The file attribute data are compared to the values saved in the
Catalog database and any differences are reported. This is
similar to the Catalog level except that instead of
comparing the disk file attributes to the catalog database, the
attribute data written to the Volume is read and compared to the
catalog database. Although the attribute data, including the
signatures (MD5 or SHA1), are compared, the actual file data is not
compared (it is not in the catalog).
Please note! If you
run two Verify VolumeToCatalog jobs on the same client at the
same time, the results will certainly be incorrect. This is
because the Verify VolumeToCatalog modifies the Catalog database
while running.
- DiskToCatalog
- This level causes Bacula to read the
files as they currently are on disk, and to compare the
current file attributes with the attributes saved in the
catalog from the last backup for the job specified on
the VerifyJob record. This level differs from the
Catalog level described above in that it
compares not against a previous Verify job but against a
previous backup. When you run this level, you must supply the
verify options on your Include statements. Those options
determine what attribute fields are compared.
This command can be very useful if you have disk problems
because it will compare the current state of your disk against
the last successful backup, which may be several jobs old.
Note, the current implementation (1.32c) does not
identify files that have been deleted.
- Verify Job = <Job-Resource-Name>
- If you run
a verify job without this record, the last job run will
be compared with the catalog, which means that you must
immediately follow a backup by a verify command. If you
specify a Verify Job, Bacula will find the last
job with that name that ran. This permits you to run
all your backups, then run Verify jobs on those that
you wish to be verified (most often a VolumeToCatalog,
so that the tape just written is re-read).
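For example, a Verify job that re-reads the tape written by a backup Job named Minou might be sketched as follows (the names are taken from the examples in this chapter and are purely illustrative):

```
Job {
  Name = "VerifyMinou"
  Type = Verify
  Level = VolumeToCatalog
  Verify Job = "Minou"
  Client = Minou
  FileSet = "Minou Full Set"
  Storage = DLTDrive
  Pool = Default
  Messages = Standard
}
```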
- Bootstrap = <bootstrap-file>
- The Bootstrap
record specifies a bootstrap file that, if provided, will
be used during Restore Jobs and is ignored in other
Job types. The bootstrap
file contains the list of tapes to be used in a restore
Job as well as which files are to be restored. Specification
of this record is optional, and
if specified, it is used only for a restore job. In addition,
when running a Restore job from the Console, this value can
be changed.
If you use the Restore command in the Console program
to start a restore job, the bootstrap
file will be created automatically from the files you
select to be restored.
For additional details of the bootstrap file, please see
Restoring Files with the Bootstrap File
chapter of this manual.
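A Restore job using a bootstrap file might be sketched as follows (the path and the resource names are hypothetical):

```
Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = Minou
  FileSet = "Minou Full Set"
  Storage = DLTDrive
  Pool = Default
  Messages = Standard
  Where = /tmp/bacula-restores
  Bootstrap = "/home/bacula/working/restore.bsr"
}
```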
- Write Bootstrap =
<bootstrap-file-specification>
- The
writebootstrap record specifies a file name where
Bacula will write a bootstrap file for each Backup job
run. Thus this record applies only to Backup Jobs. If the Backup
job is a Full save, Bacula will erase any current contents of
the specified file before writing the bootstrap records. If the Job
is an Incremental save, Bacula will append the current
bootstrap record to the end of the file.
Using this feature
permits you to constantly have a bootstrap file that can recover the
current state of your system. Normally, the file specified should
be a mounted drive on another machine, so that if your hard disk is
lost, you will immediately have a bootstrap record available. If
the bootstrap-file-specification begins with a vertical bar
(|), Bacula will use the specification as the name of a program to
which it will pipe the bootstrap record. It could for example be a
shell script that emails you the bootstrap record. For more
details on using this file, please see the chapter entitled The Bootstrap File of this manual.
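Two hedged examples of the specification: the first writes the records to a file (ideally a mount from another machine), and the second pipes them to a program (the mount point and mail invocation are illustrative):

```
Write Bootstrap = "/mnt/backup-server/minou.bsr"
Write Bootstrap = "|mail -s bootstrap root@localhost"
```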
- Client = <client-resource-name>
- The Client record
specifies the Client (File daemon) that will be used in the
current Job. Only a single Client may be specified in any one Job.
The Client runs on the machine to be backed up,
and sends the requested files to the Storage daemon for backup,
or receives them when restoring. For additional details, see the
Client Resource
section of this chapter. This record is required.
- FileSet = <FileSet-resource-name>
- The FileSet record
specifies the FileSet that will be used in the
current Job. The FileSet specifies which directories (or files)
are to be backed up, and what options to use (e.g. compression, ...).
Only a single FileSet resource may be specified in any one Job.
For additional details, see the
FileSet Resource
section of this chapter. This record is required.
- Messages = <messages-resource-name>
- The Messages record
defines what Messages resource should be used for this job, and thus
how and where the various messages are to be delivered. For example,
you can direct some messages to a log file, and others can be
sent by email. For additional details, see the
Messages Resource Chapter of this
manual. This record is required.
- Pool = <pool-resource-name>
- The Pool record defines
the pool of Volumes where your data can be backed up. Many Bacula
installations will use only the Default pool. However, if
you want to specify a different set of Volumes for different
Clients or different Jobs, you will probably want to use Pools.
For additional details, see the Pool
Resource section of this
chapter. This resource is required.
- Schedule = <schedule-name>
- The Schedule record defines
what schedule is to be used for the Job. The schedule determines
when the Job will be automatically started and what Job level
(i.e. Full, Incremental, ...) is to be run.
For additional details, see the Schedule Resource
Chapter of this manual.
If a Schedule resource is
specified, the job will be run according to the schedule specified.
If no Schedule resource is specified for the Job, the job must
be manually started using the Console program. Although you may
specify only a single Schedule resource for any one job, the Schedule
resource may contain multiple run records, which allow you
to run the Job at many different times, and each run record
permits overriding the default Job Level, Pool, Storage,
and Messages
resources. This gives considerable flexibility in what can be done
with a single Job.
- Storage = <storage-resource-name>
- The Storage record
defines the name of the storage services where you want to backup
the FileSet data. For additional details, see the Storage Resource Chapter of this manual.
This record is required.
- Max Start Delay = <time>
- The time specifies the
maximum delay between the scheduled time and the actual start
time for the Job. For example, a job can be scheduled to run
at 1:00am, but because other jobs are running, it may wait
to run. If the delay is set to 3600 (one hour) and the job
has not begun to run by 2:00am, the job will be canceled.
This can be useful, for example, to prevent jobs from running
during day time hours. The default is 0 which indicates
no limit.
- Prune Jobs = <yes/no>
- Normally, pruning of Jobs
from the Catalog is specified on a Client by Client basis in the
Client resource with the AutoPrune record. If this
record is specified (not normally) and the value is yes, it
will override the value specified in the Client resource.
The default is no.
- Prune Files = <yes/no>
- Normally, pruning of Files
from the Catalog is specified on a Client by Client basis in the
Client resource with the AutoPrune record. If this
record is specified (not normally) and the value is yes, it
will override the value specified in the Client resource.
The default is no.
- Prune Volumes = <yes/no>
- Normally, pruning of Volumes
from the Catalog is specified on a Client by Client basis in the
Client resource with the AutoPrune record. If this
record is specified (not normally) and the value is yes, it
will override the value specified in the Client resource.
The default is no.
- Run Before Job = <command>
- The specified command
is run as an external program prior to running the current Job. Any
output sent by the job to standard output will be included in the
Bacula job report. The command string must be a valid program name
or the name of a shell script. This record is not required, but if it
is defined and the exit code of the program run is non-zero, the current
Bacula job will be canceled.
Before submitting the specified command to the operating system,
Bacula performs character substitution of the following
characters:
%% = %
%c = Client's name
%d = Director's name
%i = JobId
%e = Job Exit Status
%j = Unique Job name
%l = Job Level
%n = Job name
%t = Job type
As of version 1.30, Bacula checks the exit status of the RunBeforeJob
program. If it is non-zero, the job will be error terminated.
Lutz Kittler has pointed out that this can be a simple way to modify
your schedules during a holiday. For example, suppose that you normally
do Full backups on Fridays, but Thursday and Friday are holidays. To avoid
having to change tapes between Thursday and Friday when no one is in the
office, you can create a RunBeforeJob that returns a non-zero status on
Thursday and zero on all other days. That way, the Thursday job will not
run, and on Friday the tape you insert on Wednesday before leaving will
be used.
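A minimal sketch of such a RunBeforeJob program, written here as a shell function so the day can be supplied explicitly for testing (the function name is hypothetical; in practice you would install this as a small script and name that script on the Run Before Job line):

```shell
# Return non-zero on Thursday, so that Bacula cancels the scheduled
# job, and zero on every other day. The abbreviated day name may be
# passed as the first argument; it defaults to today's (date +%a).
skip_on_thursday() {
  day=${1:-$(date +%a)}
  if [ "$day" = "Thu" ]; then
    return 1    # non-zero exit: Bacula cancels the job
  fi
  return 0
}
```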
- Run After Job = <command>
- The specified command
is run as an external program after the current job terminates.
This record is not required. The
command string must be a valid program name or name of a shell script.
If the exit code of the program run is non-zero, the current
Bacula job will terminate in error.
Before submitting the specified command to the operating system,
Bacula performs character substitution as described above
for the Run Before Job record.
An example of the use of this command is given in the
Tips Chapter of this manual.
As of version 1.30, Bacula checks the exit status of the RunAfter
program. If it is non-zero, the job will be terminated in error.
- Client Run Before Job = <command>
- This command
is the same as Run Before Job except that it is
run on the client machine. Note, this probably will not
work with Windows clients.
- Client Run After Job = <command>
- This command
is the same as Run After Job except that it is
run on the client machine. Note, this probably will not
work with Windows clients.
- Spool Attributes = <yes/no>
- The default is set to
no, which means that the File attributes are sent by the
Storage daemon to the Director as they are stored on tape. However,
if you want to avoid the possibility that database updates will
slow down writing to the tape, you may want to set the value to
yes, in which case the Storage daemon will buffer the
File attributes and Storage coordinates to a temporary file
in the Working Directory, then when writing the Job data to the tape is
completed, the attributes and storage coordinates will be
sent to the Director.
- Where = <directory>
- This record applies only
to a Restore job and specifies a prefix to the directory name
of all files being restored. This permits files to be restored
in a different location from the one in which they were saved. If Where
is not specified or is set to slash (/), the files
will be restored to their original location. By default, we
have set Where in the example configuration files to be
/tmp/bacula-restores. This is to prevent accidental overwriting
of your files.
- Replace = <replace-option>
- This record applies only
to a Restore job and specifies what happens when Bacula wants to
restore a file or directory that already exists. You have the
following options for replace-option:
- always
- when the file to be restored already exists,
it is deleted then replaced by the copy backed up.
- ifnewer
- if the backed up file (on tape) is newer than the
existing file, the existing file is deleted and replaced by
the back up.
- ifolder
- if the backed up file (on tape) is older than the
existing file, the existing file is deleted and replaced by
the back up.
- never
- if the backed up file already exists, Bacula skips
restoration of this file.
- Prefix Links = <yes/no>
- If a Where path prefix is specified for a recovery job, apply
it to absolute links as well. The default is no. When set to
yes during restoration of files to an alternate directory, any
absolute soft links will also be modified to point to the new
alternate directory. Normally this is what is desired -- i.e.
everything is self-consistent. However, if you wish to later move
the files to their original locations, all files linked with
absolute names will be broken.
- Maximum Concurrent Jobs = <number>
- where <number>
is the maximum number of Jobs from the current Job resource that
can run concurrently. Note, this record limits only Jobs
with the same name as the resource in which it appears. Any
other restrictions on the maximum concurrent jobs, such as in
the Director, Client, or Storage resources, will also apply in
addition to the limit specified here. The
default is set to 1, but you may set it to a larger number.
We strongly recommend that you read the WARNING documented under
Maximum Concurrent Jobs in the Director's resource.
- Reschedule On Error = <yes/no>
- If this record is enabled,
and the job terminates in error, the job will be rescheduled as determined
by the Reschedule Interval and Reschedule Times records.
If you cancel the job, it will not be rescheduled. The default is
no (i.e. the job will not be rescheduled).
This specification can be useful for portables, laptops, or other
machines that are not always connected to the network or switched on.
- Reschedule Interval = <time-specification>
- If you have
specified Reschedule On Error = yes and the job terminates in
error, it will be rescheduled after the interval of time specified
by time-specification. See
the time specification formats in the Configuration chapter for
details of time specifications. If no interval is specified, the
job will not be rescheduled on error.
- Reschedule Times = <count>
- This record specifies the
maximum number of times to reschedule the job. If it is set to zero
(the default), the job will be rescheduled an indefinite number of times.
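Taken together, the three reschedule records might be used for a laptop client roughly as follows (the values are illustrative, and the other required Job records are omitted):

```
Job {
  Name = "LaptopBackup"
  ...
  Reschedule On Error = yes
  Reschedule Interval = 30 minutes
  Reschedule Times = 5
}
```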
- Priority = <number>
- This record permits you
to control the order in which your jobs run by specifying a positive
non-zero number. The higher the number, the lower the job priority.
Assuming you are not running concurrent jobs, all queued jobs of
priority 1 will run before queued jobs of priority 2, and so on,
regardless of the original scheduling order.
The priority only affects waiting jobs that are queued to run, not jobs
that are already running. If one or more jobs of priority 2 are already
running, and a new job is scheduled with priority 1, the currently
running priority 2 jobs must complete before the priority 1 job is run.
The default priority is 10.
If you want to run concurrent jobs, which is not recommended, you should
keep these points in mind:
- To run concurrent jobs,
you must set Maximum Concurrent Jobs = 2 in 5 or 6 distinct places:
in bacula-dir.conf in the Director, the Job, the Client, and the Storage
resources; in bacula-fd.conf in the FileDaemon (or Client) resource;
and in bacula-sd.conf in the Storage resource. If any one
is missing, it will throttle the jobs to one at a time.
- Bacula concurrently runs jobs of only one priority at a time. It will
not simultaneously run a priority 1 and a priority 2 job.
- If Bacula is running a priority 2 job and a new priority 1
job is scheduled, it will wait until the running priority 2 job
terminates even if the Maximum Concurrent Jobs settings
would otherwise allow two jobs to run simultaneously.
- Suppose that Bacula is running a priority 2 job and a new priority 1
job is scheduled and queued, waiting for the running priority
2 job to terminate. If you then start a second priority 2 job,
the waiting priority 1 job
will prevent the new priority 2 job from running concurrently
with the running priority 2 job.
That is: as long as there is a higher priority job waiting to
run, no new lower priority jobs will start even if
the Maximum Concurrent Jobs settings would normally allow
them to run. This ensures that higher priority jobs will
be run as soon as possible.
If you have several jobs of different priority, it is best
not to start them at exactly the same time, because Bacula
must examine them one at a time. If by chance Bacula examines
a lower priority job first, it will run before your higher
priority jobs. To avoid this, start any higher priority jobs
a few seconds before lower priority ones. This ensures that Bacula
will examine the jobs in the correct order, and that your
priority scheme will be respected.
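The five (or six) settings listed above can be sketched as follows (fragments only, with all other records omitted):

```
# bacula-dir.conf
Director { ... Maximum Concurrent Jobs = 2 }
Job      { ... Maximum Concurrent Jobs = 2 }
Client   { ... Maximum Concurrent Jobs = 2 }
Storage  { ... Maximum Concurrent Jobs = 2 }

# bacula-fd.conf
FileDaemon { ... Maximum Concurrent Jobs = 2 }

# bacula-sd.conf
Storage { ... Maximum Concurrent Jobs = 2 }
```

Remember the WARNING earlier in this chapter: unless each job writes to a separate Volume, it is safest to leave the Storage limit in bacula-dir.conf at 1.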
The following is an example of a valid Job resource definition:
Job {
Name = "Minou"
Type = Backup
Level = Incremental # default
Client = Minou
FileSet="Minou Full Set"
Storage = DLTDrive
Pool = Default
Schedule = "MinouWeeklyCycle"
Messages = Standard
}
The Schedule Resource
The Schedule resource provides a means of automatically scheduling
a Job as well as the ability to override the default Level, Pool,
Storage and Messages resources.
In general, you specify an action to be taken and when.
- Schedule
- Start of the Schedule records. No Schedule
resource is required, but you will need at least one if you want
Jobs to be automatically started.
- Name = <name>
- The name of the schedule being defined.
The name record is required.
- Run = <Job-overrides> <Date-time-specification>
- The Run record defines when a Job is to be run,
and what overrides if any to apply. You may specify multiple
run records within a Schedule resource. If you
do, they will all be applied (i.e. multiple schedules). If you
have two run records that start at the same time, two
Jobs will start at the same time (well, within one second of
time difference).
The Job-overrides permit overriding the Level, the
Storage, the Messages, and the Pool specifications
provided in the Job resource. By the use of these overrides, you
may customize a particular Job. For example, you may specify a
Messages override for your Incremental backups that
outputs messages to a log file, but for your weekly or monthly
Full backups, you may send the output by email by using
a different Messages override.
The Job-overrides are specified as:
keyword=value where the keyword is Level, Storage,
Messages, or Pool, and the value is as defined
on the respective record formats for the Job resource. You may specify
multiple Job-overrides on one Run record by separating them
with one or more spaces or with commas.
For example:
- Level=Full
- is all files in the FileSet whether or not
they have changed.
- Level=Incremental
- is all files that have changed since
the last backup.
- Pool=Weekly
- specifies to use the Pool named Weekly.
- Storage=DLT_Drive
- specifies to use DLT_Drive for
the storage device.
- Messages=Verbose
- specifies to use the Verbose
message resource for the Job.
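Several of the overrides above may be combined on a single Run record. For
example, using the override names shown above with an illustrative date-time
specification:

Run = Level=Full Pool=Weekly Storage=DLT_Drive Messages=Verbose 1st sun at 2:05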
The Date-time-specification allows you to specify when the
Job is to be run. Any specification given is assumed to be repetitive in
nature. For example, daily means every day of every month in
every year.
Basically, you must supply a month, day, hour, and
minute the Job is to be run. Of these four items to be specified,
day is special in that you may either specify a day of the month
such as 1, 2, ... 31, or you may specify a day of the week such
as Monday, Tuesday, ... Sunday. Finally, you may also specify a
week qualifier to restrict the schedule to the first, second, third,
fourth, or fifth week of the month.
The Job will be run on any day that
matches the specification (either the day of the week or the day of the month).
The default is that every hour of every day of every week of every
month is set. As you specify the parts of the time, the default for
that part of the time is cleared and the new value set. However,
the other defaults are set until their corresponding part is set.
For example, if you specify only a day of the week, such as Tuesday
the Job will be run every hour of every Tuesday of every Month. That
is the month and hour remain set to the defaults of
every month and all hours.
The following special keywords specify multiple parts of the
time (e.g. day and hour), and in specifying them
none of the other defaults are cleared:
Keyword      Meaning
===========  ======================================
Hourly       Every hour of every day of every month
Weekly       Every Sunday of every month
Daily        Every day of every month
Monthly      Every first day of every month
All the other keywords shown below specify only a single part of the time,
and specifying them will clear all the defaults, which means that
you must then specify all parts of the time:
The date/time to run the Job can be specified in the following way
in pseudo-BNF:
<void-keyword> = on
<at-keyword> = at
<week-keyword> = 1st | 2nd | 3rd | 4th | 5th | first |
second | third | fourth | fifth
<wday-keyword> = sun | mon | tue | wed | thu | fri | sat |
sunday | monday | tuesday | wednesday |
thursday | friday | saturday
<month-keyword> = jan | feb | mar | apr | may | jun | jul |
aug | sep | oct | nov | dec | january |
february | ... | december
<daily-keyword> = daily
<weekly-keyword> = weekly
<monthly-keyword> = monthly
<hourly-keyword> = hourly
<digit> = 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 0
<number> = <digit> | <digit><number>
<12hour> = 0 | 1 | 2 | ... 12
<hour> = 0 | 1 | 2 | ... 23
<minute> = 0 | 1 | 2 | ... 59
<day> = 1 | 2 | ... 31
<time> = <hour>:<minute> |
<12hour>:<minute>am |
<12hour>:<minute>pm
<time-spec> = <at-keyword> <time> |
<hourly-keyword>
<date-keyword> = <void-keyword> <weekly-keyword>
<day-range> = <day>-<day>
<month-range> = <month-keyword>-<month-keyword>
<wday-range> = <wday-keyword>-<wday-keyword>
<range> = <day-range> | <month-range> |
<wday-range>
<date> = <date-keyword> | <day> | <range>
<date-spec> = <date> | <date-spec>
<day-spec> = <day> | <wday-keyword> |
<day-range> | <wday-range> |
<week-keyword> <wday-keyword> |
<daily-keyword>
<month-spec> = <month-keyword> | <month-range> |
<monthly-keyword>
<date-time-spec> = <month-spec> <day-spec> <time-spec>
An example schedule resource that is named WeeklyCycle and runs a
job with level full each Sunday at 1:05am and an incremental job Monday
through Saturday at 1:05am is:
Schedule {
Name = "WeeklyCycle"
Run = Level=Full sun at 1:05
Run = Level=Incremental mon-sat at 1:05
}
An example of a possible monthly cycle is as follows:
Schedule {
Name = "MonthlyCycle"
Run = Level=Full Pool=Monthly 1st sun at 1:05
Run = Level=Differential 2nd-5th sun at 1:05
Run = Level=Incremental Pool=Daily mon-sat at 1:05
}
The FileSet Resource
The FileSet resource defines what files are to be included in a backup
job. At least one FileSet resource is required.
It consists of a list of files or directories to be included, a
list of files or directories to be excluded and the various backup
options such as compression, encryption, and signatures that are to be
applied to each file.
Any change to the list of the included files will cause Bacula
to automatically create a new FileSet (defined by the name and an
MD5 checksum of the Include contents). Each time a new FileSet is
created, Bacula will ensure that the first backup is always a
Full save.
- FileSet
- Start of the FileSet records. At least one FileSet
resource must be defined.
- Name = <name>
- The name of the FileSet resource.
This record is required.
- Include = <processing-options>
{ <file-list> }
The Include resource specifies the list of files and/or directories to
be included in the backup job. There can be any number of Include
file-list specifications within the FileSet, each having its own
set of processing-options. Normally, the file-list
consists of one file or directory name per line. Directory names should
be specified without a trailing slash. Wild-card (or glob matching)
can be specified. As a consequence, any asterisk (*), question mark (?),
or left-bracket ([) must be preceded by a backslash (\) if you want it
to represent the literal character.
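For example, to match a literal asterisk rather than a glob character
(the path here is illustrative; see the discussion of quoting further
below for names requiring double-quotes):

Include = signature=MD5 {
  /home/user/report\*.txt
}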
You should always specify
a full path for every directory and file that you list in the FileSet.
In addition, on Windows machines, you should always prefix the
directory or filename with the drive specification (e.g. c:/xxx)
except within an Exclude where for some reason the exclude
will not work with a prefixed drive letter.
Bacula's default for processing directories is to recursively descend
in the directory saving all files and subdirectories. Bacula will not
by default cross file systems (or mount points in Unix parlance). This
means that if you specify the root partition (e.g. /), Bacula will
save only the root partition and not any of the other mounted
file systems. Similarly on Windows systems, you must explicitly
specify each of the drives you want saved (e.g. c:/ and d:/ ...).
In addition, at least for Windows systems, you will most likely want
to enclose each specification within double quotes.
The df command on Unix systems will show you which
mount points you must specify to save everything. See below for
an example.
The <processing-options> is
optional. If specified, it is a list of keyword=value
options to be applied to the file-list. Multiple options may be
specified by separating them with spaces.
These options are used to
modify the default processing behavior of the files included. Since
there can be multiple Include sets, this permits effectively
specifying the desired options (compression, encryption, ...) on a
file by file basis. The options may be one of the following:
- compression=GZIP
- All files saved will be software
compressed using the GNU ZIP compression format. The
compression is done on a file by file basis by the File daemon.
If there is a problem reading the tape in a
single record of a file, it will at most affect that file and none
of the other files on the tape. Normally this option is not needed
if you have a modern tape drive as the drive will do its own
compression. However, compression is very important if you are writing
your Volumes to a file, and it can also be helpful if you have a
fast computer but a slow network.
Specifying GZIP uses the default compression level six
(i.e. GZIP is identical to GZIP6). If you
want a different compression level (1 through 9), you can specify
it by appending the level number with no intervening spaces
to GZIP. Thus compression=GZIP1 would give minimum
compression but the fastest algorithm, and compression=GZIP9
would give the highest level of compression, but requires more
computation. According to the GZIP documentation, compression levels
greater than 6 generally give very little extra compression but are
rather CPU intensive.
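For example, to request maximum compression, which may be worthwhile when
writing Volumes to a file (the path is illustrative):

Include = compression=GZIP9 signature=MD5 {
  /home
}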
- signature=MD5
- An MD5 signature will be computed for all
files saved. Adding this option generates about 5% extra overhead
for each file saved. We strongly recommend that this option
be specified as a default for all files.
- signature=SHA1
- An SHA1 signature will be computed for all
files saved. Adding this option generates a small amount of extra overhead
for each file saved. The SHA1 algorithm is purported to be somewhat
slower than the MD5 algorithm, but at the same time is
significantly better from a cryptographic point of view (i.e.
much fewer collisions, much lower probability of being hacked.)
We strongly recommend that either this option
or MD5 be specified as a default for all files. Note, only
one of the two options MD5 or SHA1 can be computed for any
file.
- *encryption=<algorithm>
- All files saved will be
encrypted using one of the following algorithms (NOT YET IMPLEMENTED):
- *Blowfish
- *3DES
- verify=<options>
- The options letters specified are used
when running a Verify Level=Catalog job, and may be any
combination of the following:
- i
- compare the inodes
- p
- compare the permission bits
- n
- compare the number of links
- u
- compare the user id
- g
- compare the group id
- s
- compare the size
- a
- compare the access time
- m
- compare the modification time (st_mtime)
- c
- compare the change time (st_ctime)
- d
- report file size decreases
- 5
- compare the MD5 signature
- 1
- compare the SHA1 signature
- A useful set of general options on the Level=Catalog
verify is pins5 i.e. compare permission bits, inodes, number
of links, size, and MD5 changes.
- onefs=yes/no
- If set to yes (the default), Bacula
will remain on a single file system. That is it will not backup
file systems that are mounted on a subdirectory.
In this case, you must explicitly list each file system you want saved.
If you set this option to no, Bacula will backup
all mounted file systems (i.e. traverse mount points) that
are found within the FileSet. Thus if
you have NFS or Samba file systems mounted on a directory included
in your FileSet, they will also be backed up. Normally, it is
preferable to set onefs=yes and to explicitly name
each file system you want backed up.
See the example below for more details.
- portable=yes/no
- If set to yes (default is
no), the Bacula File daemon will backup Win32 files
in a portable format. By default, this option is set to
no, which means that on Win32 systems, the data will
be backed up using Windows API calls and on WinNT/2K/XP,
the security and ownership data will be properly backed up
(and restored), but the data format is not portable to other
systems -- e.g. Unix, Win95/98/Me. On Unix systems, this
option is ignored, and unless you have a specific need to
have portable backups, we recommend accepting the default
(no) so that the maximum information concerning
your files is backed up.
- recurse=yes/no
- If set to yes (the default),
Bacula will recurse (or descend) into all subdirectories
found unless the directory is explicitly excluded
using an exclude definition.
If you set
recurse=no, Bacula will save the subdirectory entries,
but not descend into the subdirectories, and thus
will not save the contents of the subdirectories. Normally, you
will want the default (yes).
- sparse=yes/no
- Enable special code that checks for sparse files
such as those created by ndbm. The default is no, so no checks
are made for sparse files. You may specify sparse=yes even
on files that are not sparse files. No harm will be done, but there
will be a small additional overhead to check for buffers of
all zero, and a small additional amount of space on the output
archive will be used to save the seek address of each non-zero
record read.
Restrictions: Bacula reads files in 32K buffers.
If the whole buffer is zero, it will be treated as a sparse
block and not written to tape. However, if any part of the buffer
is non-zero, the whole buffer will be written to tape, possibly
including some disk sectors (generally 4096 bytes) that are all
zero. As a consequence, Bacula's detection of sparse blocks is in
32K increments rather than the system block size. If anyone
considers this to be a real problem, please send in a request
for change with the reason. The sparse code was first
implemented in version 1.27.
If you are not familiar with sparse files, an example is
a file where you wrote 512 bytes at address zero, then
512 bytes at address one million. The operating system will
allocate only two blocks, and the empty space or hole
will have nothing allocated. However, when you read the
sparse file and read the addresses where nothing was written,
the OS will return all zeros as if the space were allocated,
and if you backup such a file, a lot of space will be used
to write zeros to the volume. Worse yet, when you restore
the file, all the previously empty space will now be allocated
using much more disk space. By turning on the sparse
option, Bacula will specifically look for empty space in
the file, and any empty space will not be written to the Volume,
nor will it be restored. The price to pay for this is that
Bacula must search each block it reads before writing it.
On a slow system, this may be important. If you suspect
you have sparse files, you should benchmark the difference
or set sparse for only those files that are really sparse.
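For example, sparse file handling could be enabled for just a directory of
dbm files (the path is illustrative):

Include = signature=MD5 sparse=yes {
  /var/lib/dbm
}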
- readfifo=yes/no
- If enabled, tells the Client to
read the data on a backup and write the data on a restore
to any FIFO (pipe) that is explicitly mentioned
in the FileSet. In this case, you must have a program already
running that writes into the FIFO for a backup or reads
from the FIFO on a restore. This can be accomplished with
the RunBeforeJob record. If this is not the case,
Bacula will hang indefinitely on reading/writing the FIFO.
When this is not enabled (default), the Client simply
saves the directory entry for the FIFO.
<file-list> is a space separated list of filenames
and/or directory names. To include names containing spaces, enclose the
name between double-quotes. The list may span multiple lines, in fact,
normally it is good practice to specify each filename on a separate
line.
There are a number of special cases when specifying files or
directories in a file-list. They are:
- Any file-list item preceded by an at-sign (@) is assumed to be a
filename containing a list of files, which is read when
the configuration file is parsed during Director startup.
Note, that the file is read on the Director's machine
and not on the Client.
- Any file-list item beginning with a vertical bar (|) is
assumed to be a program. This program will be executed
on the Director's machine at the time the Job starts (not
when the Director reads the configuration file), and any output
from that program will be assumed to be a list of files or
directories, one per line, to be included. This allows you to
have a job that for example includes all the local partitions even
if you change the partitioning by adding a disk.
As an example:
Include = signature=SHA1 {
"|sh -c 'df -l | grep \"^/dev/hd[ab]\" | grep -v \".*/tmp\"
| awk \"{print \\$6}\"'"
}
will produce a list of all the local partitions on a RedHat Linux
system. Note, the above line was split, but should normally
be written on one line.
Quoting is a real problem because you must quote once for Bacula,
which consists of preceding every \ and every " with a \, and
you must also quote for the shell command. In the end, it is probably
easier just to execute a small script file with:
Include = signature=MD5 {
"|my_partitions"
}
where my_partitions has:
#!/bin/sh
df -l | grep "^/dev/hd[ab]" | grep -v ".*/tmp" | awk "{print \$6}"
If the vertical bar (|) is preceded by a backslash as in \|,
the program will be executed on the Client's machine instead
of on the Director's machine (this is implemented but
not tested, and very likely will not work on Windows).
- Any file-list item preceded by a less-than sign (<) will be taken
to be a file. This file will be read on the Director's machine
at the time the Job starts, and the data will be assumed to be a
list of directories or files, one per line, to be included. This
feature allows you to modify the external file and change what
will be saved without stopping and restarting Bacula as would be
necessary if using the @ modifier noted above.
If you precede the less-than sign (<) with a backslash
as in \<, the file-list will be read on the Client machine
instead of on the Director's machine (implemented but not
tested).
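As an example (the list file name is illustrative):

Include = signature=MD5 {
  "</etc/backup-dirs.list"
}

where /etc/backup-dirs.list is a plain text file containing one file or
directory name per line, read on the Director's machine each time the Job
starts.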
- If you explicitly specify a block device such as /dev/hda1,
then Bacula (starting with version 1.28) will assume that this
is a raw partition to be backed up. In this case, you are strongly
urged to specify a sparse=yes include option, otherwise, you
will save the whole partition rather than just the actual data that
the partition contains. For example:
Include = signature=MD5 sparse=yes {
/dev/hd6
}
will backup the data in device /dev/hd6.
Ludovic Strappazon has pointed out that this feature can be
used to backup a full Microsoft Windows disk. Simply boot into
the system using a Linux Rescue disk, then load a statically
linked Bacula as described in the
Disaster Recovery Using Bacula chapter of this manual. Then
simply save the whole disk partition. In the case of a disaster, you
can then restore the desired partition.
- If you explicitly specify a FIFO device name (created with mkfifo),
and you add the option readfifo=yes as an option, Bacula
will read the FIFO and back its data up to the Volume. For
example:
Include = signature=SHA1 readfifo=yes {
/home/abc/fifo
}
if /home/abc/fifo is a fifo device, Bacula will open the
fifo, read it, and store all data thus obtained on the Volume.
Please note, you must have a process on the system that is
writing into the fifo, or Bacula will hang, and after one
minute of waiting, it will go on to the next file. The data
read can be anything since Bacula treats it as a stream.
This feature can be an excellent way to do a "hot"
backup of a very large database. You can use the RunBeforeJob
to create the fifo and to start a program that dynamically reads
your database and writes it to the fifo. Bacula will then write
it to the Volume.
During the restore operation, the inverse is
true: after Bacula creates the fifo, if any data was stored
for it (no need to explicitly list it or add any options), that
data will be written back to the fifo. As a consequence, if
any such FIFOs exist in the fileset to be restored, you must
ensure that there is a reader program or Bacula will block,
and after one minute, Bacula will time out the write to the
fifo and move on to the next file.
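A sketch of such a hot backup setup (the script name and fifo path are
illustrative; other Job records are omitted and marked with ...):

Job {
  Name = "DatabaseHotBackup"
  ...
  RunBeforeJob = "/usr/local/bin/dump_db_to_fifo"
}
FileSet {
  Name = "DBFifo"
  Include = signature=MD5 readfifo=yes {
    /var/db/backup.fifo
  }
}

where dump_db_to_fifo is a script you supply that creates the fifo (with
mkfifo) if necessary and starts a background process that writes the
database dump into it.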
The Exclude Files specifies the list of files and/or directories to
be excluded from the backup job. The <file-list> is a
comma or space separated list of filenames and/or directory names. To exclude
names containing spaces, enclose the name between double-quotes.
Most often each filename is on a separate line.
For exclusions on Windows systems, do not include a leading
drive letter such as c:. This does not work.
Any filename preceded by an at-sign (@)
is assumed to be a filename on the Director's machine
containing a list of files.
The following is an example of a valid FileSet resource definition:
FileSet {
Name = "Full Set"
Include = compression=GZIP signature=MD5 sparse=yes {
@/etc/backup.list
}
Include = {
/root/myfile
/usr/lib/another_file
}
Exclude = { *.o }
}
Note, in the above example, all the files contained in /etc/backup.list
will be compressed with GZIP compression, an MD5 signature will be
computed on the file's contents (its data), and sparse file handling
will apply.
The two files /root/myfile and
/usr/lib/another_file will also be saved but without any options.
In addition, all files with the extension .o will be excluded
from the file set (i.e. from the backup).
Suppose you want to save everything except /tmp on your system.
Doing a df command, you get the following output:
[kern@rufus k]$ df
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/hda5 5044156 439232 4348692 10% /
/dev/hda1 62193 4935 54047 9% /boot
/dev/hda9 20161172 5524660 13612372 29% /home
/dev/hda2 62217 6843 52161 12% /rescue
/dev/hda8 5044156 42548 4745376 1% /tmp
/dev/hda6 5044156 2613132 2174792 55% /usr
none 127708 0 127708 0% /dev/shm
//minimatou/c$ 14099200 9895424 4203776 71% /mnt/mmatou
lmatou:/ 1554264 215884 1258056 15% /mnt/matou
lmatou:/home 2478140 1589952 760072 68% /mnt/matou/home
lmatou:/usr 1981000 1199960 678628 64% /mnt/matou/usr
lpmatou:/ 995116 484112 459596 52% /mnt/pmatou
lpmatou:/home 19222656 2787880 15458228 16% /mnt/pmatou/home
lpmatou:/usr 2478140 2038764 311260 87% /mnt/pmatou/usr
deuter:/ 4806936 97684 4465064 3% /mnt/deuter
deuter:/home 4806904 280100 4282620 7% /mnt/deuter/home
deuter:/files 44133352 27652876 14238608 67% /mnt/deuter/files
Now, if you specify only / in your Include list, Bacula will
only save the file system /dev/hda5. To save all file
systems except /tmp without including any of the Samba or NFS
mounted systems, and explicitly excluding /tmp, /proc, .journal, and
.autofsck, which you will not want saved and restored,
you can use the following:
FileSet {
Name = Everything
Include = {
/
/boot
/home
/rescue
/usr
}
Exclude = {
/proc
/tmp
.journal
.autofsck
}
}
Since /tmp is on its own filesystem and it was not explicitly
named in the Include list, it is not really needed in the
exclude list. It is better to list it in the Exclude list for
clarity, and in case the disks are changed so that it is no longer
in its own partition.
Please be aware that allowing Bacula to traverse or change
file systems can be very dangerous. For example, with
the following:
FileSet {
Name = "Bad example"
Include = onefs=no {
/mnt/matou
}
}
you will be backing up an NFS mounted partition (/mnt/matou),
and since onefs is set to no, Bacula will
traverse file systems. However, if /mnt/matou has the
current machine's file systems mounted, as is often the case,
you will get yourself into a recursive loop and the backup will
never end.
The following FileSet definition will backup a raw partition:
FileSet {
Name = "RawPartition"
Include = sparse=yes {
/dev/hda2
}
}
Note, in backing up and restoring a raw partition, you should ensure
that no other process including the system is writing to that partition.
As a precaution, you are strongly urged to ensure that the raw partition
is not mounted or is mounted read-only. If necessary, this can be done using the
RunBeforeJob record.
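For example, a RunBeforeJob record such as the following (the device name
is illustrative; other Job records are omitted and marked with ...) could
remount the partition read-only before the backup starts:

Job {
  Name = "RawPartitionBackup"
  ...
  RunBeforeJob = "/bin/mount -o remount,ro /dev/hda2"
}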
Windows Considerations for FileSets
If you are entering Windows file names, the directory path may be
preceded by the drive and a colon (as in c:). However, the path
separators must be specified in Unix convention (i.e. forward
slash (/)). If you wish to include a quote in a file name, precede
the quote with a backslash (\\). For example you might
use the following for a Windows machine to backup the "My Documents"
directory:
FileSet {
Name = "Windows Set"
Include = {
"c:/My Documents"
}
Exclude = { *.obj *.exe }
}
When using exclusion on Windows, do not use a drive prefix (i.e. c:) as
it will prevent the exclusion from working (don't ask me why -- I haven't
figured this one out yet).
Testing Your FileSet
If you wish to get an idea of what your FileSet will really
backup or if your exclusion rules will work correctly, you can
test it by using the estimate command in the Console
program. See the estimate command
in the Console chapter of this manual.
Windows NTFS Naming Considerations
NTFS filenames containing Unicode characters (i.e. > 0xFF) cannot
be explicitly named at the moment. You must include such names by
naming a higher level directory or a drive letter that does
not contain Unicode characters.
The Client Resource
The Client resource defines the attributes of the Clients that are served
by this Director; that is the machines that are to be backed up.
You will need one Client resource definition for each machine to
be backed up.
- Client (or FileDaemon)
- Start of the Client records.
- Name = <name>
- The client name which will be used in the
Job resource record or in the console run command.
This record is required.
- Address = <address>
- Where the address is a host
name, a fully qualified domain name, or a network address in
dotted quad notation for a Bacula File server daemon.
This record is required.
- FD Port = <port-number>
- Where the port is a port
number at which the Bacula File server daemon can be contacted.
The default is 9102.
- Catalog = <Catalog-resource-name>
- This specifies the
name of the catalog resource to be used for this Client.
This record is required.
- Password = <password>
- This is the password to be
used when establishing a connection with the File services, so
the Client configuration file on the machine to be backed up must
have the same password defined for this Director. This record is
required.
- File Retention = <time-period-specification>
- The File Retention record defines the length of time that
Bacula will keep File records in the Catalog database.
When this time period expires, and if AutoPrune is set to
yes Bacula will prune (remove) File records that
are older than the specified File Retention period. Note, this
affects only records in the catalog database. It does not
affect your archive backups.
File records
may actually be retained for a shorter period than you specify on
this record if you specify either a shorter Job Retention
or shorter Volume Retention period. The shortest
retention period of the three takes precedence.
The time may be expressed in seconds, minutes,
hours, days, weeks, months, quarters, or years. See the Configuration chapter of this
manual for additional details of time specification. The
default is 60 days.
- Job Retention = <time-period-specification>
- The Job Retention record defines the length of time that
Bacula will keep Job records in the Catalog database.
When this time period expires, and if AutoPrune is set to
yes, Bacula will prune (remove) Job records that are
older than the specified Job Retention period. As with the other
retention periods, this affects only records in the catalog and
not data in your archive backup.
If a Job
record is selected for pruning, all associated File and JobMedia
records will also be pruned regardless of the File Retention
period set. As a consequence, you normally will set the File
retention period to be less than the Job retention period. The
Job retention period can actually be less than the value you
specify here if you set the Volume Retention record in the
Pool resource to a smaller duration. This is because the Job
retention period and the Volume retention period are
independently applied, so the smaller of the two takes
precedence.
The Job retention period is specified as seconds,
minutes, hours, days, weeks, months,
quarters, or years.
See the
Configuration chapter of this manual for additional details of
time specification.
The default is 180 days.
- AutoPrune = <yes/no>
-
If AutoPrune is set to
yes (default), Bacula (version 1.20 or greater) will
automatically apply the File retention period and the Job
retention period for the Client at the end of the Job.
If you set AutoPrune = no, pruning will not be done,
and your Catalog will grow in size each time you run a Job.
Pruning affects only information in the catalog and not data
stored in the backup archives (on Volumes).
- Maximum Concurrent Jobs = <number>
- where <number>
is the maximum number of Jobs with the current Client that
can run concurrently. Note, this record limits only Jobs
for Clients
with the same name as the resource in which it appears. Any
other restrictions on the maximum concurrent jobs such as in
the Director, Job, or Storage resources will also apply in addition to
any limit specified here. The
default is set to 1, but you may set it to a larger number.
We strongly recommend that you read the WARNING documented under
Maximum Concurrent Jobs in the Director's resource.
- *Priority = <number>
- The number specifies the
priority of this client relative to other clients that the
Director is processing simultaneously. The priority can range
from 1 to 1000. The clients are ordered such that the smaller
number priorities are performed first (not currently
implemented).
The following is an example of a valid Client resource definition:
Client {
Name = Minimatou
Address = minimatou
Catalog = MySQL
Password = very_good
}
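The retention and pruning records described above can be added to such a
definition. For example (the period values here are illustrative):

Client {
  Name = Minimatou
  Address = minimatou
  Catalog = MySQL
  Password = very_good
  File Retention = 30 days
  Job Retention = 6 months
  AutoPrune = yes
}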
The Storage Resource
The Storage resource defines which Storage daemons are available for use
by the Director.
- Storage
- Start of the Storage records. At least one
storage resource must be specified.
- Name = <name>
- The name of the storage resource. This
name appears on the Storage record specified in the Job record and
is required.
- Address = <address>
- Where the address is a host name,
a fully qualified domain name, or an IP address. Please note
that the <address> as specified here will be transmitted to
the File daemon, which will then use it to contact the Storage daemon. Hence,
it is not a good idea to use localhost as the
name but rather a fully qualified machine name or an IP address.
This record is required.
- SD Port = <port>
- Where port is the port to use to
contact the storage daemon for information and to start jobs.
This same port number must appear in the Storage resource of the
Storage daemon's configuration file. The default is 9103.
- Password = <password>
- This is the password to be used
when establishing a connection with the Storage services. This
same password also must appear in the Director resource of the Storage
daemon's configuration file. This record is required.
- Device = <device-name>
- This record specifies the name
of the device to be used for the storage. This name is not the
physical device name, but the logical device name as defined on the
Name record contained in the Device resource
definition of the Storage daemon configuration file.
You can specify any name you would like (even the device name if
you prefer) up to a maximum of 127 characters in length.
The physical device name associated with this device is specified in
the Storage daemon configuration file (as Archive
Device). Please take care not to define two different
Storage resource records in the Director that point to the
same Device in the Storage daemon. Doing so may cause the
Storage daemon to block (or hang) attempting to open the
same device that is already open. This record is required.
- Media Type = <MediaType>
- This record specifies the
Media Type to be used to store the data. This is an arbitrary
string of characters up to 127 maximum that you define. It can
be anything you want. However, it is best to
make it descriptive of the storage media (e.g. File, DAT, "HP
DLT8000", 8mm, ...). The MediaType specified here, must
correspond to the Media Type specified in the Device
resource of the Storage daemon configuration file.
This record is required, and it is used by the Director and the
Storage daemon to ensure that a Volume automatically selected from
the Pool corresponds to the physical device. If a Storage daemon
handles multiple devices (e.g. will write to various file Volumes
on different partitions), this record allows you to specify exactly
which device to use.
As mentioned above, the value specified in the Director's Storage
resource must agree with the value specified in the Device resource in
the Storage daemon's configuration file. It is also an
additional check so
that you don't try to write data for a DLT onto an 8mm device.
- Autochanger = <yes/no>
- If you specify yes
for this command (the default is no), when you use the label
command or the add command to create a new Volume, Bacula
will also request the Autochanger Slot number. This simplifies
creating database entries for Volumes in an autochanger. If you forget
to specify the Slot, the autochanger will not be used. However, you
may modify the Slot associated with a Volume at any time
by using the update volume command in the console program.
You may include this record whether or not the Storage device is
really an autochanger. It will do no harm, but the Slot
information will simply be ignored by the Storage daemon if the
device is not really an autochanger.
For the autochanger to be
used, you must also specify Autochanger = yes in the
Device Resource
in the Storage daemon's configuration file.
See the
Using Autochangers chapter of this manual for the details of
using autochangers.
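As an illustrative sketch (the resource names and the archive device path are assumptions, not taken from this manual), the record must appear in both daemons for an autochanger to be used:
# In the Director's configuration file (bacula-dir.conf)
Storage {
Name = DDSAutochanger # illustrative name
Address = lpmatou
Password = local_storage_password
Device = "DDS-4 Changer" # same as the Device Name in the Storage daemon
Media Type = DDS-4
Autochanger = yes # label/add will request a Slot number
}
# In the Storage daemon's configuration file (bacula-sd.conf)
Device {
Name = "DDS-4 Changer"
Media Type = DDS-4
Archive Device = /dev/nst0 # assumed device path
Autochanger = yes # must be set here as well
}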
- Maximum Concurrent Jobs = <number>
- where <number>
is the maximum number of Jobs with the current Storage resource that
can run concurrently. Note, this record limits only the Jobs
using this Storage daemon. Any
other restrictions on the maximum concurrent jobs, such as those in
the Director, Job, or Client resources, will also apply in addition to
any limit specified here. The
default is 1, but you may set it to a larger number.
We strongly recommend that you read the WARNING documented under
Maximum Concurrent Jobs in the Director's resource.
While it is possible to set the Director's, Job's, or Client's
maximum concurrent jobs greater than one, you should take great
care in setting the Storage daemon's greater than one. By keeping
this record set to one, you will avoid having two jobs simultaneously
write to the same Volume. Although this is supported, it is not
currently recommended.
The following is an example of a valid Storage resource definition:
# Definition of tape storage device
Storage {
Name = DLTDrive
Address = lpmatou
Password = local_storage_password # password for Storage daemon
Device = "HP DLT 80" # same as Device in Storage daemon
Media Type = DLT8000 # same as MediaType in Storage daemon
}
The Pool Resource
The Pool resource defines the set of storage Volumes (tapes or files) to
be used by Bacula to write the data.
By configuring different
Pools, you can determine which set of Volumes (media) receives the
backup data. This permits you, for example, to store all full backup
data on one set of Volumes and all incremental backups on another
set of Volumes. Alternatively, you could assign a different set
of Volumes to each machine that you back up. This is most easily done
by defining multiple Pools.
Another important aspect of a Pool is that it contains the default
attributes (Maximum Jobs, Retention Period, Recycle flag, ...) that
will be given to a Volume when it is created. This avoids the need
for you to answer a large number of questions when labeling a
new Volume. Each of these attributes
can later be changed on a Volume by Volume basis using the update
command in the console program. Note that you must explicitly
specify which Pool Bacula is to use with each Job. Bacula will
not automatically search for the correct Pool.
Most often in Bacula installations all backups for all machines
(Clients) go to a single set of Volumes. In this case, you will
probably only use the Default Pool. If your backup strategy
calls for you to mount a different tape each day, you will probably
want to define a separate Pool for each day. For more information
on this subject, please see the Backup
Strategies chapter of this manual.
To use a Pool, there are three distinct steps.
First the Pool must be defined in the Director's
configuration file. Then the Pool must be written
to the Catalog database. This is done
automatically by the Director each time that it
starts, or alternatively can be done
using the create command in the console program.
Finally, if you change the Pool definition in the Director's
configuration file and restart Bacula, the pool will be
updated; alternatively, you can
use the update pool console command to refresh
the database image. It is this database image, rather than
the Director's resource image, that is used for the
default Volume attributes. Note, for the pool to
be automatically created or updated, it must be
explicitly referenced by a Job resource.
Next the physical media must be labeled.
The labeling can either be done with the label
command in the console program or using the
btape program. The preferred method is to
use the label command in the console
program.
Finally, you must add Volume names (and their attributes) to the
Pool. For Volumes to be used by Bacula they must be of the same
Media Type as the archive device specified for the job (i.e. if
you are going to back up to a DLT device, the Pool must have DLT volumes
defined since 8mm volumes cannot be mounted on a DLT drive). The
Media Type has particular importance if you are
backing up to files. When running a Job, you must explicitly specify
which Pool to use. Bacula will then automatically select the
next Volume to use from the Pool, but it will ensure that the
Media Type of any Volume selected from the Pool is identical
to that required by the Storage resource you have specified for the
Job.
If you use the label command in the console
program to label the Volumes, they will automatically be
added to the Pool, so this last step is not normally
required.
It is also possible to add Volumes to the database
without explicitly labeling the physical volume. This is
done with the add console command.
As previously mentioned, each time Bacula starts, it scans all
the Pools associated with each Catalog, and if the database record does
not already exist, it will be created from the Pool Resource definition.
Bacula probably should do an update pool if you change the
Pool definition, but currently, you must do this manually using
the update pool command in the Console program.
The Pool Resource defined in the Director's
configuration file (bacula-dir.conf) may contain the following records:
- Pool
- Start of the Pool records. There must
be at least one Pool resource defined.
- Name = <name>
- The name of the pool.
For most applications, you will use the default pool
name Default. This record is required.
- Number of Volumes = <number>
- This record specifies
the number of volumes (tapes or files) contained in the pool.
Normally, it is defined and updated automatically by the
Bacula catalog handling routines.
- Maximum Volumes = <number>
- This record specifies the
maximum number of volumes (tapes or files) contained in the pool.
This record is optional; if omitted or set to zero, any number
of volumes will be permitted. In general, this record is useful
for Autochangers where there is a fixed number of Volumes, or
for File storage where you wish to ensure that the backups made to
disk files do not become too numerous or consume too much space.
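For example (an illustrative sketch; the Pool name is an assumption), a disk-based Pool might be capped at ten file Volumes:
Pool {
Name = FilePool # illustrative name
Pool Type = Backup
Maximum Volumes = 10 # no more than ten file Volumes in this Pool
}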
- Pool Type = <type>
- This record defines the pool
type, which corresponds to the type of Job being run. It is
required and may be one of the following:
- Backup
- *Archive
- *Cloned
- *Migration
- *Copy
- *Save
- Use Volume Once = <yes/no>
- This record,
if set to yes, specifies that each volume is to be
used only once. This is most useful when the Media is a
file and you want a new file for each backup that is
done. The default is no (i.e. use the volume any
number of times). This record will most likely be phased out
(deprecated), so you are recommended to use Maximum Volume Jobs = 1
instead.
- Maximum Volume Jobs = <positive-integer>
- This record specifies
the maximum number of Jobs that can be written to the Volume. If
you specify zero (the default), there is no limit. Otherwise,
when the number of Jobs backed up to the Volume equals positive-integer
the Volume will be marked Used. When the Volume is marked
Used it can no longer be used for appending Jobs, much like
the Full status but it can be recycled if recycling is enabled.
By setting MaximumVolumeJobs to one, you get the same
effect as setting UseVolumeOnce = yes.
- Maximum Volume Files = <positive-integer>
- This record specifies
the maximum number of files that can be written to the Volume. If
you specify zero (the default), there is no limit. Otherwise,
when the number of files written to the Volume equals positive-integer
the Volume will be marked Used. When the Volume is marked
Used it can no longer be used for appending Jobs, much like
the Full status but it can be recycled if recycling is enabled.
This value is checked and the Used status is set only
at the end of a job that writes to the particular volume.
- Maximum Volume Bytes = <size>
- This record specifies
the maximum number of bytes that can be written to the Volume. If
you specify zero (the default), there is no limit except the
physical size of the Volume. Otherwise,
when the number of bytes written to the Volume equals size
the Volume will be marked Used. When the Volume is marked
Used it can no longer be used for appending Jobs, much like
the Full status but it can be recycled if recycling is enabled.
This value is checked and the Used status set while
the job is writing to the particular volume.
- Volume Use Duration = <time-period-specification>
-
The Volume Use Duration record defines the time period that
the Volume can be written beginning from the time of first data
write to the Volume. If the time-period specified is zero (the
default), the Volume can be written indefinitely. Otherwise,
when the time period from the first write to the volume (the
first Job written) exceeds the time-period-specification, the
Volume will be marked Used, which means that no more
Jobs can be appended to the Volume, but it may be recycled if
recycling is enabled.
You might use this record, for example, if you have a Volume
used for Incremental backups, and Volumes used for Weekly Full
backups. Once the Full backup is done, you will want to use a
different Incremental Volume. This can be accomplished by setting
the Volume Use Duration for the Incremental Volume to six days.
I.e. it will be used for the 6 days following a Full save, then
a different Incremental volume will be used.
This value is checked and the Used status is set only
at the end of a job that writes to the particular volume, which
means that even though the use duration may have expired, the
catalog entry will not be updated until the next job that
uses this volume is run.
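The Incremental example above might be sketched as follows (the Pool name is illustrative):
Pool {
Name = IncrementalPool # illustrative name
Pool Type = Backup
Volume Use Duration = 6 days # Volume is marked Used six days after its first write
}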
- Catalog Files = <yes/no>
- This record
defines whether or not you want the names of the files
that were saved to be put into the catalog. The default
is yes. The advantage of specifying Catalog Files = No
is that you will have a significantly smaller Catalog database. The
disadvantage is that you will not be able to produce a Catalog listing
of the files backed up for each Job (this is often called Browsing).
- AutoPrune = <yes/no>
- If AutoPrune is set to
yes (default), Bacula (version 1.20 or greater) will
automatically apply the Volume Retention period when a new Volume
is needed and no appendable Volumes exist in the Pool. Volume
pruning causes expired Jobs (older than the Volume
Retention period) to be deleted from the Catalog and permits
possible recycling of the Volume.
- Volume Retention = <time-period-specification>
- The
Volume Retention record defines the length of time that Bacula
will keep Job records associated with the Volume in the Catalog
database. When this time period expires, and if AutoPrune
is set to yes Bacula will prune (remove) Job
records that are older than the specified Volume Retention period.
All File records associated with pruned Jobs are also pruned.
The time may be specified as seconds,
minutes, hours, days, weeks, months, quarters, or years.
The Volume Retention period is applied independently of the
Job Retention and the File Retention periods
defined in the Client resource. This means that the shorter
period is the one that applies. Note that when the
Volume Retention period has been reached, it will
prune both the Job and the File records.
The default is 365 days. Note, this record sets the default
value for each Volume entry in the Catalog when the Volume is
created. The value in the
catalog may be later individually changed for each Volume using
the Console program.
By defining multiple Pools with different Volume Retention periods,
you may effectively have a set of tapes that is recycled weekly,
another Pool of tapes that is recycled monthly and so on. However,
one must keep in mind that if your Volume Retention period
is too short, it may prune the last valid Full backup, and hence
until the next Full backup is done, you will not have a complete
backup of your system; in addition, the next Incremental
or Differential backup will be promoted to a Full backup. As
a consequence, the minimum Volume Retention period should be at
least twice the interval of your Full backups. This means that if you
do a Full backup once a month, the minimum Volume Retention
period should be two months.
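The weekly and monthly recycling scheme described above might be sketched with two Pools (the names and periods are illustrative):
# Tapes recycled on a roughly weekly cycle
Pool {
Name = WeeklyPool
Pool Type = Backup
Volume Retention = 14 days
AutoPrune = yes
Recycle = yes
}
# Tapes recycled on a roughly monthly cycle
Pool {
Name = MonthlyPool
Pool Type = Backup
Volume Retention = 2 months
AutoPrune = yes
Recycle = yes
}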
- Recycle = <yes/no>
- This record specifies the
default for recycling Purged Volumes. If it is set to yes
and Bacula needs a volume but finds none that are
appendable, it will search for Purged Volumes (i.e. volumes
with all the Jobs and Files expired and thus deleted from
the Catalog). If the Volume is recycled, all previous data
written to that Volume will be overwritten.
- Recycle Oldest Volume = <yes/no>
- This record
instructs the Director to search for the oldest used
Volume in the Pool when another Volume is requested by
the Storage daemon and none are available.
The catalog is then pruned respecting the retention
periods of all Files and Jobs written to this Volume.
If all Jobs are pruned (i.e. the volume is Purged), then
the Volume is recycled and will be used as the next
Volume to be written. This record respects any Job,
File, or Volume retention periods that you may have specified,
and as such it is much better to use this record
than the Purge Oldest Volume.
This record can be useful if you have
a fixed number of Volumes in the Pool and you want to
cycle through them and you have specified the correct
retention periods.
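A fixed set of Volumes cycled this way might look like the following sketch (the Pool name and the numbers are illustrative):
Pool {
Name = DailySet # illustrative name
Pool Type = Backup
Maximum Volumes = 7 # one Volume per day of the week
Volume Retention = 6 days # expires before the Volume comes around again
AutoPrune = yes
Recycle = yes
Recycle Oldest Volume = yes
}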
- Recycle Current Volume = <yes/no>
- If
Bacula needs a new Volume, this record instructs Bacula
to prune the volume, respecting the Job and File
retention periods.
If all Jobs are pruned (i.e. the volume is Purged), then
the Volume is recycled and will be used as the next
Volume to be written. This record respects any Job,
File, or Volume retention periods that you may have specified,
and as such it is much better to use this record
than the Purge Oldest Volume.
This record can be useful if you have
a fixed number of Volumes in the Pool and you want to
cycle through them and you have specified the correct
retention periods.
- Purge Oldest Volume = <yes/no>
- This record
instructs the Director to search for the oldest used
Volume in the Pool when another Volume is requested by
the Storage daemon and none are available.
The catalog is then purged irrespective of retention
periods of all Files and Jobs written to this Volume.
The Volume is then recycled and will be used as the next
Volume to be written. This record overrides any Job,
File, or Volume retention periods that you may have specified.
This record can be useful if you have
a fixed number of Volumes in the Pool and you want to
cycle through them as each one fills, without
worrying about setting proper retention periods. However,
by using this option you risk losing valuable data.
Please be aware that Purge Oldest Volume disregards
all retention periods. If you have only a single Volume
defined and you turn this variable on, that Volume will always
be immediately overwritten when it fills! So at a minimum,
ensure that you have a decent number of Volumes in your Pool
before running any jobs. If you want retention periods to apply
do not use this record. To specify a retention period,
use the Volume Retention record (see above).
I highly recommend against using this record, because it is
certain that some day Bacula will recycle a Volume that contains
current data.
- Accept Any Volume = <yes/no>
- This record
specifies whether or not any volume from the Pool may
be used for backup. The default is yes as of version
1.27 and later. If it is no then only the first
writable volume in the Pool will be accepted for writing backup
data, thus Bacula will fill each Volume sequentially
in turn before using any other appendable volume in the
Pool. If this is no and you mount a volume out
of order, Bacula will not accept it. If this
is yes, any appendable volume from the pool that is
mounted will be accepted.
If your tape backup procedure dictates that you manually
mount the next volume, you will almost certainly want to be
sure this record is turned on.
If you are going on vacation and you think the current volume
may not have enough room on it, you can simply label a new tape
and leave it in the drive, and assuming that Accept Any Volume
is yes Bacula will begin writing on it. When you return
from vacation, simply remount the last tape, and Bacula will
continue writing on it until it is full. Then you can remount
your vacation tape and Bacula will fill it in turn.
- Cleaning Prefix = <string>
- This record defines
a prefix string. If the prefix matches the beginning of
a Volume name when the Volume is labeled, the Volume
will be defined with its VolStatus set to Cleaning, and
thus Bacula will never attempt to use the tape. This
is primarily for use with autochangers that accept barcodes,
where the convention is that barcodes beginning with CLN
are treated as cleaning tapes.
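Following the CLN barcode convention mentioned above, the record would typically read:
Pool {
Name = Default
Pool Type = Backup
Cleaning Prefix = "CLN" # Volumes whose names begin with CLN are marked Cleaning and never used
}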
- Label Format = <format>
- This record specifies the
format of the labels contained in this pool. The format record
is used as a sort of template to create new Volume names during
automatic Volume labeling.
The format
consists of letters, numbers and the special characters
hyphen (-), underscore (_), colon (:), and
period (.), which are the legal characters for a Volume
name. The format should be enclosed in
double quotes (").
In addition, the format may contain a number of variable expansion
characters which will be expanded by a complex algorithm allowing
you to create Volume names of many different formats. In all
cases, the expansion process must resolve to the set of characters
noted above that are legal in Volume names. Generally, these
variable expansion characters begin with a dollar sign ($)
or a left bracket ([). For more details on variable expansion,
please see the Variable Expansion Chapter of
this manual.
If no variable expansion characters are found in the string,
the Volume name will be formed from the format string
appended with the number of volumes in the pool plus one, which
will be edited as four digits with leading zeros. For example,
with a Label Format = File-, the first volumes will be
named File-0001, File-0002, ...
With the exception of Job specific variables, you can test
your LabelFormat by using the
var command described in the Console Chapter of this manual.
In almost all cases, you should enclose the format specification
(the part after the equal sign) in double quotes.
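The File- example above corresponds to a Pool such as this sketch (the Pool name is illustrative):
Pool {
Name = FilePool # illustrative name
Pool Type = Backup
Label Format = "File-" # no expansion characters, so Volumes become File-0001, File-0002, ...
}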
In order for a Pool to be used during a Backup Job, the Pool must have
at least one Volume associated with it. Volumes are created
for a Pool using the label or the add commands in the
Bacula Console program.
In addition to adding Volumes to the Pool (i.e. putting the
Volume names in the Catalog database), the physical Volume must be
labeled with a valid Bacula software volume label before
Bacula will accept the Volume. This will be automatically done
if you use the label command. Bacula can
automatically label Volumes if instructed to do so, but this feature is
not yet fully implemented.
The following is an
example of a valid Pool resource definition:
Pool {
Name = Default
Pool Type = Backup
}
The Catalog Resource
The Catalog Resource defines what catalog to use for the
current job. Currently, Bacula can only handle
a single database server (SQLite, MySQL, built-in) that is
defined when configuring Bacula. However, there
may be as many Catalogs (databases) defined as you
wish. For example, you may want each Client to have
its own Catalog database, or you may want backup
jobs to use one database and verify or restore
jobs to use another database.
- Catalog
- Start of the Catalog records.
At least one Catalog resource must be defined.
- Name = <name>
- The name of the Catalog. It has no
necessary relation to the database server name. This name
will be specified in the Client resource record, indicating
that all catalog data for that Client is maintained in this
Catalog. This record is required.
- password = <password>
- This specifies the password
to use when logging into the database. This record is required.
- DB Name = <name>
- This specifies the name of the
database. If you use multiple catalogs (databases), you specify
which one here. If you are using an external database server
rather than the internal one, you must specify a name that
is known to the server (i.e. you explicitly created the
Bacula tables using this name). This record is
required.
- user = <user>
- This specifies what user name
to use to log into the database. This record is required.
- DB Socket = <socket-name>
- This is the name of
a socket to use on the local host to connect to the database.
This record is used only by MySQL and is ignored by
SQLite. Normally, if neither DB Socket nor DB Address
is specified, MySQL will use the default socket.
- DB Address = <address>
- This is the host address
of the database server. Normally, you would specify this instead
of DB Socket if the database server is on another machine.
In that case, you will also specify DB Port. This record
is used only by MySQL and is ignored by SQLite if provided.
This record is optional.
- DB Port = <port>
- This defines the port to
be used in conjunction with DB Address to access the
database if it is on another machine. This record is used
only by MySQL and is ignored by SQLite if provided. This
record is optional.
The following is an example of a valid Catalog resource definition:
Catalog
{
Name = SQLite
dbname = bacula;
user = bacula;
password = "" # no password = no security
}
or for a Catalog on another machine:
Catalog
{
Name = MySQL
dbname = bacula
user = bacula
password = ""
DB Address = remote.acme.com
DB Port = 1234
}
The Messages Resource
For the details of the Messages Resource, please see the
Messages Resource Chapter of
this manual.
The Counter Resource
The Counter Resource defines a counter variable that can
be accessed by variable expansion used for creating
Volume labels with the LabelFormat record.
See the LabelFormat
record in this chapter for more details.
- Counter
- Start of the Counter record.
Counter records are optional.
- Name = <name>
- The name of the Counter.
This is the name you will use in the variable expansion
to reference the counter value.
- Minimum = <integer>
- This specifies the minimum
value that the counter can have. It also becomes the default.
If not supplied, zero is assumed.
- Maximum = <integer>
- This is the maximum value
that the counter can have. If not specified or set to
zero, the counter can have a maximum value of 2,147,483,648
(2 to the 31st power). When the counter is incremented past
this value, it is reset to the Minimum.
- *WrapCounter = <counter-name>
- If this value
is specified, when the counter is incremented past the maximum
and thus reset to the minimum, the counter specified on the
WrapCounter is incremented. (This is not currently
implemented).
- Catalog = <catalog-name>
- If this record is
specified, the counter and its values will be saved in
the specified catalog. If this record is not present, the
counter will be redefined each time that Bacula is started.
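As an illustrative sketch (the counter name is hypothetical), a persistent Counter intended for use in a LabelFormat might be defined as:
Counter {
Name = FileCnt # hypothetical name, referenced from a LabelFormat via variable expansion
Minimum = 1
Maximum = 9999 # wraps back to Minimum when exceeded
Catalog = MyCatalog # persist the counter value across Director restarts
}
See the Variable Expansion Chapter of this manual for the syntax used to reference the counter from a Label Format string.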
An example Director configuration file might be the following:
#
# Default Bacula Director Configuration file
#
# The only thing that MUST be changed is to add one or more
# file or directory names in the Include directive of the
# FileSet resource.
#
# For Bacula release 1.15 (5 March 2002) -- redhat
#
# You might also want to change the default email address
# from root to your address. See the "mail" and "operator"
# directives in the Messages resource.
#
Director {  # define myself
Name = rufus-dir
QueryFile = "/home/kern/bacula/bin/query.sql"
WorkingDirectory = "/home/kern/bacula/bin/working"
PidDirectory = "/home/kern/bacula/bin/working"
Password = "XkSfzu/Cf/wX4L8Zh4G4/yhCbpLcz3YVdmVoQvU3EyF/"
}
# Define the backup Job
Job {
Name = "NightlySave"
Type = Backup
Level = Incremental # default
Client=rufus-fd
FileSet="Full Set"
Schedule = "WeeklyCycle"
Storage = DLTDrive
Messages = Standard
Pool = Default
}
Job {
Name = "Restore"
Type = Restore
Client=rufus-fd
FileSet="Full Set"
Where = /tmp/bacula-restores
Storage = DLTDrive
Messages = Standard
Pool = Default
}
# List of files to be backed up
FileSet {
Name = "Full Set"
Include = signature=SHA1 {
#
# Put your list of files here, one per line or include an
# external list with:
#
# @file-name
#
# Note: / backs up everything
/
}
Exclude = { }
}
# When to do the backups
Schedule {
Name = "WeeklyCycle"
Run = Full sun at 1:05
Run = Incremental mon-sat at 1:05
}
# Client (File Services) to backup
Client {
Name = rufus-fd
Address = rufus
Catalog = MyCatalog
Password = "MQk6lVinz4GG2hdIZk1dsKE/LxMZGo6znMHiD7t7vzF+"
File Retention = 60d  # sixty day file retention
Job Retention = 1y  # 1 year Job retention
AutoPrune = yes  # Auto apply retention periods
}
# Definition of DLT tape storage device
Storage {
Name = DLTDrive
Address = rufus
Password = "jMeWZvfikUHvt3kzKVVPpQ0ccmV6emPnF2cPYFdhLApQ"
Device = "HP DLT 80"  # same as Device in Storage daemon
Media Type = DLT8000  # same as MediaType in Storage daemon
}
# Definition of DDS tape storage device
Storage {
Name = SDT-10000
Address = rufus
Password = "jMeWZvfikUHvt3kzKVVPpQ0ccmV6emPnF2cPYFdhLApQ"
Device = SDT-10000  # same as Device in Storage daemon
Media Type = DDS-4  # same as MediaType in Storage daemon
}
# Definition of 8mm tape storage device
Storage {
Name = "8mmDrive"
Address = rufus
Password = "jMeWZvfikUHvt3kzKVVPpQ0ccmV6emPnF2cPYFdhLApQ"
Device = "Exabyte 8mm"
MediaType = "8mm"
}
# Definition of file storage device
Storage {
Name = File
Address = rufus
Password = "jMeWZvfikUHvt3kzKVVPpQ0ccmV6emPnF2cPYFdhLApQ"
Device = FileStorage
Media Type = File
}
# Generic catalog service
Catalog {
Name = MyCatalog
dbname = bacula; user = bacula; password = ""
}
# Reasonable message delivery -- send most everything to email address
# and to the console
Messages {
Name = Standard
mail = root@localhost = all, !skipped, !terminate
operator = root@localhost = mount
console = all, !skipped, !saved
}
# Default pool definition
Pool {
Name = Default
Pool Type = Backup
AutoPrune = yes
Recycle = yes
}