Commit 8200cbd1 authored by Giuseppe Lo Presti's avatar Giuseppe Lo Presti

[migration] Better implementation for 0-byte files, cf. #697

This implementation (unfortunately) introduces one more CLI, but
it makes the 0-byte files migration fully independent from the rest.
The main reason is that the query to extract them from the CASTOR
Nameserver is pretty expensive.
The README.md file and the packaging have been updated accordingly.
parent 18188553
......@@ -334,7 +334,8 @@ directory metadata into the EOS namespace.
%attr(0644,root,root) %{_bindir}/begin_vo_export_to_cta.sh
%attr(0644,root,root) %{_bindir}/export_production_tapepool_to_cta.sh
%attr(0755,root,root) %{_bindir}/tapepool_castor_to_cta.py
%attr(0755,root,root) %{_bindir}/complete_tapepool_export.py
%attr(0755,root,root) %{_bindir}/zerolen_castor_to_cta.py
%attr(0755,root,root) %{_bindir}/complete_cta_export.py
%attr(0644,root,root) %{_bindir}/vmgr_reenable_tapepool.sh
%attr(0644,root,root) %{_bindir}/cta-catalogue-remove-castor-tapes.py
%attr(0644,root,root) %config(noreplace) %{_sysconfdir}/cta/castor-migration.conf.example
......
......@@ -15,7 +15,8 @@ The tools to perform the metadata migration from CASTOR to CTA are as follows:
* `tapepool_castor_to_cta.py`: migrates all files belonging to a given tapepool to CTA. `ARCHIVED` tapes are not migrated, `READONLY` and `DISABLED` ones are. This tool creates an intermediate table for the files, consumed by `eos-import-files`.
* `eos-import-dirs --delta`: imports any additional/missing directory that was not imported by the first round of `eos-import-dirs`.
* `eos-import-files`: reads all file-related metadata from the previously created intermediate table and injects it to the EOS namespace.
* `complete_tapepool_export.py`: terminates an ongoing tapepool export, cleaning up the previously created intermediate table.
* `complete_cta_export.py`: terminates an ongoing export, cleaning up the previously created intermediate tables and flagging all files as 'on CTA' and tapes as `EXPORTED`.
* `zerolen_castor_to_cta.py`: similar to `tapepool_castor_to_cta.py`, migrates all 0-byte files belonging to a given VO to CTA. The criterion to infer whether a 0-byte file belongs to a VO is purely path-based: all and only files whose path starts with `/castor/cern.ch/<VO>` or `/castor/cern.ch/grid/<VO>` are taken into account. If the specified VO is `'ALL'`, then any non-migrated 0-byte file is taken into account.
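The path-based criterion above can be sketched as a small predicate. This is a hypothetical helper for illustration only (`belongs_to_vo` is not part of the migration tools); it mirrors the prefix matching and VO lowercasing done by the SQL procedure:

```python
def belongs_to_vo(path, vo):
    """Return True if a CASTOR path matches the given VO, or if vo is 'ALL'.

    Hypothetical illustration of the path-based criterion: prefix matching
    against /castor/cern.ch/<VO> and /castor/cern.ch/grid/<VO>, with the
    VO name lowercased as in the SQL LIKE clauses.
    """
    if vo == 'ALL':
        return True                      # catch-all keyword, cf. above
    vo = vo.lower()
    return (path.startswith('/castor/cern.ch/' + vo)
            or path.startswith('/castor/cern.ch/grid/' + vo))
```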
The tools are designed to work as follows, for any given VO:
......@@ -34,14 +35,21 @@ The tools are designed to work as follows, for any given VO:
* `bash export_production_tapepool_to_cta.sh r_atlas_raw atlas eosctaatlas --doit`
* `bash export_production_tapepool_to_cta.sh r_atlas_user atlas eosctaatlas --doit`
The first execution of the `export_production_tapepool_to_cta.sh` script will also export all 0-byte files belonging to the given VO (only when `--doit` is specified). The criterion to infer whether a 0-byte file belongs to a VO is purely path-based: files whose path starts with `/castor/cern.ch/<VO>` or `/castor/cern.ch/grid/<VO>` are taken into account.
> [warning] In order to efficiently import tape pools holding dual tape copies, and to avoid a double pass over the EOS metadata import, such tape pools are imported in a single operation: assuming that a tape pool holding the 1st copies is requested to be exported, the corresponding tape pool holding the 2nd copies is **also exported**. Furthermore, to protect CTA, `tapepool_castor_to_cta.py` will abort if, within the given tape pool, some files are found with their 1st tape copy and others with their 2nd. The process will also abort if 2nd copies exist but a single tape pool holding them cannot be identified (e.g. because there are two, or it was not found).
3. In case of errors, `export_production_tapepool_to_cta.sh` stops and the operator is expected to fix the issue and rerun the export, possibly re-executing by hand one of the commands documented above. Errors are accumulated in suitable Oracle tables both for the database migration and for the EOS namespace injection tools. For the latter, please refer to their specific instructions.
3. After all tapepools of the given VO have been exported, perform the 0-byte files migration. Example:
* `zerolen_castor_to_cta.py --vo atlas --instance eosctaatlas --doit`
* `eos-import-dirs --delta / # if needed`
* `eos-import-files`
* `complete_cta_export.py`
> [warning] In order to efficiently import tape pools holding dual tape copies, and to avoid a double pass over the EOS metadata import, such tape pools are imported in a single operation: assuming that a tape pool holding the 1st copies is requested to be exported, the corresponding tape pool holding the 2nd copies is **also exported**. Furthermore, to protect CTA, `tapepool_castor_to_cta.py` will abort if, within the given tape pool, some files are found with their 1st tape copy and others with their 2nd. The process will also abort if 2nd copies exist but a single tape pool holding them cannot be identified (e.g. because there are two, or it was not found).
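The abort conditions in the warning can be sketched as follows. This is a hypothetical helper for illustration, not the actual PL/SQL logic in `filesForCTAExport`; the input sets are assumed to have been derived from the segment metadata:

```python
def second_pool_to_export(copy_numbers, second_copy_pools):
    """Decide which 2nd-copy tape pool to export alongside the requested one.

    copy_numbers: set of tape-copy numbers (1 and/or 2) found for files in
    the requested pool; second_copy_pools: set of pools holding the matching
    2nd copies. Hypothetical sketch of the checks described above.
    """
    if copy_numbers == {1, 2}:
        # mixed 1st and 2nd copies in the same pool: abort to protect CTA
        raise RuntimeError('mixed 1st and 2nd tape copies in the given pool, aborting')
    if not second_copy_pools:
        return None                       # single-copy pool, nothing extra to export
    if len(second_copy_pools) > 1:
        # 2nd copies exist but no single pool holds them all: abort
        raise RuntimeError('cannot identify a single 2nd-copy tape pool, aborting')
    return next(iter(second_copy_pools))  # this pool is also exported
```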
4. After all VOs have been successfully exported, a final run of step 3 is required in order to export the remaining 0-byte files from the CASTOR Namespace:
* `zerolen_castor_to_cta.py --vo ALL --instance eosctapublic --doit`
* `eos-import-dirs --delta / # if needed`
* `eos-import-files`
* `complete_cta_export.py`
4. After all VOs have been successfully exported, a final run of the following command is required in order to export the remaining 0-byte files from the CASTOR Namespace:
* `bash export_production_tapepool_to_cta.sh r_validation ALL eosctapublic --doit`
In case of errors, the tools abort and the operator is expected to fix the issue and rerun the export, possibly re-executing by hand one of the commands documented above. Errors are accumulated in suitable Oracle tables both for the database migration and the EOS namespace injection tools. For the latter, please refer to their specific instructions.
In addition, the following tools are provided, which can be used as part of a restore/recovery procedure. Such a procedure has deliberately **not** been automated and will have to be dealt with on a case-by-case basis.
......
......@@ -18,6 +18,7 @@ install(FILES
${CMAKE_SOURCE_DIR}/migration/castor/begin_vo_export_to_cta.sh
${CMAKE_SOURCE_DIR}/migration/castor/export_production_tapepool_to_cta.sh
${CMAKE_SOURCE_DIR}/migration/castor/tapepool_castor_to_cta.py
${CMAKE_SOURCE_DIR}/migration/castor/complete_tapepool_export.py
${CMAKE_SOURCE_DIR}/migration/castor/zerolen_castor_to_cta.py
${CMAKE_SOURCE_DIR}/migration/castor/complete_cta_export.py
${CMAKE_SOURCE_DIR}/migration/castor/vmgr_reenable_tapepool.sh
DESTINATION usr/bin)
......@@ -111,7 +111,7 @@ END;
/* Procedure to prepare the files and segments metadata for export to CTA */
CREATE OR REPLACE PROCEDURE filesForCTAExport(inPoolName IN VARCHAR2, inVO IN VARCHAR2, inDryRun INTEGER, out2ndCopyPoolName OUT VARCHAR2) AS
CREATE OR REPLACE PROCEDURE filesForCTAExport(inPoolName IN VARCHAR2, out2ndCopyPoolName OUT VARCHAR2) AS
nbFiles INTEGER;
var2ndCopy INTEGER;
BEGIN
......@@ -131,31 +131,7 @@ BEGIN
AND BITAND(status, 2) = 0 AND BITAND(status, 32) = 0 -- not already EXPORTED or ARCHIVED
)
);
COMMIT; -- needed otherwise next query gets ORA-12838 (cannot read/modify an object after modifying it in parallel)
IF inDryRun = 0 THEN
-- when in "do-it" mode, include as well the 0-byte files concerning this VO, unless already
-- exported: here we assume that all relevant files match the top-level path(s) of the VO.
-- This assumption holds true for the medium/large VOs (LHC and AMS, COMPASS, etc.),
-- but not for all the rest (i.e. general-purpose data, backups, public user areas, etc.).
-- For those, a separate migration needs to be done at the very end of the process by taking
-- any non-migrated files left over (passing inVO = 'ALL').
INSERT /*+ APPEND PARALLEL(CTAFilesHelper) */ INTO CTAFilesHelper
(fileid, parent_fileid, filename, disk_uid, disk_gid, filemode,
btime, ctime, mtime, classid, filesize) (
SELECT /*+ PARALLEL(F) PARALLEL(D) INDEX(D) */
F.fileid, F.parent_fileid, F.name, F.owner_uid, F.gid,
F.filemode, F.atime, F.ctime, F.mtime, F.fileclass, 0
FROM Cns_file_metadata F, Dirs_Full_Path D
WHERE F.parent_fileid = D.fileid
AND F.filesize = 0 AND F.onCTA IS NULL -- zero-byte file not on CTA
AND BITAND(F.filemode, 16384) = 0 -- not a directory
AND (D.path LIKE '/castor/cern.ch/' || inVO || '%'
OR D.path LIKE '/castor/cern.ch/grid/' || inVO || '%' -- belongs to the given VO
OR inVO = 'ALL') -- catch-all keyword, cf. README.md
);
ctaLog(inPoolName, 'Identified '|| SQL%ROWCOUNT ||' 0-byte files to be exported');
COMMIT;
END IF;
COMMIT;
SELECT COUNT(*) INTO nbFiles FROM CTAFilesHelper;
IF nbFiles = 0 THEN
raise_application_error(-20000, 'No such tape pool or no valid files found, aborting the export');
......@@ -209,6 +185,40 @@ BEGIN
END;
/
/* Procedure to prepare the empty files metadata for export to CTA */
CREATE OR REPLACE PROCEDURE zeroBytefilesForCTAExport(inVO IN VARCHAR2) AS
nbFiles INTEGER;
BEGIN
-- look for 0-byte files, matching their top-level path(s) to the given VO.
-- This assumption holds true for the medium/large VOs (LHC and AMS, COMPASS, etc.),
-- but not for all the rest (i.e. general-purpose data, backups, public user areas, etc.).
-- For those, a separate migration needs to be done at the very end of the process by taking
-- any non-migrated files left over (passing inVO = 'ALL').
INSERT /*+ APPEND PARALLEL(CTAFilesHelper) */ INTO CTAFilesHelper
(fileid, parent_fileid, filename, disk_uid, disk_gid, filemode,
btime, ctime, mtime, classid, filesize, checksum) (
SELECT /*+ PARALLEL(F) PARALLEL(D) INDEX(D) */
F.fileid, F.parent_fileid, F.name, F.owner_uid, F.gid,
F.filemode, F.atime, F.ctime, F.mtime, F.fileclass, 0, 1
FROM Cns_file_metadata F, Dirs_Full_Path D
WHERE F.parent_fileid = D.fileid
AND F.filesize = 0 AND F.onCTA IS NULL -- zero-byte file not yet on CTA
AND BITAND(F.filemode, 16384) = 0 -- not a directory
AND F.fileclass IS NOT NULL -- nor a symbolic link
AND (D.path LIKE '/castor/cern.ch/' || lower(inVO) || '%'
OR D.path LIKE '/castor/cern.ch/grid/' || lower(inVO) || '%' -- belongs to the given VO
OR inVO = 'ALL') -- catch-all keyword, cf. README.md
);
COMMIT;
SELECT COUNT(*) INTO nbFiles FROM CTAFilesHelper;
IF nbFiles = 0 THEN
raise_application_error(-20000, 'No valid files found, aborting the export');
END IF;
ctaLog(upper(inVO) || '::*', 'Intermediate table for empty files prepared, '|| nbFiles ||' files to be exported. ETA: '||
prettyTime(nbFiles/6000/60));
END;
/
/* Procedure to prepare the directories for export to CTA. The local Dirs_Full_Path cache table is updated */
CREATE OR REPLACE PROCEDURE dirsForCTAExport(inPoolName IN VARCHAR2) AS
......@@ -296,6 +306,8 @@ BEGIN
DP.path = '/castor/cern.ch' || D.path,
DP.depth = D.depth
WHEN NOT MATCHED THEN
-- note that here we leave D.name = NULL: it's not used by the CTA migration,
-- and this way the extra entries are 'earmarked' for future analysis
INSERT (fileid, parent, path, depth)
VALUES (D.fileid, D.parent_fileid, '/castor/cern.ch' || D.path, D.depth);
COMMIT;
......@@ -380,6 +392,7 @@ GRANT SELECT ON CTAMigrationLog TO &ctaSchema;
GRANT SELECT ON Cns_class_metadata TO &ctaSchema;
GRANT SELECT ON Dirs_Full_Path TO &ctaSchema;
GRANT EXECUTE ON filesForCTAExport TO &ctaSchema;
GRANT EXECUTE ON zeroBytefilesForCTAExport TO &ctaSchema;
GRANT EXECUTE ON dirsForCTAExport TO &ctaSchema;
GRANT EXECUTE ON ctaLog TO &ctaSchema;
GRANT EXECUTE ON getTime TO &ctaSchema;
#!/usr/bin/python
#/******************************************************************************
# * complete_tapepool_export.py
# * complete_cta_export.py
# *
# * This file is part of the Castor/CTA project.
# * See http://cern.ch/castor and http://cern.ch/eoscta
......@@ -21,10 +21,9 @@
# * @author Castor Dev team, castor-dev@cern.ch
# *****************************************************************************/
'''Command line tool to complete the export of a tapepool from CASTOR to CTA'''
'''Command line tool to complete the current export from CASTOR to CTA'''
import sys
import getopt
from time import sleep, time
from datetime import datetime
from threading import Thread
......@@ -32,13 +31,6 @@ from threading import Thread
import castor_tools
def usage(exitcode):
'''prints usage'''
print __doc__
print 'Usage : ' + sys.argv[0] + ' [-h|--help] -t|--tapepool <tapepool>'
sys.exit(exitcode)
def async_complete_export(conn):
'''helper function to execute the PL/SQL procedure in a separate thread'''
cur = conn.cursor()
......@@ -51,44 +43,41 @@ def async_complete_export(conn):
def run():
'''main code'''
tapepool = None
# first parse the options
try:
options, _ = getopt.getopt(sys.argv[1:], 'ht:', ['help', 'tapepool='])
except Exception as e:
print(e)
usage(1)
for f, v in options:
if f == '-h' or f == '--help':
usage(0)
elif f == '-t' or f == '--tapepool':
tapepool = v
else:
print 'Unknown option: ' + f
usage(1)
# deal with arguments
if not tapepool:
print 'Missing argument(s)\n'
usage(1)
try:
# connect to the Nameserver and execute the complete_export procedure on a separate thread, to be able to babysit it
# work out the tape pool being dealt with
nsconn = castor_tools.connectToNS()
nscur = nsconn.cursor()
querylog = '''
SELECT tapepool, timestamp, message FROM (
SELECT * FROM CTAMigrationLog
ORDER BY timestamp DESC)
WHERE ROWNUM <= 1
'''
try:
nscur.execute(querylog)
rows = nscur.fetchall()
if rows:
tapepool, t, msg = rows[0]
if 'Export from CASTOR fully completed' in msg:
raise ValueError
else:
raise ValueError
except ValueError as e:
print datetime.now().isoformat().split('.')[0], ' No ongoing files export to CTA, nothing to do'
sys.exit(0)
# now reconnect to the Nameserver and execute the complete_export procedure on a separate thread,
# to be able to babysit it
nsconn_async = castor_tools.connectToNS()
runner = Thread(target=async_complete_export, args=[nsconn_async])
runner.start()
# at the same time, connect again to the Nameserver for monitoring the process
sleep(1)
nsconn = castor_tools.connectToNS()
nscur = nsconn.cursor()
# poll the NS database for logs about the ongoing execution
querylog = '''
SELECT timestamp, message FROM CTAMigrationLog
WHERE tapepool = :tapepool AND timestamp > :t ORDER BY timestamp ASC
'''
# poll the NS database for logs about the ongoing execution
t = time() - 24*3600
nscur = nsconn.cursor()
lastprinttime = time()
while True:
nscur.execute(querylog, tapepool=tapepool, t=t)
......
......@@ -52,7 +52,7 @@ def async_import(conn, tapepool, vo, eosctainstance, dryrun):
cur = conn.cursor()
cur.execute('ALTER SESSION ENABLE PARALLEL DML')
cur = conn.cursor()
cur.execute('BEGIN importFromCASTOR(:tapepool, :vo, :eosctainstance, :dryrun); END;', \
cur.execute('BEGIN importFromCASTOR(:tapepool, :vo, :eosctainstance, 0, :dryrun); END;', \
tapepool=tapepool, vo=vo, eosctainstance=eosctainstance, dryrun=dryrun)
......
#!/usr/bin/python
#/******************************************************************************
# * zerolen_castor_to_cta.py
# *
# * This file is part of the Castor/CTA project.
# * See http://cern.ch/castor and http://cern.ch/eoscta
# * Copyright (C) 2019 CERN
# *
# * This program is free software; you can redistribute it and/or
# * modify it under the terms of the GNU General Public License
# * as published by the Free Software Foundation; either version 2
# * of the License, or (at your option) any later version.
# * This program is distributed in the hope that it will be useful,
# * but WITHOUT ANY WARRANTY; without even the implied warranty of
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# * GNU General Public License for more details.
# * You should have received a copy of the GNU General Public License
# * along with this program; if not, write to the Free Software
# * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
# *
# * @author Castor Dev team, castor-dev@cern.ch
# *****************************************************************************/
'''Command line tool for the actual copy of zero length file's metadata from CASTOR to CTA'''
import sys
import getopt
from time import sleep, time
from datetime import datetime
from threading import Thread
import castor_tools
def usage(exitcode):
'''prints usage'''
print __doc__
print 'Usage : ' + sys.argv[0] + ' [-h|--help] -v|--vo <VO> -i|--eosctainstance <eosctainstance> --dryrun|--doit'
sys.exit(exitcode)
def connectToCTA():
'''Connects to the CTA catalogue database, cf. castor_tools'''
user, passwd, dbname = castor_tools.getNSDBConnectParam('CTACONFIG')
return castor_tools.connectToDB(user, passwd, dbname, '0.0', enforceCheck=False)
def async_import(conn, vo, eosctainstance, dryrun):
'''helper function to execute the PL/SQL procedure in a separate thread'''
cur = conn.cursor()
cur.execute('ALTER SESSION ENABLE PARALLEL QUERY')
cur = conn.cursor()
cur.execute('ALTER SESSION ENABLE PARALLEL DML')
cur = conn.cursor()
cur.execute('BEGIN importFromCASTOR(NULL, :vo, :eosctainstance, 1, :dryrun); END;', \
vo=vo, eosctainstance=eosctainstance, dryrun=dryrun)
def run():
'''main code'''
success = False
dryrun = None
vo = None
eosctaInstance = None
# first parse the options
try:
options, _ = getopt.getopt(sys.argv[1:], 'hv:i:d', ['help', 'vo=', 'eosctainstance=', 'dryrun', 'doit'])
except Exception, e:
print e
usage(1)
for f, v in options:
if f == '-h' or f == '--help':
usage(0)
elif f == '-v' or f == '--vo':
vo = v
elif f == '-i' or f == '--eosctainstance':
eosctaInstance = v
elif f == '--dryrun':
dryrun = 1
elif f == '--doit':
dryrun = 0
else:
print "unknown option : " + f
usage(1)
# deal with arguments
if not vo or not eosctaInstance or dryrun == None:
print 'Missing argument(s). Either --dryrun or --doit is mandatory.\n'
usage(1)
try:
# connect to CTA and execute the import on a separate thread, to be able to babysit it
ctaconn = connectToCTA()
runner = Thread(target=async_import, args=(ctaconn, vo, eosctaInstance, dryrun))
runner.start()
# at the same time, connect to nameserver for monitoring the import process
sleep(1)
nsconn = castor_tools.connectToNS()
nscur = nsconn.cursor()
querylog = '''
SELECT timestamp, message FROM CTAMigrationLog
WHERE tapepool = upper(:vo) || '::*' AND timestamp > :t ORDER BY timestamp ASC
'''
# poll the NS database for logs about the ongoing migration
t = time() - 12*3600
while True:
nscur.execute(querylog, vo=vo, t=t)
rows = nscur.fetchall()
for r in rows:
print datetime.fromtimestamp(int(r[0])).isoformat(), ' ', r[1]
if not rows:
# keep printing something when no news
print datetime.now().isoformat().split('.')[0], ' .'
elif 'CASTOR metadata import completed successfully' in rows[-1][1]:
# export is over, terminate
success = True
break
if rows:
t = rows[-1][0]
# exit also in case of premature termination
if not runner.isAlive():
break
sleep(60)
# that ought to be immediate now
runner.join()
# close DB connections
castor_tools.disconnectDB(ctaconn)
castor_tools.disconnectDB(nsconn)
# goodbye
if success:
print datetime.now().isoformat().split('.')[0], \
' Please now inject the metadata to the EOS namespace'
else:
sys.exit(-1)
except Exception, e:
print e
import traceback
traceback.print_exc()
sys.exit(-1)
if __name__ == '__main__':
run()
......@@ -33,6 +33,7 @@ CREATE OR REPLACE SYNONYM CNS_Class_Metadata FOR &castornsSchema..Cns_class_meta
CREATE OR REPLACE SYNONYM CNS_Dirs_Full_Path FOR &castornsSchema..Dirs_Full_Path;
CREATE OR REPLACE SYNONYM CNS_filesForCTAExport FOR &castornsSchema..filesForCTAExport;
CREATE OR REPLACE SYNONYM CNS_zeroByteFilesForCTAExport FOR &castornsSchema..zeroByteFilesForCTAExport;
CREATE OR REPLACE SYNONYM CNS_dirsForCTAExport FOR &castornsSchema..dirsForCTAExport;
CREATE OR REPLACE SYNONYM CNS_ctaLog FOR &castornsSchema..ctaLog;
......@@ -279,9 +280,10 @@ END;
-- Entry point to import metadata from the CASTOR namespace
CREATE OR REPLACE PROCEDURE importFromCASTOR(inTapePool VARCHAR2, inVO VARCHAR2, inEOSCTAInstance VARCHAR2, inDryRun INTEGER) AS
CREATE OR REPLACE PROCEDURE importFromCASTOR(inTapePool VARCHAR2, inVO VARCHAR2, inEOSCTAInstance VARCHAR2, inZeroBytes IN INTEGER, inDryRun INTEGER) AS
nbFiles INTEGER;
var2ndTapePool VARCHAR2(100);
varTapePool VARCHAR2(100) := inTapePool;
var2ndTapePool VARCHAR2(100) := NULL;
BEGIN
-- First check if there's anything already ongoing and fail early:
SELECT COUNT(*) INTO nbFiles FROM CNS_CTAFilesHelper;
......@@ -289,31 +291,41 @@ BEGIN
raise_application_error(-20000, 'Another export of ' || nbFiles || ' files to CTA is ongoing, ' ||
'please terminate it with complete_cta_export.py before starting a new one.');
END IF;
IF inZeroBytes = 1 THEN
varTapePool := upper(inVO) || '::*'; -- made-up convention for logging purposes
END IF;
IF inDryRun = 0 THEN
CNS_ctaLog(inTapePool, 'CASTOR metadata import started');
CNS_ctaLog(varTapePool, 'CASTOR metadata import started');
ELSE
CNS_ctaLog(inTapePool, 'CASTOR metadata import started [dry-run mode]');
CNS_ctaLog(varTapePool, 'CASTOR metadata import started [dry-run mode]');
END IF;
BEGIN
-- extract all relevant files metadata and work out if dual tape copies are to be imported; can raise exceptions
CNS_filesForCTAExport(inTapePool, inVO, inDryRun, var2ndTapePool);
-- extract all directories metadata
CNS_dirsForCTAExport(inTapePool);
-- import tapes; can raise exceptions
importTapePool(inTapePool, inVO);
IF var2ndTapePool IS NOT NULL THEN
importTapePool(var2ndTapePool, inVO);
IF inZeroBytes = 0 THEN
-- extract all relevant files metadata and work out if dual tape copies are to be imported; can raise exceptions
CNS_filesForCTAExport(inTapePool, var2ndTapePool);
-- extract all directories metadata
CNS_dirsForCTAExport(inTapePool);
-- import tapes; can raise exceptions
importTapePool(inTapePool, inVO);
IF var2ndTapePool IS NOT NULL THEN
importTapePool(var2ndTapePool, inVO);
END IF;
ELSE
-- 0-byte files are special as they do not belong to any tape pool
CNS_zeroByteFilesForCTAExport(inVO);
-- extract all directories metadata as above
CNS_dirsForCTAExport(varTapePool);
END IF;
-- import metadata into the CTA catalogue
populateCTAFilesFromCASTOR(inEOSCTAInstance, inTapePool);
populateCTAFilesFromCASTOR(inEOSCTAInstance, varTapePool);
IF inDryRun = 0 THEN
CNS_ctaLog(inTapePool, 'CASTOR metadata import completed successfully');
CNS_ctaLog(varTapePool, 'CASTOR metadata import completed successfully');
ELSE
CNS_ctaLog(inTapePool, 'CASTOR metadata import completed successfully [dry-run mode]');
CNS_ctaLog(varTapePool, 'CASTOR metadata import completed successfully [dry-run mode]');
END IF;
EXCEPTION WHEN OTHERS THEN
-- any error is logged and raised to the caller
CNS_ctaLog(inTapePool, 'Exception caught, terminating import: '|| SQLERRM ||' '|| dbms_utility.format_error_backtrace());
CNS_ctaLog(varTapePool, 'Exception caught, terminating import: '|| SQLERRM ||' '|| dbms_utility.format_error_backtrace());
RAISE;
END;
END;
......