Wrapper Creator

Introduction

A wrapper creator is a B-Fabric entity that contains the configuration of an executable with the WRAPPERCREATOR context. It acts as a bridge between the wrapper creator executable and the application. Depending on its configuration, the wrappers can be created in different ways. The examples below show how this concept is used to keep existing external applications compatible.

Registration

Navigate to Admin / Application Data / Wrapper Creators and click on "Create Wrapper Creator". Select the submitter executable, adjust the parameter settings and click on "Save".

"Resources Batch"

The application executable entity contains the absolute path to the executable file on the file system in the "program" attribute. Exactly one input file is specified. The application executable itself copies the input file to the scratch space and copies the output file to the storage.
Example Application: Peakplot - Label tandem mass specs
Example Wrapper 'Resources Batch'
#!/bin/bash -x

CONTACT="support@fgcz.uzh.ch"

declare -a ALLTEMPFILES
trap cleanTemp EXIT

# B-Fabric web service tools
WSTOOLSSET_DIRPATH="/home/bfabric/sgeworker_test/fgcz-bfabric-demo/webservicetools"
export PATH="$WSTOOLSSET_DIRPATH:$PATH"

SCRATCHSPACE="/scratch"

# job context
EXTERNALJOBID=3254
WORKUNITID=105789
STORAGEID=1
PROJECTID=474
CREATEDBY="amarko"
INPUT_RESOURCEID=81605
RESOURCE_NAME="F129304"

# input resource
INPUT_PROTOCOL="scp"
INPUT_HOST="fgcz-c-064.fgcz-net.unizh.ch"
INPUT_FILEPATH="/usr/local/mascot/data/20101203/F129304.dat"
INPUT="$INPUT_PROTOCOL://$INPUT_HOST/$INPUT_FILEPATH"

# output resource
OUTPUT_PROTOCOL="scp"
OUTPUT_HOST="fgcz-data.uzh.ch"
OUTPUT_BASEPATH="/home/bfabric/repos/"
OUTPUT_RELATIVEPATH="p474/Proteomics/Peakplot___Label_tandem_mass_specs/workunit_105789"
OUTPUT_DIRPATH="$OUTPUT_BASEPATH/$OUTPUT_RELATIVEPATH"
OUTPUT_FILENAME="F129304.zip"
OUTPUT_FILEPATH="$OUTPUT_DIRPATH/$OUTPUT_FILENAME"
OUTPUT="$OUTPUT_PROTOCOL://$OUTPUT_HOST/$OUTPUT_FILEPATH"

ORIGINAL_EXECUTABLE="/home/bfabric/sgeworker_test/bin/fgcz_sge_peakplot_ng"
EXECUTABLE="$SCRATCHSPACE/executable-$WORKUNITID-$EXTERNALJOBID"

# register a file for removal on exit
addTemp() {
  local TEMPFILE="$1"
  test -e "$TEMPFILE" && ALLTEMPFILES[${#ALLTEMPFILES[*]}]=$TEMPFILE
}

# remove all registered temporary files
cleanTemp() {
  if [[ ${#ALLTEMPFILES[*]} -gt 0 ]]; then
    for TEMPFILE in "${ALLTEMPFILES[@]}"
    do
      test -e "$TEMPFILE" && rm -rf $TEMPFILE
    done
  fi
}

# report the failure to B-Fabric and abort
die() {
  if [ -n "$JOB_ID" ]; then
    externaljobsave --id $EXTERNALJOBID \
      --status "failed" --logthis "$1"
  fi
  local myDate=$( date "+%Y%m%d-%H%M%S")
  local myHost=$( hostname )
  echo -n "[$myDate @ $myHost] Error: "
  echo $*
  echo "Contact: $CONTACT"
  echo "exit 1"
  exit 1
}

logmessage() {
  local myDate=$( date "+%Y%m%d-%H%M%S" )
  local myHost=$( hostname )
  echo -n "[$myDate @ $myHost] Log: "
  echo $*
}

logmessage "BEGIN COMPUTENODE INFO"
logmessage $( date )
logmessage $( hostname )
logmessage $( netstat -i -e | grep -w inet | grep -v 127.0.0.1 | head -n1 | awk '{print $2}' | cut -d'/' -f1 | cut -d':' -f2 )
logmessage $( uptime )
logmessage "BASH_VERSION: $BASH_VERSION"
logmessage "END COMPUTENODE INFO"

# sanity checks
test -e "$SCRATCHSPACE" || die "$SCRATCHSPACE does not exist!"
test -w "$SCRATCHSPACE" || die "$SCRATCHSPACE is not writable!"

if [ -n "$JOB_ID" ]; then
  which externaljobsave || die "externaljobsave not available!"
  which resourcesave || die "resourcesave not available!"
  which curl || die "curl not available!"
  which mktemp || die "mktemp not available!"
  which xmlstarlet || die "xmlstarlet not available!"
fi

test -e "$ORIGINAL_EXECUTABLE" || die "$ORIGINAL_EXECUTABLE does not exist!"
test -x "$ORIGINAL_EXECUTABLE" || die "$ORIGINAL_EXECUTABLE can't be executed!"

if [ -n "$JOB_ID" ]; then
  export STATFILE="/tmp/statfile-$WORKUNITID-$EXTERNALJOBID"
  touch $STATFILE
  addTemp $STATFILE
fi

# run a private copy of the application executable
cp "$ORIGINAL_EXECUTABLE" "$EXECUTABLE"
addTemp "$EXECUTABLE"
chmod 700 "$EXECUTABLE"

$EXECUTABLE \
  --input="$INPUT" \
  --output="$OUTPUT" \
  --createdby=$CREATEDBY \
  --projectid=$PROJECTID \
  --workunitid=$WORKUNITID
test $? -eq 0 || die "execution failed"

# size and checksum reported by the application executable via $STATFILE
if [ -n "$JOB_ID" ]; then
  FILESIZE="$( cat $STATFILE | grep 'SIZE' | awk '{print $2}' )"
  FILECHECKSUM="$( cat $STATFILE | grep 'MD5SUM' | awk '{print $2}')"
fi

# register the result and close the external job in B-Fabric
if [ -n "$JOB_ID" ]; then
  set -x
  resourcesave \
    --workunitid $WORKUNITID \
    --storageid $STORAGEID \
    --relativepath "$OUTPUT_RELATIVEPATH/$OUTPUT_FILENAME" \
    --inputresourceid $INPUT_RESOURCEID \
    --size $FILESIZE \
    --filechecksum "$FILECHECKSUM" \
    --name "$RESOURCE_NAME" \
    --status "available"
  test $? -eq 0 || die "failed to create resource in B-Fabric"

  set -x
  externaljobsave \
    --id $EXTERNALJOBID \
    --status done \
    --logthis "execution finished successfully"
  test $? -eq 0 || die "failed to update external job in B-Fabric"
fi

exit 0
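
For orientation, the following is a minimal sketch of an application executable that would fit this contract. The command-line options (--input, --output, --createdby, --projectid, --workunitid) and the SIZE/MD5SUM lines read from $STATFILE are taken from the wrapper above; the argument parsing, scratch layout and the placeholder zip step are purely illustrative assumptions, not part of B-Fabric.

#!/bin/bash
# Hypothetical application executable for the "Resources Batch" contract.
# Option names and the STATFILE format follow the wrapper above; the rest
# (scratch layout, the zip "computation") is only an illustration.

set -e

# parse the options passed by the wrapper
for ARG in "$@"; do
  case "$ARG" in
    --input=*)      INPUT="${ARG#--input=}" ;;        # scp://host/path of the single input file
    --output=*)     OUTPUT="${ARG#--output=}" ;;      # scp://host/path where the result must be stored
    --createdby=*)  CREATEDBY="${ARG#--createdby=}" ;;
    --projectid=*)  PROJECTID="${ARG#--projectid=}" ;;
    --workunitid=*) WORKUNITID="${ARG#--workunitid=}" ;;
  esac
done

# split the scp URLs into host and path
INPUT_HOST="$( echo "$INPUT" | sed -e 's|^scp://||' -e 's|/.*||' )"
INPUT_PATH="/${INPUT#scp://$INPUT_HOST/}"
OUTPUT_HOST="$( echo "$OUTPUT" | sed -e 's|^scp://||' -e 's|/.*||' )"
OUTPUT_PATH="/${OUTPUT#scp://$OUTPUT_HOST/}"

WORKDIR="/scratch/app-$WORKUNITID"      # assumed scratch layout
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# the application executable copies the input file to the scratch space itself
scp "$INPUT_HOST:$INPUT_PATH" .

# ... do the actual work; here just a placeholder that zips the input ...
RESULT="$( basename "$OUTPUT_PATH" )"
zip -q "$RESULT" "$( basename "$INPUT_PATH" )"

# ... and copies the output file to the storage itself
ssh "$OUTPUT_HOST" "mkdir -p $( dirname "$OUTPUT_PATH" )"
scp "$RESULT" "$OUTPUT_HOST:$OUTPUT_PATH"

# when run under the grid engine, the wrapper exports STATFILE and later
# greps it for the SIZE and MD5SUM of the created resource
if [ -n "$STATFILE" ]; then
  echo "SIZE $( stat -c %s "$RESULT" )" >> "$STATFILE"
  echo "MD5SUM $( md5sum "$RESULT" | awk '{print $1}' )" >> "$STATFILE"
fi

exit 0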

"Resources Non-Batch"

The application executable entity contains the absolute path to the executable file on the file system in the "program" attribute. One or more input files are specified, passed as a comma-separated list. The application executable itself copies the input files to the scratch space and copies the output file to the storage.
Example Application: Scaffold, mudpit
Example Wrapper 'Resources Non-Batch'
#!/bin/bash -x

CONTACT="support@fgcz.uzh.ch"

declare -a ALLTEMPFILES
trap cleanTemp EXIT

# B-Fabric web service tools
WSTOOLSSET_DIRPATH="/home/bfabric/sgeworker_test/fgcz-bfabric-demo/webservicetools"
export PATH="$WSTOOLSSET_DIRPATH:$PATH"

SCRATCHSPACE="/scratch"

# job context
EXTERNALJOBID=3328
WORKUNITID=105826
STORAGEID=1
PROJECTID=474
CREATEDBY="barkows"
RESOURCE_NAME="Scaffold, no mudPit"

# comma-separated list of input resources
INPUT="scp://fgcz-c-064.fgcz-net.unizh.ch/usr/local/mascot/data/20101202/F129284.dat,scp://fgcz-c-064.fgcz-net.unizh.ch/usr/local/mascot/data/20101202/F129277.dat"

# output resource
OUTPUT_PROTOCOL="scp"
OUTPUT_HOST="fgcz-data.uzh.ch"
OUTPUT_BASEPATH="/home/bfabric/repos/"
OUTPUT_RELATIVEPATH="p474/Proteomics/Scaffold__no_mudPit/workunit_105826"
OUTPUT_DIRPATH="$OUTPUT_BASEPATH/$OUTPUT_RELATIVEPATH"
OUTPUT_FILENAME="Scaffold__no_mudPit.sf3"
OUTPUT_FILEPATH="$OUTPUT_DIRPATH/$OUTPUT_FILENAME"
OUTPUT="$OUTPUT_PROTOCOL://$OUTPUT_HOST/$OUTPUT_FILEPATH"

ORIGINAL_EXECUTABLE="/home/bfabric/sgeworker_test/bin/fgcz_sge_scaffold_no_mudPit_ng"
EXECUTABLE="$SCRATCHSPACE/executable-$WORKUNITID"

# register a file for removal on exit
addTemp() {
  local TEMPFILE="$1"
  test -e "$TEMPFILE" && ALLTEMPFILES[${#ALLTEMPFILES[*]}]=$TEMPFILE
}

# remove all registered temporary files
cleanTemp() {
  if [[ ${#ALLTEMPFILES[*]} -gt 0 ]]; then
    for TEMPFILE in "${ALLTEMPFILES[@]}"
    do
      test -e "$TEMPFILE" && rm -rf $TEMPFILE
    done
  fi
}

# report the failure to B-Fabric and abort
die() {
  if [ -n "$JOB_ID" ]; then
    externaljobsave --id $EXTERNALJOBID \
      --status "failed" --logthis "$1"
  fi
  local myDate=$( date "+%Y%m%d-%H%M%S")
  local myHost=$( hostname )
  echo -n "[$myDate @ $myHost] Error: "
  echo $*
  echo "Contact: $CONTACT"
  echo "exit 1"
  exit 1
}

logmessage() {
  local myDate=$( date "+%Y%m%d-%H%M%S" )
  local myHost=$( hostname )
  echo -n "[$myDate @ $myHost] Log: "
  echo $*
}

logmessage "BEGIN COMPUTENODE INFO"
logmessage $( date )
logmessage $( hostname )
logmessage $( netstat -i -e | grep -w inet | grep -v 127.0.0.1 | head -n1 | awk '{print $2}' | cut -d'/' -f1 | cut -d':' -f2 )
logmessage $( uptime )
logmessage "BASH_VERSION: $BASH_VERSION"
logmessage "END COMPUTENODE INFO"

# sanity checks
test -e "$SCRATCHSPACE" || die "$SCRATCHSPACE does not exist!"
test -w "$SCRATCHSPACE" || die "$SCRATCHSPACE is not writable!"

if [ -n "$JOB_ID" ]; then
  which externaljobsave || die "externaljobsave not available!"
  which resourcesave || die "resourcesave not available!"
  which curl || die "curl not available!"
  which mktemp || die "mktemp not available!"
  which xmlstarlet || die "xmlstarlet not available!"
fi

test -e "$ORIGINAL_EXECUTABLE" || die "$ORIGINAL_EXECUTABLE does not exist!"
test -x "$ORIGINAL_EXECUTABLE" || die "$ORIGINAL_EXECUTABLE can't be executed!"

if [ -n "$JOB_ID" ]; then
  export STATFILE="/tmp/statfile-$WORKUNITID-$EXTERNALJOBID"
  touch $STATFILE
  addTemp $STATFILE
fi

# run a private copy of the application executable
cp "$ORIGINAL_EXECUTABLE" "$EXECUTABLE"
addTemp "$EXECUTABLE"
chmod 700 "$EXECUTABLE"

$EXECUTABLE \
  --input="$INPUT" \
  --output="$OUTPUT" \
  --createdby=$CREATEDBY \
  --projectid=$PROJECTID \
  --workunitid=$WORKUNITID
test $? -eq 0 || die "execution failed"

# size and checksum reported by the application executable via $STATFILE
if [ -n "$JOB_ID" ]; then
  FILESIZE="$( cat $STATFILE | grep 'SIZE' | awk '{print $2}' )"
  FILECHECKSUM="$( cat $STATFILE | grep 'MD5SUM' | awk '{print $2}')"
fi

# register the result and close the external job in B-Fabric
if [ -n "$JOB_ID" ]; then
  set -x
  resourcesave \
    --workunitid $WORKUNITID \
    --storageid $STORAGEID \
    --relativepath "$OUTPUT_RELATIVEPATH/$OUTPUT_FILENAME" \
    --size $FILESIZE \
    --filechecksum "$FILECHECKSUM" \
    --name "$RESOURCE_NAME" \
    --status "available"
  test $? -eq 0 || die "failed to create resource in B-Fabric"

  set -x
  externaljobsave \
    --id $EXTERNALJOBID \
    --status done \
    --logthis "execution finished successfully"
  test $? -eq 0 || die "failed to update external job in B-Fabric"
fi

exit 0
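
The only difference to the batch case is that INPUT may carry several scp URLs separated by commas. A small, hypothetical fragment of an application executable that iterates over such a list could look as follows (variable names are illustrative):

# split the comma-separated $INPUT into individual scp URLs and fetch each one
IFS=',' read -r -a INPUT_URLS <<< "$INPUT"
for URL in "${INPUT_URLS[@]}"; do
  HOST="$( echo "$URL" | sed -e 's|^scp://||' -e 's|/.*||' )"
  FILEPATH="/${URL#scp://$HOST/}"
  scp "$HOST:$FILEPATH" "$WORKDIR/"
done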

"R Server"

The application executable entity contains the name of the R script in the "program" attribute. The config.r and SampleAnnotation.txt configuration files are created according to the selected experiment definition. The application executable copies the input file to the scratch space; the wrapper copies the output file to the storage.
Example Application: Affymetrix QC Report
#!/bin/bash -x

CONTACT="support@fgcz.uzh.ch"

declare -a ALLTEMPFILES
trap cleanTemp EXIT

# B-Fabric web service tools
WSTOOLSSET_DIRPATH="/home/bfabric/sgeworker_test/fgcz-bfabric-demo/webservicetools"
export PATH="$WSTOOLSSET_DIRPATH:$PATH"

SCRATCHSPACE="/scratch/bfabric"

# job context
EXTERNALJOBID=3232
WORKUNITID=105783
STORAGEID=1
PROJECTID=403
CREATEDBY="amarko"
RESOURCE_NAME="tutorial day demo"

REXECUTABLE="/usr/local/ngseq/bin/R"
RCMD="$REXECUTABLE --no-save"
ZIPCMD="zip -r -q"

# output resource
OUTPUT_PROTOCOL="scp"
OUTPUT_HOST="fgcz-data.uzh.ch"
OUTPUT_BASEPATH="/home/bfabric/repos/"
OUTPUT_RELATIVEPATH="p403/Transcriptomics/Affymetrix_QC_Report/workunit_105783"
OUTPUT_DIRPATH="$OUTPUT_BASEPATH/$OUTPUT_RELATIVEPATH"
OUTPUT_FILENAME="tutorial_day_demo.zip"
OUTPUT_FILEPATH="$OUTPUT_DIRPATH/$OUTPUT_FILENAME"
OUTPUT="$OUTPUT_PROTOCOL://$OUTPUT_HOST/$OUTPUT_FILEPATH"

WORKINGDIR="$SCRATCHSPACE/workunit-$WORKUNITID-$EXTERNALJOBID"

# register a file for removal on exit
addTemp() {
  local TEMPFILE="$1"
  test -e "$TEMPFILE" && ALLTEMPFILES[${#ALLTEMPFILES[*]}]=$TEMPFILE
}

# remove all registered temporary files
cleanTemp() {
  if [[ ${#ALLTEMPFILES[*]} -gt 0 ]]; then
    for TEMPFILE in "${ALLTEMPFILES[@]}"
    do
      test -e "$TEMPFILE" && rm -rf $TEMPFILE
    done
  fi
}

# report the failure to B-Fabric and abort
die() {
  if [ -n "$JOB_ID" ]; then
    externaljobsave --id $EXTERNALJOBID \
      --status "failed" --logthis "$1"
  fi
  local myDate=$( date "+%Y%m%d-%H%M%S")
  local myHost=$( hostname )
  echo -n "[$myDate @ $myHost] Error: "
  echo $*
  echo "Contact: $CONTACT"
  echo "exit 1"
  exit 1
}

logmessage() {
  local myDate=$( date "+%Y%m%d-%H%M%S" )
  local myHost=$( hostname )
  echo -n "[$myDate @ $myHost] Log: "
  echo $*
}

# sample annotation generated from the selected experiment definition
printSampleAnnotation() {
cat <<ENDOFSAMPLEANNOTATION
Data Resource Sample Extract Species Treatment
p403/Transcriptomics/Affymetrix/LightStimulus/caquinof_20090312_dark_2_ATH1.CEL dark_2_ dark_2_ Arabidopsis thaliana (thale cress) light deprivation
p403/Transcriptomics/Affymetrix/LightStimulus/caquinof_20090312_sdlg_1_ATH1.CEL sdlg_1_ sdlg_1_ Arabidopsis thaliana (thale cress) light treatment
p403/Transcriptomics/Affymetrix/LightStimulus/caquinof_20090312_sdlg_3_ATH1.CEL sdlg_3_ sdlg_3_ Arabidopsis thaliana (thale cress) light treatment
p403/Transcriptomics/Affymetrix/LightStimulus/caquinof_20090313_dark_1_ATH1.CEL dark_1_ dark_1_ Arabidopsis thaliana (thale cress) light deprivation
p403/Transcriptomics/Affymetrix/LightStimulus/caquinof_20090313_dark_3_ATH1.CEL dark_3_ dark_3_ Arabidopsis thaliana (thale cress) light deprivation
p403/Transcriptomics/Affymetrix/LightStimulus/caquinof_20090313_sdlg_2_ATH1.CEL sdlg_2_ sdlg_2_ Arabidopsis thaliana (thale cress) light treatment
ENDOFSAMPLEANNOTATION
}

# config.r generated from the selected experiment definition
printConfigR(){
cat <<EOFCONFIGR
scriptDir = "/usr/local/ngseq/bfab_scripts"
rScripts = list.files(path=scriptDir, pattern='.*\\\\.r', recursive=TRUE,full.names=TRUE)
for (rs in rScripts){ message(rs); source(rs)}
annoFile = "SampleAnnotation.txt"
config = list()
config[["Name"]] = "Affymetrix QC Report"
config[["Project ID"]] = "403"
config[["Project Name"]] = "Informatics Test Project"
affyQC(annoFile, config=config)
EOFCONFIGR
}

logmessage "BEGIN COMPUTENODE INFO"
logmessage $( date )
logmessage $( hostname )
logmessage $( netstat -i -e | grep -w inet | grep -v 127.0.0.1 | head -n1 | awk '{print $2}' | cut -d'/' -f1 | cut -d':' -f2 )
logmessage $( uptime )
logmessage "BASH_VERSION: $BASH_VERSION"
logmessage "END COMPUTENODE INFO"

# sanity checks
test -e "$SCRATCHSPACE" || die "$SCRATCHSPACE does not exist!"
test -w "$SCRATCHSPACE" || die "$SCRATCHSPACE is not writable!"

if [ -n "$JOB_ID" ]; then
  which externaljobsave || die "externaljobsave not available!"
  which resourcesave || die "resourcesave not available!"
  which curl || die "curl not available!"
  which mktemp || die "mktemp not available!"
  which xmlstarlet || die "xmlstarlet not available!"
fi

set -x
if [ -n "$JOB_ID" ]; then
  echo "checking tool(s) for copying directory contents to the storage host"
  which scp || die "scp not available!"
  which ssh || die "ssh not available!"
  which rsync || die "rsync not available!"
fi

# checking execution tools
test -e "$REXECUTABLE" || die "$REXECUTABLE does not exist!"
test -x "$REXECUTABLE" || die "$REXECUTABLE can't be executed!"
which zip || die "zip not available!"

if [ -n "$JOB_ID" ]; then
  set -x
  STARTING=$( echo "Starting execution on $HOSTNAME; JOB_ID: "$JOB_ID"; JOB_NAME: $JOB_NAME" )
  externaljobsave --id $EXTERNALJOBID --status running --logthis "$STARTING"
  set -x
fi

# create the working directory
mkdir -p "$WORKINGDIR"
test $? -eq 0 || die "failed to create working directory: $WORKINGDIR"
if [ -n "$JOB_ID" ]; then
  addTemp "$WORKINGDIR"
fi

# change into the working dir, print config files and run R
cd "$WORKINGDIR"
printConfigR > config.r
printSampleAnnotation > SampleAnnotation.txt
$RCMD < config.r
test $? -eq 0 || die "R execution failed"
test -r "$WORKINGDIR/00index.html" || die "R processing finished without success!"

# zip the content of the working dir
cd "$WORKINGDIR"
$ZIPCMD "$OUTPUT_FILENAME" .
test $? -eq 0 || die "zip command failed"

# determine the filesize and the file checksum
FILESIZE="$( stat -c %s $OUTPUT_FILENAME )"
FILECHECKSUM="$( md5sum $OUTPUT_FILENAME | awk '{print $1}' )"

# copy the content of the working directory to the storage
if [ -n "$JOB_ID" ]; then
  ssh $OUTPUT_HOST "mkdir -p $OUTPUT_DIRPATH"
  test $? -eq 0 || die "creating workunit directory failed"
  rsync -av -e ssh "$WORKINGDIR/" "$OUTPUT_HOST:$OUTPUT_DIRPATH/"
  test $? -eq 0 || die "rsync failed to copy data to storage"
fi

# update B-Fabric
if [ -n "$JOB_ID" ]; then
  set -x
  resourcesave \
    --workunitid $WORKUNITID \
    --storageid $STORAGEID \
    --relativepath "$OUTPUT_RELATIVEPATH/$OUTPUT_FILENAME" \
    --size $FILESIZE \
    --filechecksum "$FILECHECKSUM" \
    --name "$RESOURCE_NAME" \
    --status "available"
  test $? -eq 0 || die "failed to create resource in B-Fabric"

  set -x
  externaljobsave \
    --id $EXTERNALJOBID \
    --status done \
    --logthis "execution finished successfully"
  test $? -eq 0 || die "failed to update external job in B-Fabric"
fi

exit 0

"program $OPTIONS $INPUT $OUTPUT"

The application executable contains the command or the absolute path in the "program" attribute. The options are composed from parameters: if a parameter is of type "string", "key value" is appended to the variable OPTIONS; for a parameter of type "boolean", only "key" is appended. The order is not guaranteed. The wrapper copies the input file from the storage to the scratch space; INPUT contains the path of the local input file and OUTPUT contains the path of the local output file.
Currently, there is no application which utilizes this wrapper creator.
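
To illustrate the idea, a generated wrapper for this variant would contain a fragment along the following lines. The program name and the two parameters are hypothetical; only the way string and boolean parameters are folded into OPTIONS and the final command form follow the description above:

EXECUTABLE="/usr/local/bin/mytool"   # value of the "program" attribute (hypothetical)

# a string parameter contributes "key value", a boolean parameter that is
# switched on contributes only "key"; the order of the parameters is not guaranteed
OPTIONS="--threshold 0.05 --verbose"

INPUT="$WORKINGDIR/input.dat"        # local copy of the input file made by the wrapper
OUTPUT="$WORKINGDIR/output.dat"      # local file the application is expected to write

$EXECUTABLE $OPTIONS $INPUT $OUTPUT
test $? -eq 0 || die "execution failed"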

"program $OPTIONS $INPUT > $OUTPUT"

The application executable contains the command or the absolute path in the "program" attribute. The options are composed from parameters: if a parameter is of type "string", "key value" is appended to the variable OPTIONS; for a parameter of type "boolean", only "key" is appended. The order is not guaranteed. The wrapper copies the input file from the storage to the scratch space; INPUT contains the local path. The application is assumed to write its output to STDOUT, which is redirected into the file named by the OUTPUT variable.
Example Application: Head Tool
Example Wrapper 'program $OPTIONS $INPUT > $OUTPUT'
#!/bin/bash -x

CONTACT="support@fgcz.uzh.ch"

declare -a ALLTEMPFILES
trap cleanTemp EXIT

# B-Fabric web service tools
WSTOOLSSET_DIRPATH="/home/bfabric/sgeworker_test/fgcz-bfabric-demo/webservicetools"
export PATH="$WSTOOLSSET_DIRPATH:$PATH"

SCRATCHSPACE="/scratch"

# job context
EXTERNALJOBID=3381
WORKUNITID=105851
STORAGEID=1
PROJECTID=474
CREATEDBY="amarko"
INPUT_RESOURCEID=81600
RESOURCE_NAME="F129283"

# input resource
INPUT_PROTOCOL="scp"
INPUT_HOST="fgcz-c-064.fgcz-net.unizh.ch"
INPUT_FILEPATH="/usr/local/mascot/data/20101202/F129283.dat"
INPUT_FILENAME="F129283.dat"
INPUT="$INPUT_PROTOCOL://$INPUT_HOST/$INPUT_FILEPATH"

# output resource
OUTPUT_PROTOCOL="scp"
OUTPUT_HOST="fgcz-data.uzh.ch"
OUTPUT_BASEPATH="/home/bfabric/repos"
OUTPUT_RELATIVEPATH="p474/General/Head_Tool/workunit_105851"
OUTPUT_DIRPATH="$OUTPUT_BASEPATH/$OUTPUT_RELATIVEPATH"
OUTPUT_FILENAME="F129283.zip"
OUTPUT_FILEPATH="$OUTPUT_DIRPATH/$OUTPUT_FILENAME"
OUTPUT="$OUTPUT_PROTOCOL://$OUTPUT_HOST/$OUTPUT_FILEPATH"
OUTPUT_FILENAME_UNZIPPED="F129283.txt"

ZIPCMD="zip -r -q"
EXECUTABLE="/usr/bin/head"
OPTIONS="-n 40 "

WORKINGDIR="$SCRATCHSPACE/workunit-$WORKUNITID-$EXTERNALJOBID"
INPUT_FILEPATH_LOCAL="$WORKINGDIR/$INPUT_FILENAME"
OUTPUT_FILEPATH_LOCAL="$WORKINGDIR/$OUTPUT_FILENAME"
OUTPUT_FILEPATH_LOCAL_UNZIPPED="$WORKINGDIR/$OUTPUT_FILENAME_UNZIPPED"

# register a file for removal on exit
addTemp() {
  local TEMPFILE="$1"
  test -e "$TEMPFILE" && ALLTEMPFILES[${#ALLTEMPFILES[*]}]=$TEMPFILE
}

# remove all registered temporary files
cleanTemp() {
  if [[ ${#ALLTEMPFILES[*]} -gt 0 ]]; then
    for TEMPFILE in "${ALLTEMPFILES[@]}"
    do
      test -e "$TEMPFILE" && rm -rf $TEMPFILE
    done
  fi
}

# report the failure to B-Fabric and abort
die() {
  if [ -n "$JOB_ID" ]; then
    externaljobsave --id $EXTERNALJOBID \
      --status "failed" --logthis "$1"
  fi
  local myDate=$( date "+%Y%m%d-%H%M%S")
  local myHost=$( hostname )
  echo -n "[$myDate @ $myHost] Error: "
  echo $*
  echo "Contact: $CONTACT"
  echo "exit 1"
  exit 1
}

logmessage() {
  local myDate=$( date "+%Y%m%d-%H%M%S" )
  local myHost=$( hostname )
  echo -n "[$myDate @ $myHost] Log: "
  echo $*
}

logmessage "BEGIN COMPUTENODE INFO"
logmessage $( date )
logmessage $( hostname )
logmessage $( netstat -i -e | grep -w inet | grep -v 127.0.0.1 | head -n1 | awk '{print $2}' | cut -d'/' -f1 | cut -d':' -f2 )
logmessage $( uptime )
logmessage "BASH_VERSION: $BASH_VERSION"
logmessage "END COMPUTENODE INFO"

# sanity checks
test -e "$SCRATCHSPACE" || die "$SCRATCHSPACE does not exist!"
test -w "$SCRATCHSPACE" || die "$SCRATCHSPACE is not writable!"

if [ -n "$JOB_ID" ]; then
  which externaljobsave || die "externaljobsave not available!"
  which resourcesave || die "resourcesave not available!"
  which curl || die "curl not available!"
  which mktemp || die "mktemp not available!"
  which xmlstarlet || die "xmlstarlet not available!"
fi

if [ -n "$JOB_ID" ]; then
  set -x
  STARTING=$( echo "Starting execution on $HOSTNAME; JOB_ID: "$JOB_ID"; JOB_NAME: $JOB_NAME" )
  externaljobsave --id $EXTERNALJOBID --status running --logthis "$STARTING"
  set -x
fi

set -x
echo "checking tool(s) for copying input file from the storage host"
which scp || die "scp not available!"
which ssh || die "ssh not available!"

if [ -n "$JOB_ID" ]; then
  echo "checking tool(s) for copying output file to the storage host"
  which scp || die "scp not available!"
  which ssh || die "ssh not available!"
fi

# the wrapper itself copies the input file to the scratch space
mkdir -p "$WORKINGDIR"
test $? -eq 0 || die "unable to create working directory: $WORKINGDIR"
if [ -n "$JOB_ID" ]; then
  addTemp "$WORKINGDIR"
fi

scp -o StrictHostKeyChecking=no -o VerifyHostKeyDNS=no -c arcfour "$INPUT_HOST:$INPUT_FILEPATH" "$WORKINGDIR"
test $? -eq 0 || die "copying input files failed"

# run the program; STDOUT is redirected into the local output file
cd "$WORKINGDIR"
$EXECUTABLE $OPTIONS $INPUT_FILEPATH_LOCAL > $OUTPUT_FILEPATH_LOCAL_UNZIPPED
test $? -eq 0 || die "execution failed"

cd "$WORKINGDIR"
$ZIPCMD $OUTPUT_FILENAME $OUTPUT_FILENAME_UNZIPPED
test $? -eq "0" || die "zip command failed"

# determine the filesize and the file checksum
FILESIZE="$( stat -c %s $OUTPUT_FILENAME )"
FILECHECKSUM="$( md5sum $OUTPUT_FILENAME | awk '{print $1}' )"

# the wrapper copies the output file to the storage
if [ -n "$JOB_ID" ]; then
  ssh $OUTPUT_HOST "mkdir -p $OUTPUT_DIRPATH"
  test $? -eq 0 || die "creating workunit directory failed"
  scp -o StrictHostKeyChecking=no -o VerifyHostKeyDNS=no -c arcfour "$OUTPUT_FILEPATH_LOCAL" "$OUTPUT_HOST:$OUTPUT_DIRPATH"
  test $? -eq 0 || die "scp failed to copy data to storage"
fi

# register the result and close the external job in B-Fabric
if [ -n "$JOB_ID" ]; then
  set -x
  resourcesave \
    --workunitid $WORKUNITID \
    --storageid $STORAGEID \
    --relativepath "$OUTPUT_RELATIVEPATH/$OUTPUT_FILENAME" \
    --inputresourceid $INPUT_RESOURCEID \
    --size $FILESIZE \
    --filechecksum "$FILECHECKSUM" \
    --name "$RESOURCE_NAME" \
    --status "available"
  test $? -eq 0 || die "failed to create resource in B-Fabric"

  set -x
  externaljobsave \
    --id $EXTERNALJOBID \
    --status done \
    --logthis "execution finished successfully"
  test $? -eq 0 || die "failed to update external job in B-Fabric"
fi

exit 0


