Score:1

Bash script not working in Ubuntu in Windows 10


Is Ubuntu on Windows different from the Unix environment on a Mac? Sorry if this is a basic question; I am new to this. I am trying to run a bash script that creates a .sh script for each of several FASTQ files. It works in the Terminal on macOS, but when I run it on my Windows laptop it ignores my escaping of the #s and just reports that several commands are not found. I have tried using dos2unix and double-checked the line endings with cat -A file.sh, but that hasn't helped.

The code takes all fastq files in a folder and creates a .sh file for SLURM job submission from each file name (needed for my university's computer cluster, where I have to make 100+ job scripts). The macOS version is as follows:

for FILE in *fastq;    #change file type when needed (e.g., fasta, fastq, fastq.gz)
do echo -e \
\#\!/bin/bash \
\\n\#SBATCH --partition=nonpre  \# Partition \(job queue\) \
\\n\#SBATCH --requeue                 \# Return job to the queue if preempted \
\\n\#SBATCH --job-name=samples      \# Assign a short name to your job \
\\n\#SBATCH --nodes=1                 \# Number of nodes you require \
\\n\#SBATCH --ntasks=1                \# Total \# of tasks across all nodes \
\\n\#SBATCH --cpus-per-task=64        \# Cores per task \(\>1 if multithread tasks\) \
\\n\#SBATCH --mem=180000              \# Real memory \(RAM\) required \(MB\) \
\\n\#SBATCH --time=72:00:00           \# Total run time limit \(HH:MM:SS\) \
\\n\#SBATCH --output=slurm.%N.${FILE}.out  \# STDOUT output file \
\\n\#SBATCH --error=slurm.%N.${FILE}.err   \# STDERR output file \(optional\) \
\\n \
\\n\#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION \
\\n \
\\nsacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j \$SLURM_JOBID \#this will get job run stats from SLURM\; use these to help designate memory of future submissions \
> ${FILE}.sh; 
done

When running this on Windows, I get:

Slurm_Generator.sh: line 13: #!/bin/bash: No such file or directory
Slurm_Generator.sh: line 14: \n#SBATCH: command not found
Slurm_Generator.sh: line 16: \n#SBATCH: command not found
Slurm_Generator.sh: line 18: \n#SBATCH: command not found
Slurm_Generator.sh: line 20: \n#SBATCH: command not found
Slurm_Generator.sh: line 22: \n#SBATCH: command not found
Slurm_Generator.sh: line 24: \n#SBATCH: command not found
Slurm_Generator.sh: line 26: \n#SBATCH: command not found
Slurm_Generator.sh: line 28: \n#SBATCH: command not found
Slurm_Generator.sh: line 30: \n#SBATCH: command not found
Slurm_Generator.sh: line 32: \n#SBATCH: command not found
Slurm_Generator.sh: line 35: \n: command not found
Slurm_Generator.sh: line 36: \n#ADD: command not found
Slurm_Generator.sh: line 39: \n: command not found
Slurm_Generator.sh: line 40: \nsacct: command not found

Any help would be appreciated, and some explanation on what the difference is between Ubuntu on Windows vs. the Terminal on Mac. I've tried researching this but I keep finding suggested code with no explanation or it isn't particularly my issue. Thank you!

Edit: I tried running chmod +x script.sh, but I still get the above errors. Am I using 'echo' wrong? Even running 'for FILE in *fastq; do echo -e hello; done' says: Command 'hello' not found

Edit: Running 'bash file.sh' yields the following (for each of the 5 .fastq files in my directory):

Slurm_Generator.sh: line 8: #!/bin/bash: No such file or directory
Slurm_Generator.sh: line 9: \n#SBATCH: command not found
Slurm_Generator.sh: line 11: \n#SBATCH: command not found
Slurm_Generator.sh: line 13: \n#SBATCH: command not found
Slurm_Generator.sh: line 15: \n#SBATCH: command not found
Slurm_Generator.sh: line 17: \n#SBATCH: command not found
Slurm_Generator.sh: line 19: \n#SBATCH: command not found
Slurm_Generator.sh: line 21: \n#SBATCH: command not found
Slurm_Generator.sh: line 23: \n#SBATCH: command not found
Slurm_Generator.sh: line 25: \n#SBATCH: command not found
Slurm_Generator.sh: line 27: \n#SBATCH: command not found
Slurm_Generator.sh: line 30: \n: command not found
Slurm_Generator.sh: line 31: \n#ADD: command not found
Slurm_Generator.sh: line 34: \n: command not found
Slurm_Generator.sh: line 35: \nsacct: command not found

If I run cat -A file.sh I see a $ at the end of each line. Even if I get rid of these, I get the same result as above. Running ls -al script.sh gives: -rwxrwxrwx 1 cerberus cerberus 1209 Jun 16 00:17 Slurm_Generator.sh

Edit: I changed my script to:

#! /bin/bash
for FILE in *fastq;    #change file type when needed (e.g., fasta, fastq, fastq.gz)
do echo -e \
"
#\!/bin/bash
#SBATCH --partition=nonpre  # Partition (job queue)
#SBATCH --requeue              # Return job to the queue if preempted
#SBATCH --job-name=samples     # Assign a short name to your job
#SBATCH --nodes=1              # Number of nodes you require
#SBATCH --ntasks=1             # Total # of tasks across all nodes
#SBATCH --cpus-per-task=64 # Cores per task (>1 if multithread tasks)
#SBATCH --mem=180000           # Real memory (RAM) required (MB)
#SBATCH --time=72:00:00        # Total run time limit (HH:MM:SS)
#SBATCH --output=slurm.%N.${FILE}.out # STDOUT output file
#SBATCH --error=slurm.%N.${FILE}.err  # STDERR output file (optional)

#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION \

sacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j \$SLURM_JOBID \#this will get job run stats from SLURM\; use these to help designate memory of future submissions" \
> ${FILE}.sh;
done

My new output is the following (which is much better):

#\!/bin/bash
#SBATCH --partition=nonpre  # Partition (job queue)
#SBATCH --requeue           # Return job to the queue if preempted
#SBATCH --job-name=samples      # Assign a short name to your job
#SBATCH --nodes=1                 # Number of nodes you require
#SBATCH --ntasks=1                # Total # of tasks across all nodes
#SBATCH --cpus-per-task=64 # Cores per task (>1 if multithread tasks)
#SBATCH --mem=180000              # Real memory (RAM) required (MB)
#SBATCH --time=72:00:00           # Total run time limit (HH:MM:SS)
#SBATCH --output=slurm.%N.Sample1.fastq.out  # STDOUT output file
#SBATCH --error=slurm.%N.Sample1.fastq.er  # STDERR output file (optional)


#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION

sacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j $SLURM_JOBID \#this will get job run stats from SLURM\; use these to help designate memory of future submissions

The only issue I am running into is that while it prints this out for each .fastq file, which is what I want, the resulting .sh files that it writes out are blank. So it is not recognizing the > ${FILE}.sh part of the script.

Thank you everyone!

Karlom:
Maybe `/bin/bash` is not in your Ubuntu terminal's path. You can check this with the `env` command. Try adding `#! /bin/bash` at the very beginning of the script, save it, and execute again.
waltinator:
Always paste your script into `https://shellcheck.net`, a syntax checker, or install `shellcheck` locally. Make using `shellcheck` part of your development process.
Justin:
@Karlom Unfortunately that didn't work. I tried `#! /bin/bash` as well as `#! /bin/sh`, and I am able to call `bash` from my path.
Karlom:
How are you trying to run the script? What error do you get? Perhaps you don't have permission to execute that particular file? Try `chmod +x file.sh`; if that works, run `bash file.sh`, and if you still get an error, post it in your question.
Justin:
@Karlom I tried, but it yielded the same result. Am I running echo wrong? Is there a different syntax in Ubuntu for Windows compared to a Mac? I have tried echoing the text with and without quotes, but it just treats whatever comes after `echo` as a command instead of text.
Karlom:
@Justin, Ubuntu as a virtual machine on Windows is no different from standalone Ubuntu. To see what is wrong, open an Ubuntu shell, go to the directory in which `file.sh` is located, run `bash file.sh`, and copy the error that you receive here. It also helps to post the output of `ls -al file.sh` from that same directory.
Justin:
@Karlom I just posted the results of all of that; apologies if any of this is info I should have already given!
Karlom:
The script says `SBATCH: command not found`. So apparently you are trying to run a Windows script on Linux and it cannot find that command. Please note that running `dos2unix` does not automatically make your Windows script executable on Linux.
Justin:
Yes, but SBATCH should be commented out, shouldn't it? Using `echo -e` would make `\\n` produce new lines and `\#` produce the literal `#` that comments them out, right? Or is my syntax wrong? It works fine in a Mac Terminal. I just can't figure out where my script is wrong.
Nate T:
@Karlom You can also check with `which bash`. In Ubuntu, WSL or not, that should be sufficient, I would think.
Score:1

Use a heredoc instead:

for FILE in *fastq; #change file type when needed (e.g., fasta, fastq, fastq.gz)
do

cat <<-EOF > "${FILE}.sh"
#!/bin/bash

#SBATCH --partition=nonpre             # Partition (job queue)
#SBATCH --requeue                      # Return job to the queue if preempted
#SBATCH --job-name=samples             # Assign a short name to your job
#SBATCH --nodes=1                      # Number of nodes you require
#SBATCH --ntasks=1                     # Total # of tasks across all nodes
#SBATCH --cpus-per-task=64             # Cores per task (>1 if multithread tasks)
#SBATCH --mem=180000                   # Real memory (RAM) required (MB)
#SBATCH --time=72:00:00                # Total run time limit (HH:MM:SS)
#SBATCH --output=slurm.%N.${FILE}.out  # STDOUT output file
#SBATCH --error=slurm.%N.${FILE}.err   # STDERR output file (optional)

#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION

sacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j \$SLURM_JOBID  #this will get job run stats from SLURM; use these to help designate memory of future submissions
EOF

done
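As a quick sanity check of the heredoc approach, the sketch below (assuming a scratch directory and dummy `.fastq` placeholder files; only two SBATCH lines are kept for brevity) generates the scripts and verifies that each one starts with a clean shebang rather than a literal `\n`:

```shell
#!/bin/bash
# Create a scratch directory with dummy FASTQ files to exercise the loop.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
touch Sample1.fastq Sample2.fastq

for FILE in *fastq; do
cat <<-EOF > "${FILE}.sh"
#!/bin/bash
#SBATCH --job-name=${FILE}
EOF
done

# The unquoted EOF delimiter lets ${FILE} expand inside the heredoc.
head -n 1 Sample1.fastq.sh   # prints: #!/bin/bash
```

Because the heredoc body is written literally, none of the `\#` / `\\n` escaping from the `echo -e` version is needed.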
Score:0

I realized that I had to add a \ (line continuation) at the end of the line before > ${FILE}.sh. So the answer is:

#! /bin/bash
for FILE in *fastq;    #change file type when needed (e.g., fasta, fastq, fastq.gz)
do echo -e \
"
#\!/bin/bash
#SBATCH --partition=nonpre  # Partition (job queue)
#SBATCH --requeue              # Return job to the queue if preempted
#SBATCH --job-name=samples     # Assign a short name to your job
#SBATCH --nodes=1              # Number of nodes you require
#SBATCH --ntasks=1             # Total # of tasks across all nodes
#SBATCH --cpus-per-task=64 # Cores per task (>1 if multithread tasks)
#SBATCH --mem=180000           # Real memory (RAM) required (MB)
#SBATCH --time=72:00:00        # Total run time limit (HH:MM:SS)
#SBATCH --output=slurm.%N.${FILE}.out # STDOUT output file
#SBATCH --error=slurm.%N.${FILE}.err  # STDERR output file (optional)

#ADD WHATEVER CODE YOU WANT HERE AS YOUR SLURM JOB SUBMISSION \

sacct --format=JobID,JobName,NTasks,NNodes,NCPUS,MaxRSS,AveRSS,AveCPU,Elapsed,ExitCode -j \$SLURM_JOBID \#this will get job run stats from SLURM\; use these to help designate memory of future submissions" \
> ${FILE}.sh;
done
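For anyone wondering why the files were blank before that fix: without a line continuation, the newline after the closing quote ends the echo command, so `> ${FILE}.sh` on the next line is parsed as a separate, empty command whose only effect is to create an empty file, while echo's output goes to the screen. A minimal illustration (the demo file names are arbitrary):

```shell
#!/bin/bash
# Without the continuation, the redirection belongs to an empty command:
echo "hello"
> demo1.txt        # creates an EMPTY demo1.txt; "hello" went to the screen

# With the trailing backslash, the redirection applies to echo:
echo "hello" \
> demo2.txt

cat demo2.txt      # prints: hello
```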
NotTheDr01ds:
Good to hear you got it resolved. Please remember to "Accept" your answer so that this question gets marked as resolved. Unresolved questions get auto-bumped months or years later in an effort to try to solicit answers. Thanks!