Questions tagged as ['shell']
As part of automation in GitLab CI, I am running a Terraform template and creating a Linux machine.
After that, I need to run a few commands on the remote machine. I am running them using the ssh remote command, but at the end, even if one of them fails, the job is shown as successful.
Please let me know how to set up this kind of environment. Installing a tool as an extra step is also feasible.
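A minimal sketch of one common fix, assuming the commands are sent in a single ssh invocation (host, user and command names below are placeholders): make the remote side stop at the first error, and let ssh's exit status, which mirrors the remote one, fail the CI job.

# .gitlab-ci.yml script step (sketch)
ssh -o StrictHostKeyChecking=no user@"$MACHINE_IP" 'set -e; ./configure.sh && systemctl restart myapp'
# ssh exits with the remote command's status, so GitLab marks the job as failed on error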
I would like to know how I can preserve file access time ("atime") when using "chmod". Sometimes I need to use the code below:
chmod -R 777 /directory
It works fine; however, all the files inside that directory have their access time ("atime") changed to the current time. Do you have any idea?
NOTE: I am using CentOS 8.
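A hedged workaround sketch for GNU coreutils (as shipped with CentOS 8): record each file's atime before changing the mode, then put it back with touch -a. Directories could be handled the same way if needed.

find /directory -type f -exec sh -c '
  for f do
    atime=$(stat -c %X "$f")       # access time as seconds since the epoch
    chmod 777 "$f"
    touch -a -d "@$atime" "$f"     # restore the saved access time
  done
' sh {} +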

I have executed the command below manually in the VM and I am able to generate the SSH keys, but when I try the same script in the automation pipeline from GitHub Actions, I get an error.
Script:
- name: Azure CLI script
  uses: azure/CLI@v1
  with:
    inlineScript: |
      az vm run-command invoke --command-id RunShellScript -g "${{ env.RESOURCEGROUPNAME ...
When I log in to my Proxmox VE 7 host, I'd like to get the email address that I entered when I set up Proxmox during installation. Is that possible?
The idea is to automate certbot initialization non-interactively, and I would rather use the email I entered previously than ask for it again in my script.
To clarify, I wish to get the email that I entered here within a shell script:
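A hedged sketch of one way this might work: the address given at installation is stored on the root@pam user, so querying it through pvesh could recover it (jq is assumed to be installed, and the exact field name is worth checking on your host):

EMAIL=$(pvesh get /access/users/root@pam --output-format json | jq -r '.email')
certbot register --non-interactive --agree-tos -m "$EMAIL" --no-eff-email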
I often type | less [ENTER].
I would like to optimize this.
Environment: Ubuntu 20.04.
This needs to work for terminals running in the browser, too.
Any idea how I could enter the above string with less effort?
It would be super cool if the CapsLock key could be used for this, since I don't need that key (and it is easy to reach with ten-finger touch typing).
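A hedged sketch of one readline-based approach, which works in any terminal bash's line editor sees (including browser terminals): bind a key to insert and run the pipeline. F12 is used below purely as an example; making CapsLock send a usable sequence would need an extra keyboard-level remap (setxkbmap/xdotool or similar), which depends on the local setup.

# ~/.inputrc  (assumption: bash with readline; \e[24~ is the typical F12 sequence)
"\e[24~": " | less\n"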
I have the below script in Shell:
read n
for ((i=1; i<=$n; i++))
do
    echo "Connecting to $publicip"
    ssh -i ./key.txt root@$publicip 'hostnamectl set-hostname autotest$i.domain.com && mv /etc/letsencrypt/live/autotest.domain.com /etc/letsencrypt/live/autotest$i.domain.com && reboot'
done
The mv command makes use of a variable from the commands above, but it doesn't seem to be working. What ...
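A hedged guess at the cause, for illustration: the remote command is wrapped in single quotes, so the local shell never expands $i and the literal string autotest$i.domain.com reaches the server. Since nothing in this particular command needs to stay unexpanded on the remote side, double quotes let the loop value through:

ssh -i ./key.txt "root@$publicip" \
  "hostnamectl set-hostname autotest$i.domain.com && \
   mv /etc/letsencrypt/live/autotest.domain.com /etc/letsencrypt/live/autotest$i.domain.com && \
   reboot"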

Ubuntu 20.04 LTS.
There is a simple bash script to add a new user via command line in interactive mode:
#!/bin/bash
# Script to add a user to Linux system
if [ "$(id -u)" -eq 0 ]; then
    read -p "Enter username : " username
    read -s -p "Enter password : " password
    egrep "^$username" /etc/passwd >/dev/null
    if [ $? -eq 0 ]; then
        echo "$username exists!"
        exit 1
    else
...

I'm trying to run a script, and bash's export is outputting text when I don't want it to, which breaks up the output. I need to run a script that extracts some information and then inserts it into the next command's environment, kind of like acquiring the AWS secrets for the awscli and transparently passing them into the aws environment. I'm getting inconsistent results and I'm unsure why.
$ ./ ...
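A sketch of the usual shape of this pattern, where every name below is a placeholder: the helper prints only export statements on stdout, sends any human-readable noise to stderr, and the caller evals the stdout so the variables land in its environment without cluttering the visible output.

# get-creds.sh (hypothetical helper)
echo "fetching credentials..." >&2                 # progress messages -> stderr
echo "export AWS_ACCESS_KEY_ID=EXAMPLEKEY"         # machine-readable lines -> stdout
echo "export AWS_SECRET_ACCESS_KEY=EXAMPLESECRET"

# caller
eval "$(./get-creds.sh)"
aws s3 ls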

The following commands run without issues when executed in a shell:
ssh user@machine systemctl status my-service.service
ssh user@machine sudo systemctl stop my-service.service
scp -r ./my-service/* user@machine:/home/user/my-service
ssh user@machine chmod +x /home/user/my-service/my-service
ssh user@machine sudo systemctl start my-service.service
ssh user@machine sudo systemctl status my-service.servic ...
My system gives me an error saying 'Too many open files'. I investigated this error and it turned out that /usr/bin/uwsgi
had created more than 1020 sockets(?). If it creates more than 1020, I guess, the above error comes up.
So what I am trying to do is run a shell script that monitors the number of open files and, if it exceeds 1000, kills the offending PID to resolve this error at this stage.
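A rough sketch of that kind of watchdog; the way the process is selected and the threshold are assumptions to adapt:

#!/bin/sh
# count open file descriptors of the uwsgi process and kill it past a limit
PID=$(pgrep -o -f /usr/bin/uwsgi)                  # oldest matching process
OPEN=$(ls /proc/"$PID"/fd 2>/dev/null | wc -l)     # one entry per open fd
if [ "${OPEN:-0}" -gt 1000 ]; then
    echo "uwsgi ($PID) has $OPEN open files, killing it" | logger -t fd-watchdog
    kill "$PID"
fi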

I recently took a backup of one of my server's drives to AWS using a backup tool, and now, at the time of restoring it, I found that it takes too long because there are billions of files to restore. I tried to restore it from AWS itself, but my problem is that the backup software created two directories inside my parent directories. I'm looking for a shell script by which I can move the files back into the parent directories ...
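A hedged sketch of the kind of move described, with all paths invented for illustration: pull everything out of the two extra levels the backup tool created and drop it back into the parent directory.

# assumption: the tool restored into /restore/parent/extra1/extra2/
find /restore/parent/extra1/extra2 -mindepth 1 -maxdepth 1 \
     -exec mv -t /restore/parent/ {} +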
I'm trying to dump memory from a process on my Linux machine using GDB, but I'm trying to automate this using a script.
So far I've been using the following commands (example):
$ gdb --pid [pid]
(gdb) dump memory dump_file 0x00621000 0x00622000
Is there a way to do this using only one command that I can implement in a shell script? Or is there a way to perform gdb commands using shell scripts?
Any ...
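A hedged one-liner doing the same thing in gdb's batch mode, which is meant for exactly this kind of scripting (addresses and file name taken from the example above, $PID assumed to hold the process id):

gdb --batch --pid "$PID" -ex 'dump memory dump_file 0x00621000 0x00622000'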
When I run docker images, I get the list of images below, where there are images with multiple tags and also an image with the latest tag value.
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
m1           latest   40febdb010b1   15 minutes ago   479MB
m2           130 ...

I often run programs in parallel via the shell.
generate_some_info () {
    if [ $i -eq 4 ]; then echo "Here is some useful info."; fi
    if [ $i -eq 7 ]; then ls --errorx; fi
}
for i in {1..10}; do
    generate_some_info &
done
The program output I'm trying to track is hard to see because the shell notifies me every time a job starts or stops.
[2] 2052
[3] 2053
[2] - done generate_some_i ...
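A hedged sketch of one way to quiet those messages: launch the background jobs from a subshell, so the interactive shell never owns them and therefore never prints start/finish notifications ("wait" still works inside the subshell).

(
    for i in {1..10}; do
        generate_some_info &
    done
    wait    # block until all ten finish, without "[n] done" messages outside
)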
I'm using macOS 11.6. I've written a small cron job that measures the memory usage of a process and, if that usage exceeds a threshold, pops up a notification on screen. (The point is to remind me when a leaky process gets so big that it's time to restart it.)
All of this works just fine except the memory usage calculated by my technique never matches what iStat Menus reports (mine is always lower) a ...
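For reference, roughly the kind of check described, with the process name and threshold as placeholders; this reads the resident set size via ps, which is only one of several ways macOS accounts for memory and will not necessarily match iStat Menus:

#!/bin/sh
# resident memory (KB) of a hypothetical leaky process
RSS_KB=$(ps -o rss= -p "$(pgrep -xo SomeLeakyApp)" | awk '{print $1}')
if [ "${RSS_KB:-0}" -gt 2000000 ]; then    # roughly 2 GB
    osascript -e 'display notification "Time to restart it" with title "Memory check"'
fi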
I'm trying the simple content filter example: I followed the steps described here: http://www.postfix.org/FILTER_README.html#simple_filter
But at line 24 of the content filter, which "can be a simple shell script like this", you need to specify your own content filter.
My question is:
Is there any full example with a content filter (line 24) that I can work with?
1 #!/bin/sh
2
3 # Simple shell-based fi ...
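A hedged illustration of the sort of command that could go at that line, assuming the surrounding script from the README (where the message has just been saved to in.$$ and EX_UNAVAILABLE is defined): anything that reads the saved message and exits non-zero when the mail should be rejected will do, for example a trivial subject check:

# reject anything whose Subject contains the made-up marker SPAMTEST
grep -qi '^Subject:.*SPAMTEST' in.$$ && {
    echo "Message content rejected"; exit $EX_UNAVAILABLE; }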

I have zsh configured to browse command history with prefix search. For example, when I type ssh and press ↑, only my last ssh commands are being displayed.
However, when I use zsh within a tmux session, it stops working. The shell goes back to ordinary history browsing, like in default sh.
Where should I look for configs that describe this interaction?
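A hedged sketch of a common fix, on the assumption that the culprit is the different $TERM inside tmux making the arrow keys send a different escape sequence: bind both usual Up/Down sequences to the prefix-search widgets in ~/.zshrc.

# ~/.zshrc
autoload -U up-line-or-beginning-search down-line-or-beginning-search
zle -N up-line-or-beginning-search
zle -N down-line-or-beginning-search
bindkey '^[[A' up-line-or-beginning-search     # Up in "normal" mode
bindkey '^[OA' up-line-or-beginning-search     # Up in application mode (often under tmux)
bindkey '^[[B' down-line-or-beginning-search
bindkey '^[OB' down-line-or-beginning-search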
def NAMESPACE = "Dev"
def BODY= sh(
script:'''body=$(cat <<-EOF
{
"name": "${NAMESPACE}",
"type": "regularwebapp"
}
EOF
)
(echo $body)''',
returnStdout: true
).trim()
The above doesn't work; the output is as follows:
{
"name": "",
"type": "regularwebapp"
}
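A hedged sketch of one likely fix: the triple single quotes stop Groovy from interpolating ${NAMESPACE}, and the shell running the sh step has no variable of that name either, so the heredoc expands it to an empty string. Exporting the value into the step's environment (withEnv below; an environment block would also work) leaves the heredoc itself unchanged:

def NAMESPACE = "Dev"
def BODY
withEnv(["NAMESPACE=${NAMESPACE}"]) {   // make it a real shell variable
    BODY = sh(
        script: '''body=$(cat <<-EOF
{
  "name": "${NAMESPACE}",
  "type": "regularwebapp"
}
EOF
)
echo "$body"''',
        returnStdout: true
    ).trim()
}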

I ask for your help to solve my problem because I am stuck.
Let me explain the situation: I want to copy files, whose paths I have in a txt file, into specific subdirectories specified in a second file (I also have a complete csv file including these two columns: name of the subdirectory ($value1) and file path ($value2)).
I was able to automatically create the subdirectories using this command:
xargs mkdir -p & ...
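A hedged sketch of the copy step, with the csv name and the destination root invented for illustration, assuming the two columns are subdirectory,filepath:

# mapping.csv columns: value1 (subdirectory name), value2 (file path)
while IFS=, read -r value1 value2; do
    mkdir -p "/data/$value1"
    cp "$value2" "/data/$value1/"
done < mapping.csv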
I know that it does (often) include the grep process, and I know adding | grep -v grep or grepping on [f]oo instead will prevent it, but my question is more about order of operations, I guess.
For example, in this contrived example, I see several grep processes:
% ps -x | grep login | grep login | grep login | grep login
2475 ?? 0:00.03 /usr/libexec/loginitemregisterd
2115 ttys004 0:0 ...
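For reference, a minimal sketch of the character-class trick mentioned above: the pattern still matches "login" in the ps output, but no longer matches the grep command line that carries the bracketed pattern itself.

ps -x | grep '[l]ogin'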

I have a user that needs to connect to a remote machine M (via ssh) and run one of a fixed set of commands (say N in total).
These commands rely on python, libraries thereof, and privileged access to the network (which machine M has).
Are there default strategies to limit the linux user's shell only to the execution of these N commands, without any possibility of:
- further access to the fs
- reading ...
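One standard pattern, sketched here with every path and command name invented: a forced command in authorized_keys on M that dispatches only the whitelisted commands, with the usual restriction options added.

# ~/.ssh/authorized_keys on M (single line, key abbreviated):
command="/usr/local/bin/dispatch.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... user@client

# /usr/local/bin/dispatch.sh (hypothetical):
#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
    run-report) exec /opt/tools/run_report.py ;;
    sync-data)  exec /opt/tools/sync_data.py ;;
    *)          echo "command not allowed" >&2; exit 1 ;;
esac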
As the title says: how do I know which computers in my network have the local administrator account active? As per a security consultant's request, we have to know, and if possible disable, every local administrator account on every one of the 300+ notebooks/desktops on the network.
Is there a net use or wmi command to address this?
Can it be set to run recursively against every computer on the network?
We hav ...
I'm wondering whether there is an equivalent of the sed command
sed -i -E '/searchString/ s/toreplace/newstring/g' file
in ansible.builtin.replace, i.e. a way to replace only the 'toreplace' string on lines which contain searchString.
The before and after parameters don't help here.
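A hedged approximation with ansible.builtin.replace, using an anchored regexp so only lines containing searchString are touched; note it rewrites a single occurrence per line, unlike the /g flag of the sed command, and the path is a placeholder:

- name: replace 'toreplace' only on lines containing 'searchString'
  ansible.builtin.replace:
    path: /etc/example.conf
    regexp: '^(.*searchString.*)toreplace'
    replace: '\1newstring'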

I created a RHEL7 VM and an OS Policy assignment with a simple config. Here is the YAML I'm using to validate; below is the shell script used in the YAML, for reference:
export num=$(stat --format '%a' /etc/crontab); if [[ "$num" -eq 644 ]]; then exit 100; else exit 101; fi
Since I get the output 644 when I SSH in and try the command manually, I'm checking the same with this script, and after validating, i ...
I used to restart my services via init.d scripts on my Debian servers. I moved to Monit to restart the services, but now I don't have the output of the script when it is restarted. Basically, when the service is restarted, the init script returns:
Service stopping...
Service stopped.
Service starting...
Service started.
I'd like to see this output when I restart with Monit (especially because I have mult ...
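A hedged workaround sketch, since Monit itself discards the start/stop program's stdout: wrap the init script so its output is appended to a log file you can read afterwards (service name, pidfile and log path are all assumptions):

check process myservice with pidfile /var/run/myservice.pid
    start program = "/bin/sh -c '/etc/init.d/myservice start >>/var/log/myservice-monit.log 2>&1'"
    stop  program = "/bin/sh -c '/etc/init.d/myservice stop  >>/var/log/myservice-monit.log 2>&1'"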
I have a freshly provisioned Linode instance with Fedora 34. The only thing I have installed on it is libcgroup. The cgconfig service is starting properly and there are no errors, but the subsystem is not working.
I am getting the following error when I execute the lscgroup command.
[root@localhost ~]# sudo lscgroup
cgroups can't be listed: Cgroup is not mounted
Further on the topic, when I execute lssubsy ...
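A hedged note on one common explanation and workaround: Fedora 34 boots with the unified cgroup v2 hierarchy by default, which the v1-oriented libcgroup tools cannot list; switching the kernel command line back to the legacy hierarchy (and rebooting) is one way to make them work, if staying on cgroup v1 is acceptable.

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot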
I'm writing a simple bash script to shut down Tomcat and, if it doesn't stop gracefully, check whether Tomcat's PID still exists and kill it.
I pass the Tomcat name as a variable to the script, as below. In some instances I pass two or three Tomcat names, which is why I use the for loop below.
./shutdown.sh tomcat1
Content of the shutdown.sh script:
#!/bin/bash
for name in "$@"
do
    bash /opt/$name ...
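A hedged sketch of the graceful-then-forceful pattern being described, with the shutdown path, the match pattern and the wait time all assumed:

for name in "$@"; do
    bash "/opt/$name/bin/shutdown.sh"
    sleep 30                                  # give Tomcat time to stop gracefully
    pid=$(pgrep -f "catalina.*$name")
    if [ -n "$pid" ]; then
        echo "$name is still running (pid $pid), killing it"
        kill -9 "$pid"
    fi
done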
I'm trying to build a web interface for iptables management.
I created a file test.php:
$output = shell_exec('sudo bash /usr/bin/iptables.sh 2>&1');
echo $output;
I gave /usr/bin/iptables.sh NOPASSWD so I can execute the file with sudo through apache without using a password:
sudo iptables -L
sudoers file :
apache ALL=(root) NOPASSWD: /usr/bin/iptables.sh
But I am still getting the error:
We trust you have r ...
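A hedged guess at the mismatch, for illustration only: the sudoers rule whitelists /usr/bin/iptables.sh itself, but the PHP call runs sudo bash /usr/bin/iptables.sh, so the command sudo actually checks is /bin/bash, which the NOPASSWD rule does not cover. Making the script executable and calling it directly keeps the two in line:

chmod +x /usr/bin/iptables.sh
# and in test.php:
# $output = shell_exec('sudo /usr/bin/iptables.sh 2>&1');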

So, for performance reasons, I decided to move 2 TB of small files, randomly written over some time, from a 14 TB SATA HDD to a 4 TB M.2 NVMe SSD, both locally attached.
I've been struggling for two days straight to get reasonable copy performance.
cp gets ~15 MB/s, which gives me an estimate of about two days of nonstop copying; rsync is even worse at ~5 MB/s.
I guess the poor performance is due to a random ...
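One commonly suggested alternative, sketched with made-up mount points: stream the tree through tar so the copy becomes a single pipelined read/write instead of per-file cp/rsync round trips (whether it helps depends on where the bottleneck really is).

(cd /mnt/hdd/data && tar -cf - .) | (cd /mnt/nvme/data && tar -xf -)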

In most languages, like Python, Java, or Swift, we can run functions/methods asynchronously and, based on the outcome (success or failure), implement and run different callbacks.
How can we accomplish similar things in bash/shell scripting?
Let's say I have to make a REST API call via curl, or delete some files, and only then proceed to the next step. How can I achieve this?
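A minimal sketch of the usual bash pattern, with the URL and handler names as placeholders: run the long task in the background, let wait hand back its exit status, then branch into success/failure handlers.

on_success() { echo "API call succeeded, continuing..."; }
on_failure() { echo "API call failed"; exit 1; }

curl -fsS https://example.com/api/resource -o /tmp/response.json &
curl_pid=$!

# ...other work can happen here while curl runs...

if wait "$curl_pid"; then
    on_success
else
    on_failure
fi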