Questions tagged as ['aws-cli']
I'd like to run a modify-instance-attribute command like the one below, but run it on multiple instances at once. I'm having a difficult time determining how to do this, as the command seems to accept only a single instance ID at a time. Is there a way to perform a 'lookup' of instance IDs and run the modify command on multiple EC2 instances at once?
Command I'm using to supply instance id ...
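One common sketch (assuming you can select the target instances by a tag — the `Env=staging` filter and the `--no-source-dest-check` attribute below are placeholders for your own) is to let `describe-instances` do the lookup and loop over the resulting IDs, since `modify-instance-attribute` takes one ID per call:

```
# Hypothetical tag filter; adjust to however you identify the instances.
aws ec2 describe-instances \
  --filters "Name=tag:Env,Values=staging" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text \
| tr '\t' '\n' \
| while read -r id; do
    aws ec2 modify-instance-attribute --instance-id "$id" --no-source-dest-check
  done
```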
Is there any way to combine the two commands below so that it lists every function app together with the storage account it uses?
This command gets all the function apps in the subscription:
az functionapp list
This command gets the storage account used by the function app
az functionapp config appsettings list --name <appname> -g <rg> --query "[].{name:name, value:value}[?name=='AzureWeb ...
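A sketch of one way to combine them (assuming the storage account is exposed through the standard `AzureWebJobsStorage` app setting): list the app name and resource group in one pass, then query the setting per app.

```
az functionapp list --query "[].[name,resourceGroup]" -o tsv |
while read -r app rg; do
  storage=$(az functionapp config appsettings list --name "$app" -g "$rg" \
    --query "[?name=='AzureWebJobsStorage'].value" -o tsv)
  echo "$app  $storage"
done
```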
I'm well aware of the aws sts ...
method but that requires a remote call. Is there a way to get my account id/number from local configuration?
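One partial possibility, assuming an SSO-configured profile (where the account ID happens to be stored locally in `~/.aws/config`), is to read it with `aws configure get`; for other credential types a remote call is still the reliable route:

```
# Only works if the profile was set up via `aws configure sso`:
aws configure get sso_account_id --profile my-sso-profile

# Otherwise, the remote call:
aws sts get-caller-identity --query Account --output text
```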
Does Amazon provide an easy way to extract a list of all folders that have files greater than 500 MB from an S3 bucket? I also want to limit the scope to the '/files/ftp_upload/' directories. This is so I can calculate my costs, etc.
I tried this, but it didn't help much:
aws s3 ls s3://YOUR_BUCKET/YOUR_FOLDER/ --recursive --human-readable --summarize
What is the best approach here?
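One sketch: drop `--human-readable` so the third column of the `ls` output stays in bytes, then filter with awk (bucket name is a placeholder):

```
aws s3 ls s3://YOUR_BUCKET/files/ftp_upload/ --recursive \
  | awk '$3 > 500*1024*1024 {printf "%.1f MB\t%s\n", $3/1024/1024, $4}'
```

This prints only the keys over 500 MB; summing column 3 instead would give a cost estimate.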
My goal
I am trying to bring my own /46 IPv6 prefix to Amazon AWS following this documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html#prepare-for-byoip
What I need in a nutshell:
signing "1|aws|123456789012|abcd:efab:cde::/46|20230101|SHA256|RSAPSS" with my private key in such a way that Amazon can verify the signature using my certificate.
What I did so far
So, I've follo ...
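A minimal, self-contained sketch of the signing step: RSA-PSS with SHA-256, base64-encoded and made URL-safe. The key below is a throwaway generated on the fly so the example runs anywhere; in practice you would sign with the private key matching the certificate you registered with AWS.

```shell
# Stand-in key (replace with your real private key file):
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -out /tmp/byoip-key.pem 2>/dev/null

text_message="1|aws|123456789012|abcd:efab:cde::/46|20230101|SHA256|RSAPSS"

# Sign with RSA-PSS + SHA-256, then base64 and substitute the
# characters AWS expects in the signed message.
signed_message=$(printf '%s' "$text_message" \
  | openssl dgst -sha256 -sigopt rsa_padding_mode:pss -sigopt rsa_pss_saltlen:-1 \
      -sign /tmp/byoip-key.pem -keyform PEM \
  | openssl base64 -A | tr -- '+=/' '-_~')

echo "$signed_message"
```

Verifying locally before submitting (reverse the `tr`, base64-decode, and run `openssl dgst -verify` against the public key) is a quick sanity check that the padding options match what AWS will use.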
I want to run Amazon Linux commands as part of a GitLab pipeline, so I'm trying to use a Docker image, amazonlinux:latest, as the runner.
I connected to the Docker container and ran the command below:
yum -yq install aws-cli
This installed the AWS CLI.
Then I configured it:
aws configure set region $AWS_REGION
aws configure set aws_access_key_id $AWS_ACCESS_KEY
aws configure set aws_secret_access_key $AWS_SECRET_KEY
aws con ...

We have a hub-and-spoke model in our AWS environment.
We are allowed to run AWS CLI commands from our hub instances against all other instances.
This includes Stop/Start, so we would like to restrict stop/start activity to only one instance at a time.
In my scenario, I have some old EBS volumes that are not encrypted. To satisfy new corporate security measures, all data needs to be encrypted, so I need to compile a plan to encrypt the unencrypted volumes in the least disruptive way (ideally with no downtime).
Can anyone suggest the best way to accomplish this?
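The usual path is snapshot, encrypted copy, restore, swap; a sketch with placeholder IDs (the volume swap itself requires a brief stop or detach, so it is low- rather than zero-downtime):

```
# 1. Snapshot the unencrypted volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
  --description "pre-encryption snapshot"

# 2. Copy the snapshot with encryption enabled
aws ec2 copy-snapshot --source-snapshot-id snap-0123456789abcdef0 \
  --source-region us-east-1 --encrypted --kms-key-id alias/aws/ebs

# 3. Create an encrypted volume from the copy, then detach the old
#    volume and attach this one in its place
aws ec2 create-volume --snapshot-id snap-0fedcba9876543210 \
  --availability-zone us-east-1a
```

Enabling EBS encryption by default in the account at least keeps new volumes from adding to the backlog.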
My company has two organizations: one is the primary organization that my user is part of. The other is a new one created for testing purposes.
I need to switch to that organization to create an S3 bucket (and possibly some other resources) so that they will be owned by the other organization and not the one my user is part of. I supposedly have access to do so.
How do I do that?
It would be nice to kno ...
We have enabled shadow copies in Amazon FSx. I want to schedule triggers in the IST timezone, but no commands for this are mentioned in the documentation. Please help with the commands to schedule shadow copies in the IST timezone: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/manage-shadow-cpy.html#shadow-schedules

After uploading a binary secret using something like
aws secretsmanager create-secret --name my-file-secret --secret-binary fileb://mysecret.file
I'm having trouble retrieving the file using the CLI.
How can I do this?
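A sketch of the usual retrieval: `SecretBinary` comes back base64-encoded, so with `--output text` you can decode it straight back into the file:

```
aws secretsmanager get-secret-value --secret-id my-file-secret \
  --query SecretBinary --output text | base64 --decode > mysecret.file
```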
In our scenario, we previously had some AWS keys. The IAM interface shows no usage for them, but the employee has been able to upload resources. Could anyone advise how to check whether the interface is just wrong, or whether they were perhaps not using these credentials?
The Athena queries I tried:
SELECT eventTime, eventName, userIdentity.principalId,eventSource
FROM athena-table
WHERE useridentity.acce ...
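As a quicker cross-check alongside Athena, CloudTrail's `lookup-events` can filter on an access key ID directly (the key and start time below are placeholders):

```
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAIOSFODNN7EXAMPLE \
  --start-time 2023-01-01T00:00:00Z \
  --query "Events[].{time:EventTime,name:EventName,user:Username}"
```

If this also returns nothing, the uploads may have happened under different credentials (e.g. an instance role) rather than the keys in question.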

Is there any way of overriding the working-directory from the task-definition default when starting a task from the CLI? It seems like a fairly easy and potentially important thing, but if it is available it must be quite obscure, as it isn't mentioned in the CLI documentation or UI.
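To my knowledge `containerOverrides` in `run-task` has no working-directory field, but since it does allow overriding the command, one workaround is to `cd` yourself (container name, directory, and entrypoint below are placeholders):

```
aws ecs run-task --cluster my-cluster --task-definition my-task \
  --overrides '{
    "containerOverrides": [{
      "name": "app",
      "command": ["sh", "-c", "cd /desired/workdir && exec ./start.sh"]
    }]
  }'
```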

I have an S3 bucket with millions of files, and I want to download all of them. Since I don't have enough storage, I would like to download them, compress them on the fly and only then save them. How do I do this?
To illustrate what I mean: aws s3 cp --recursive s3://bucket | gzip > file
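`aws s3 cp --recursive` can't write to stdout, but a single-object `cp` can (with `-` as the destination), so one sketch lists the keys and compresses them one at a time (bucket name is a placeholder; each object is gzipped individually rather than into one archive):

```
bucket=my-bucket
aws s3api list-objects-v2 --bucket "$bucket" --query "Contents[].Key" --output text \
| tr '\t' '\n' \
| while read -r key; do
    out="$(echo "$key" | tr '/' '_').gz"
    aws s3 cp "s3://$bucket/$key" - | gzip > "$out"
  done
```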
I am trying to list all the images whose name matches Ansible*.
If I can pull it off, I can use it to clean up the AMIs that are created during patching activity. I am trying it via an SSM Automation document. Below is the code I have:
description: This document is to remove AMI
schemaVersion: '0.3'
assumeRole: '{{ AutomationAssumeRole }}'
mainSteps:
- name: getImageId
action: 'aws:executeAwsApi'
...
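For reference, the equivalent lookup from the CLI (which the `aws:executeAwsApi` step mirrors) is a `describe-images` call with a name filter, assuming the AMIs are owned by your account:

```
aws ec2 describe-images --owners self \
  --filters "Name=name,Values=Ansible*" \
  --query "Images[].{Id:ImageId,Name:Name,Created:CreationDate}" \
  --output table
```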
Is there a way to see what actions the 'g2' IAM user is performing in S3, and which IP(s) they are running from? I have already enabled the logging of S3 actions.
One point I’m still not able to figure out: when I try to find logs in CloudTrail using an AWS access key or a username, in both cases I get "No matches" as the result. But throughout the day that user (g2) interacts w ...

I'm running this command to change the master user password of a DB Cluster on AWS:
aws rds modify-db-cluster --db-cluster-identifier development-db \
--region us-east-2 --master-user-password newpassword --apply-immediately \
--no-cli-pager > /dev/null
When I do this, the status of the cluster changes from available to resetting-master-credentials. Sometimes it'll be in this status fo ...
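If the goal is to block until the reset finishes, a simple polling loop over the cluster status is one sketch (interval is arbitrary):

```
until [ "$(aws rds describe-db-clusters --db-cluster-identifier development-db \
    --region us-east-2 --query "DBClusters[0].Status" --output text)" = "available" ]; do
  echo "still waiting..."
  sleep 15
done
```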
I'm trying to use the AWS cli + session manager plugin to get into a database container to run some migrations, and I am struggling to get it working. I'm trying to use the following command:
aws ecs execute-command --cluster {cluster} --task {task} --container {container} --interactive
--command "/bin/sh"
And the error message it gives me is: aws.exe: error: argument operation: Invalid choic ...
Trying to deploy an app on AWS and this is only one of the hurdles I've had to deal with. I am trying to connect to an Elastic Beanstalk instance and when I attempt to connect with the awsebcli tool I get this error:
ERROR: NotSupportedError - The EB CLI cannot find your SSH key file for keyname "HFA". Your SSH key file must be located in the .ssh folder in your home directory.
I do not have this ke ...
We hadn't specified a target bucket for our AWS access logs, and I believe we set up an infinite loop https://aws.amazon.com/premiumsupport/knowledge-center/s3-server-access-logs-same-bucket leading to the creation of 1.4 million access logs since 2017. How can I bulk delete all of these files from our S3 bucket?
I have used the command line before but am not very comfortable with it. If this is the ...
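A sketch of the bulk delete (bucket and prefix are placeholders — and with `--dryrun` first, since this is irreversible):

```
# Preview what would be removed:
aws s3 rm s3://my-log-bucket/access-logs/ --recursive --dryrun

# Then actually delete:
aws s3 rm s3://my-log-bucket/access-logs/ --recursive
```

For 1.4 million objects an S3 lifecycle expiration rule on the prefix is often less painful than a client-side delete, since S3 does the work server-side.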
I have nginx configuration files which I would like to deploy on my Ubuntu EC2 instance. I create the instance using the AWS CLI:
aws ec2 run-instances --instance-type t2.micro \
--count 1 \
--image-id ami-0f8b8babb98cc66d0 \
--key-name "$key_name" \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$name}]" \
--query "Instances[0].{$instance_fields_selection}" \
...
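One sketch for getting the config onto the instance at boot is a user-data script passed via `--user-data` (the S3 bucket here is a placeholder, and the instance would need an instance profile with read access to it; any reachable source would do):

```
cat > setup.sh <<'EOF'
#!/bin/bash
apt-get update && apt-get install -y nginx awscli
aws s3 cp s3://my-config-bucket/nginx.conf /etc/nginx/nginx.conf
systemctl restart nginx
EOF

aws ec2 run-instances --instance-type t2.micro \
  --image-id ami-0f8b8babb98cc66d0 \
  --key-name "$key_name" \
  --user-data file://setup.sh
```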
I'm not able to install or update packages on my CentOS machine.
Checked with:
vi /etc/sysconfig/network-scripts/ifcfg-lo
ONBOOT=yes
NAME=loopbac
[ec2-user@ip- ~]$ sudo yum update
Loaded plugins: langpacks, priorities, update-motd
amzn2-core ...

Is there a way to show which resources are connected to what other resources in AWS? Basically the kind of information that would allow one to understand/view the current architecture.
There are CLI tools like list-application-dependencies and describe-network-interfaces, but I don't think they provide the information I'm looking for.
For example, say I have an Amazon service like SageMaker, which uses ...
Sometimes I'd like to spin up an instance and quickly run an aws cli command on it, but there is a noticeable delay in installing the AWS CLI. Is there any way to get a "minimal" installation that omits the many files unnecessary in an automated deploy/test, for example all the example files? Or maybe even versions of the CLI that only contain a single command, such as s3 or ec2?
Note that the ...
Previously, I was able to find the auto-assigned IPv6 address of an Amazon EC2 instance with something like the following:
aws ec2 describe-instances --region us-west-2 --instance-id i-09eca7af84e1ef806 \
| jq .Reservations[].Instances[].NetworkInterfaces[].Ipv6Addresses[]
(if you're unfamiliar, jq is a tool to process JSON data; in this case I'm using it to extract the value described in https://doc ...
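For comparison, the same extraction can be done without jq using the CLI's built-in `--query` (JMESPath), which should return the equivalent addresses:

```
aws ec2 describe-instances --region us-west-2 --instance-id i-09eca7af84e1ef806 \
  --query "Reservations[].Instances[].NetworkInterfaces[].Ipv6Addresses[].Ipv6Address" \
  --output text
```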
First, sorry for my bad English. I created a vault in the Glacier service and uploaded an archive into it, and in the output AWS gave me back an archive ID. Then, using the archive ID, I created a job for downloading with this command:
aws glacier initiate-job --account-id - --vault-name <example-vault-name> --job-parameters file://<created-json-file-using-aws-documentation>.json
after th ...
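Once the job exists, the usual follow-up (vault name is a placeholder, and Glacier retrieval jobs can take hours to complete) is to poll `describe-job` and then fetch the archive with `get-job-output`:

```
# Poll until Completed is true:
aws glacier describe-job --account-id - --vault-name example-vault \
  --job-id "$job_id" --query Completed

# Then download the retrieved archive:
aws glacier get-job-output --account-id - --vault-name example-vault \
  --job-id "$job_id" retrieved-archive.file
```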
I have AWS WAF CDK that is working with rules, and now I'm trying to add a rule in WAF with multiple statements, but I'm getting this error:
Resource handler returned message: "Error reason: You have used none or multiple values for a field that requires exactly one value., field: STATEMENT, parameter: Statement (Service: Wafv2, Status Code: 400, Request ID: 6a36bfe2-543c-458a-9571-e929142f5df1, Extende ...

I'm trying to get Ansible EC2 to provision instances that require IMDSv2.
Through aws ec2 run-instances I'm able to do it by adding --metadata-options "HttpEndpoint=enabled,HttpTokens=required" to my command.
But I'm not seeing a matching option in the Ansible EC2 module.
I'm sure I'm missing something basic.
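If I recall correctly, recent versions of the amazon.aws.ec2_instance module expose a `metadata_options` parameter mirroring the CLI flag; a sketch (verify against your installed collection version, and the AMI ID is a placeholder):

```yaml
- amazon.aws.ec2_instance:
    name: imdsv2-host
    instance_type: t3.micro
    image_id: ami-0123456789abcdef0
    metadata_options:
      http_endpoint: enabled
      http_tokens: required
```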
Given the OwnerId field returned from an AWS AMI query such as:
$ aws ec2 describe-images --image-ids ami-015f906ef3e2123c0 --region ap-southeast-2 --query Images[].OwnerId
[
"602401143452"
]
how can I retrieve some information about who the owner actually is? The OwnerId means nothing to me.
I have a number of lambda functions which run my serverless backend. Something somewhere is misbehaving, and I need to bring up/search all the logs from a particular time — from all log groups, not just a single one, or a single stream.
Is there a good way to search across all log groups and all streams?
I have tried the console, but this insists on driving down from log groups (for lambda, these eq ...
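One option that avoids the console entirely is CloudWatch Logs Insights, whose `start-query` accepts several log groups at once (the group names are placeholders; `@log` in the results tells you which group each line came from, and the `date -d` syntax is GNU):

```
query_id=$(aws logs start-query \
  --log-group-names "/aws/lambda/fn-a" "/aws/lambda/fn-b" \
  --start-time "$(date -d '1 hour ago' +%s)" \
  --end-time "$(date +%s)" \
  --query-string 'fields @timestamp, @log, @message | filter @message like /ERROR/' \
  --query queryId --output text)

aws logs get-query-results --query-id "$query_id"
```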