
What ffmpeg command-line options allow separate audio and video outputs, with dynamically selectable devices, for streaming?

  1. Is there a practical way, from the command line on Ubuntu, to separate video and audio for streaming, with the video streamed to a network port (a client media player connecting via udp://, tcp://, or http://) and the audio redirected to varying local hardware devices (as listed by aplay -l)?
    (If served from a single ffmpeg input, all video and audio streams should start synchronized, AFAIK.)
  2. Can the audio output device be changed dynamically, redirecting playback between the connected audio hardware?
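
For question 1, a single ffmpeg process can feed two outputs from one input, which keeps the start synchronized. The following is only a sketch of that idea, not a tested answer; the port 1234 and the ALSA device hw:0,0 are placeholders, and it assumes an ffmpeg build with ALSA output support:

```shell
# One input, two outputs from the same ffmpeg process.
# -map 0:v   selects the video stream -> libx264 -> MPEG-TS over UDP.
# -map 0:a:0 selects the first audio stream -> ffmpeg's alsa output device.
# -re reads the input at its native frame rate (for live-style streaming).
ffmpeg -re -i h264x_mp3.mp4 \
  -map 0:v -c:v libx264 -preset fast -f mpegts udp://localhost:1234 \
  -map 0:a:0 -f alsa hw:0,0
```

A client would then connect with e.g. `mplayer udp://localhost:1234`. Whether the delay is lower than the stdout-pipe approach below would need measuring.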

The ffmpeg suite would be the preferred tool on this installation (other available options are mplayer and mencoder).

The following redirects the audio to a local stdout pipe for ffplay, but the media player reading from the UDP streaming port showed increased delay:

media upstream: ffmpeg -i h264x_mp3.mp4 -preset fast -vcodec libx264 -f mpeg -map 0:v udp://localhost:port -map 0:a:0 -f mp3 - | ffplay -nodisp -i -
downstream client: mplayer udp://localhost:port

This example plays the audio locally and should transfer the video stream over a named pipe, but a media player client reading from the named pipe shows no video output:
media server: mplayer -ao alsa,device=hw=0.0 -vo mpegpes:namedpipe
media client: mplayer -ao none namedpipe
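
Regarding question 2, one possible approach (untested here, and an assumption rather than a confirmed answer): route the audio through PulseAudio instead of raw ALSA, then move the running stream between sinks at runtime with pactl. The sink name and the sink-input index 42 below are placeholders; this assumes an ffmpeg build with libpulse support:

```shell
# Send the audio stream to the default PulseAudio sink; the last argument
# is the stream name shown in PulseAudio, not a filename.
ffmpeg -re -i h264x_mp3.mp4 -map 0:a:0 -f pulse "ffmpeg audio" &

# Inspect running playback streams and available output devices:
pactl list short sink-inputs
pactl list short sinks

# Move the running stream (index 42 here, placeholder) to another sink
# without restarting ffmpeg:
pactl move-sink-input 42 alsa_output.usb_headset.analog-stereo
```

This sidesteps ffmpeg entirely for the switching step: PulseAudio owns the routing, so the device can change while the stream keeps playing.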


