I have to give credit for this topic to my loving wife, who is also a software engineer. This post is based on observations she shared during a recent car ride.
There are a number of risks in choosing to use old technology, and make no mistake, using old technologies is almost always a choice. While there are cases where migrating to new tech doesn't make sense or is cost prohibitive, more often the 'cost' accounts only for the monetary cost of executing the transition rather than the cost of continuing to use the old tech.
While there are many risks in continuing to use old tech (downtime, increased licensing/maintenance costs, decreased productivity, security exposure, and such), I'm going to focus specifically on the indirect costs with respect to your engineering team(s).
Take a minute and close your eyes. Now imagine your dream job. Imagine what you're working on and what you're working with. If you are an aspiring journalist, did you imagine penning your masterpiece with a feather quill? If you're an experienced welder, did you imagine using an oxy-acetylene welding unit? If you're a software engineer, did you imagine firing up Windows 95 and Visual Studio 97? No? Really? Why not? Whelp, you're likely in good company. Given the choice, no one chooses to work with old shit. For now, stick a pin in this and we'll come back to it in a bit.
Take another minute and close your eyes. Now imagine your dream team. Are they well-versed in the latest technologies? Are they highly sought after and in demand, or would they have a difficult time finding another job if it ever came to that? Concerning experience, are they primarily junior, primarily nearing the end of their careers, or distributed across various levels of experience?
Let me attempt to knit these two fundamental thoughts together now.
Great products are created by great teams. Great teams consist of a variety of experience levels, from junior to expert-level contributors. Great team members prefer keeping current with the latest technologies, for personal as well as professional reasons. So what happens if your company chooses not to make use of current technologies? Best case, your team keeps current individually in hopes that one day they can make use of that knowledge. Perhaps they get to apply it on a future product; perhaps that happens before a recruiter offers them alternative employment that already uses the tech. Perhaps instead they simply stagnate and are ill-equipped to apply new tech when that someday finally comes.
New tech, old tech; job seekers will find job providers (and vice versa). It's a matter of compromises: the seeker may compromise on the use of old tech, the provider may compromise on the ideal candidate. Junior-level seekers are more willing to compromise on positions early in their careers. Late-stage seekers (those nearing retirement from the profession) may also be more willing to compromise. What's less likely, however, is highly experienced seekers in the prime of their careers compromising on a position that may place their competitive advantages at risk. Simply put, this industry moves so rapidly that great candidates can't afford to work for companies that are stuck in the past.
It's all about balance. This doesn't mean you should chase every shiny new technology, but it also doesn't mean you should continuously reject introducing new techniques. Listen to your team: are they whispering of new technologies and techniques that could apply to your products? Are you listening? If people are leaving, are their new positions utilizing newer technologies?
Obviously, tech isn't the only factor in the choice of employment, but most of the colleagues I've worked with over these past decades hold it in pretty high regard. Please consider it as a factor in establishing your corporate talent pipeline.
Sunday, February 17, 2019
FFMpeg Zoom
I won't embarrass myself by claiming to be an amateur videographer, but I have set up a video camera, pointed it at something worthwhile and hit record. With a high-def camera and a wide-angle lens you can capture life in the making. While cameras offer zoom capabilities, I'm far more likely to lose the subject, so I've made a habit of setting the camera up on a tripod, zooming out to capture the entire scene, and adding digital zoom effects in post-processing. In the age of high-def cameras, why not? I'm less likely to miss the shot and have numerous tries at adding effects afterwards.
Let's grab a video, apply a text target overlay (to make sure we're zooming where we think we are) and then zoom to that location.
$ youtube-dl https://www.youtube.com/watch?v=PJ5xXXcfuTc -o input
Let's slap an 'X' at 560,400 so we can confirm we're zooming to where we expect:
$ ffmpeg -y -i input.mkv -ss 30 -t 15 -vf drawtext="fontfile=/usr/share/fonts/truetype/droid/DroidSans.ttf:text='X':fontcolor=black:fontsize=24:box=1:boxcolor=white@0.5:boxborderw=5:x=560:y=400" -codec:a copy target.mp4
Finally, let's zoom to 560,400:
$ ffmpeg -y -i target.mp4 -vf "scale=iw*2.0:ih*2.0,zoompan=z='min(max(zoom,pzoom)+0.05,5.0)':d=1:x='560*2.0-(560*2.0/zoom)':y='400*2.0-(400*2.0/zoom)'" -an output.mp4
In the above example, we're using the following values for the scalars:
S=2.0 (the scale factor applied to the input before zoompan)
Z=5.0 (the maximum zoom level)
K=0.050 (the zoom increment applied each frame)
Experiment with the scalars to get your desired effect.
Safe intellectual travels, my fair reader.
Sunday, February 10, 2019
Applying Image Overlay to Video
Overlaying an image atop a video is a good way to add content to an informative video, or a means to apply a watermark.
In its simplest form, the command takes the form of:
- specifying two input files, a video file and an image file
- image scaling size
- image overlay location
$ cat go
#!/bin/bash
VidFile=/tmp/foo.mp4
ImgFile=/tmp/image.png
OutVidFile=/tmp/output.mp4
# scale the image (input 1) to 100x100, then overlay it on the video (input 0) at (10,10)
ffmpeg -y -i ${VidFile} -i ${ImgFile} -filter_complex "[1] scale=w=100:h=100 [tmp]; [0][tmp] overlay=x=10:y=10" -an ${OutVidFile}
mplayer ${OutVidFile}
If you want the overlay to fade in and out, the command is slightly more complex: the filter requires a fade-in timestamp and a fade-out timestamp. The following command has the image fade in at the 5 second mark and begin fading out at the 10 second mark:
$ cat go
#!/bin/bash
VidFile=/tmp/foo.mp4
ImgFile=/tmp/image.png
OutVidFile=/tmp/output.mp4
# loop the image so it persists as a video stream, fade it in at 5s and out at 10s, then overlay
ffmpeg -y -i ${VidFile} -loop 1 -i ${ImgFile} -filter_complex "[1:v]fade=t=in:st=5:d=1,fade=t=out:st=10:d=1[over];[0:v][over]overlay=x=10:y=10" -t 20 -an ${OutVidFile}
mplayer ${OutVidFile}
The end result: the image fades in at the 5 second mark and fades back out at 10 seconds.
Sunday, February 3, 2019
FFMpeg Dynamic Adjustment of Filters
FFMpeg has a full array of video and audio filters; specify the right parameters and it produces pure magic. Filter scalars can readily be specified as static parameters or, in some cases, as expressions based on time. But what if you wish to modify filter parameters dynamically, in real-time? When compiled with ZeroMQ (0MQ) support, some filters can be adjusted on the fly by sending filter commands via 0MQ.
The 0MQ support is optional and not enabled in the default build, so it likely requires building FFMpeg from source with ZeroMQ support configured. The build procedure takes the form of a typical autoconf project: configure, make, make install. Refer to my previous post for building on Ubuntu, which includes instructions for adding package dependencies and building with my common feature set: Building FFMpeg. The --enable-libzmq configure flag enables ZeroMQ-based filter commands. It also requires installing the ZeroMQ development libraries before compiling (also found in the instructions).
Not all FFMpeg filters accept ZeroMQ commands; the ones that do are noted in the filter documentation: FFMpeg Filters. Look for 'This filter supports the following commands'.
It's best to start by setting up your command line sequence, then update it to account for ZeroMQ command inputs. The FFMpeg documentation indicates the hue filter supports ZeroMQ commands: http://ffmpeg.org/ffmpeg-filters.html#Commands-14
10.90.2 Commands
This filter supports the following commands:
- b
- s
- h
- H
- Modify the hue and/or the saturation and/or brightness of the input video. The command accepts the same syntax of the corresponding option. If the specified expression is not valid, it is kept at its current value.
The following command applies the hue filter with h=90, s=1 and plays the video after the filter has been applied:
$ ffmpeg -loglevel debug -i /tmp/foo.mp4 -filter_complex "hue=h=90:s=1" -vcodec libx264 -f mpegts - | ffplay -
To apply filter commands via ZeroMQ you need to:
1) know the internal filter name of the pipeline
2) add ZeroMQ input to the filter
3) send the command via the zmqsend tool
We specifically added debug logging to our FFMpeg command so we could learn the name of the internal filter: Parsed_hue_1
[Parsed_hue_1 @ 0x3a1e600] H:0.5*PI h:90.0 s:1.0 b:0 t:11.9 n:357
Let's add ZeroMQ input to our filter; note the slight modification to our previous command:
$ ffmpeg -loglevel debug -i /tmp/foo.mp4 -filter_complex "zmq,hue=h=90:s=1" -vcodec libx264 -f mpegts - | ffplay -
Lastly, re-run the above command and, within a new terminal, send a hue filter parameter update:
$ echo Parsed_hue_1 h 50 | zmqsend
$ echo Parsed_hue_1 s 3 | zmqsend
Whelp, that's about all I've got. While I've looked at ZeroMQ integration with FFMpeg on-and-off over the past few years, I've never found any solid documentation. Hopefully this will help set you on your way. I'll likely post more as I go.
Cheers.