Talk:Motion JPEG
A comment on the 'criticisms' section: the first bullet stresses the existence of MJPEG "compatibility concerns", while in the last bullet, "broad compatibility" is attributed to the format. Would the originator of the text - or an expert in the area - please clarify this contradiction? Previously Used Coffins (talk) 20:07, 4 June 2008 (UTC)
Is anyone else confused by these statements?
"The resulting quality of intraframe video compression is independent from the motion in the image which differs from MPEG video where quality often decreases when footage contains lots of movement."
"Prior to the recent rise in MPEG-4 encoding in consumer devices, a progressive scan form of MJPEG also saw widespread use in e.g. the "movie" modes of Digital Still Cameras, allowing video encoding and playback through the integrated JPEG compression hardware with only a software modification. Again, the resultant quality is markedly reduced compared to MPEG compression at a similar bitrate, particularly as sound (when included) was often uncompressed PCM or low-compression (and low processor-demand) ADPCM."
I read the first one to say that MJPEG can offer better quality than MPEG, particularly when there's a lot of motion. The second statement seems to contradict that. How can it say "Again, the resultant quality is markedly reduced..." when there was nothing stated to that effect prior in the article?
—The preceding unsigned comment was added by SalsaShark42 (talk • contribs) .
- The first statement was misleading. I changed it. It did seem to give the impression that MJPEG provided better quality than MPEG when there's a lot of motion, but that's an incorrect impression. The degree of compression superiority of older MPEG technology (MPEG-1, for example) over JPEG was primarily the result of interframe prediction. If you don't use the interframe prediction, you get about the same compression capability as JPEG. Not worse. Just not better. In a way it's sort of like saying that if you don't compress the video pictures at all and just use PCM instead you get the advantage that the quality doesn't vary as a function of the picture's spatial frequency content. Maybe the compression is not varying, but it's not very good either. Or maybe it's like saying that if you stick your money under your mattress instead of in a savings account then you have the advantage that your return rate (of zero percent) is independent of the market interest rate fluctuations. -Mulligatawny 23:47, 26 September 2006 (UTC)
Can anyone tell me the best way to save Motion JPEG video files as MP4 files? When I play these files they look fine, but when I try to save them they break up and look very soft. —The preceding unsigned comment was added by Photogold (talk • contribs) .
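For what it's worth, one common way to do this sort of conversion is from the command line with ffmpeg. The following is only a sketch, assuming ffmpeg is installed; the file name clip.avi stands in for a Motion JPEG recording, and the -crf value and AAC audio choice are illustrative rather than taken from the question above. A lower -crf value keeps more detail at the cost of a larger file, which may help with the softness described:

 ffmpeg -i clip.avi -c:v libx264 -crf 18 -preset slow -c:a aac clip.mp4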
It is stated that there are multiple formats of MJPEG, but the article does not mention where to find their specifications. (unsigned)
"These features are not to be undervalued, as many applications in high framerate real time image processing require a stable open-source solution that can be easily implemented with a minimal impact on the processor load.[citation needed]".
Not a citation but a little explanation. Video non-linear editing (NLE) systems often need to decode the video in the CPU rather than in the GPU, because the GPU is set up to display frames rather than process them. They also often need to decode two video streams simultaneously to implement simple effects like crossfades and wipes. Decoding a single HD stream can severely tax a CPU; decoding two HD streams plus whatever processing is done on them is a lot of work.

Another consideration for NLE systems, and some other applications, that this article fails to take into account is that if a system needs to skip frames to keep up in real time, a system using interframe compression could depend on frames that were thrown away. In a program like cinelerra, the plugins may need preceding frames for each frame output, and if even more frames are needed to reconstruct those frames because of the codec, the number of frames required can be very high. Even if you try to synchronize the non-skipped frames with the full frames in the original stream, the plugins' need for preceding frames can thwart that.

If you have a repeating pattern of one full frame followed by 9 incremental frames, then displaying every other frame still requires decoding every frame. Displaying every 5th frame requires decoding the first 6 frames of each group, so you find yourself decoding 60% of the frames instead of 20%. If you apply a plugin that needs the previous frame for each frame processed, then you need to decode all 10 frames (100%) even when displaying every fifth, because you need to decode the tenth frame to get the preceding frame for the 11th, and decoding the tenth requires decoding all 9 preceding frames. Thus an encoding format where each frame can be decoded independently of the others is an advantage. Whitis (talk) 04:40, 30 April 2008 (UTC)
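To make the arithmetic in the comment above concrete, here is a small Python sketch. The function name decoded_fraction and its parameters gop_size, display_stride and lookback are illustrative, not taken from any real editing software, and the model is deliberately simplified: each group of pictures is one independently decodable frame followed by gop_size - 1 frames that each depend only on the frame before them (no B-frames or reordering). The function counts how many frames must be decoded when only every display_stride-th frame is shown and each shown frame additionally needs lookback preceding frames for an effect plugin:

 # Minimal illustrative sketch: one I-frame followed by gop_size - 1
 # predicted frames, each depending only on the frame before it.
 def decoded_fraction(gop_size, display_stride, lookback=0):
     """Fraction of frames that must be decoded when only every
     display_stride-th frame is shown and each shown frame also
     needs `lookback` preceding frames (e.g. for an effect plugin)."""
     total = gop_size * display_stride   # window long enough to average over
     needed = set()
     # Also consider the first displayed frame just past the window so that
     # its lookback dependencies inside the window are counted.
     for frame in range(0, total + display_stride, display_stride):
         for target in range(max(frame - lookback, 0), frame + 1):
             if target >= total:
                 continue
             gop_start = (target // gop_size) * gop_size  # most recent I-frame
             needed.update(range(gop_start, target + 1))
     return len(needed) / total

 print(decoded_fraction(10, 5))              # 0.6 -> 60% decoded to show 20%
 print(decoded_fraction(10, 5, lookback=1))  # 1.0 -> effect plugin forces every frame
 print(decoded_fraction(1, 5))               # 0.2 -> intraframe-only, MJPEG-like

With gop_size=10 and display_stride=5 this reproduces the 60% figure from the comment, the lookback=1 case reproduces the 100% figure, and gop_size=1 models an intraframe-only format such as MJPEG, where only the displayed 20% of frames need decoding.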

