Embedded systems for digital cameras are being asked to do more. Soon, wireless links will be de facto features.
Digital still cameras have been around for a few years, but only recently have technical advancements brought their cost into the reach of most consumers. As the market becomes more competitive, manufacturers are adding features such as motion video, audio playback, and wireless connectivity. Designers must weigh hardware cost and development time against power consumption and the feature set that digital cameras are expected to offer.
A digital still camera is a complex embedded system with a number of components interacting in real time. Central to the system is the control and processing module (CPM). The CPM should have enough memory for processing image data and Flash memory for booting the system on power up. It also must have enough processing horsepower to meet performance requirements.
CPM tasks include providing real-time feedback to the user about system status and photo quality as well as processing and compressing raw image data from the imager module. It also must monitor battery status, manage the file-system interface, control the lens motors, and fire the flash according to lighting conditions.
The data sent from the imager and audio modules to the processor consists of raw CCD pixel and digital audio values. Various image-processing algorithms enhance the quality of the final image. The controller compresses this data into standard file formats such as JPEG, MPEG4, and MP3.
JPEG, for Joint Photographic Experts Group, defines a standard for image compression. It's used extensively in digital cameras because its method of reducing visual data accounts for properties of the human visual system. A discrete cosine transform (DCT) represents the data as coefficients of a series of frequency components. Higher-frequency components are quantized at lower resolutions. Because the human eye is not sensitive to the high-frequency distortions this technique introduces, decompressed images provide acceptable quality. A controller can implement a DCT with a series of multiply-and-accumulate instructions. A single-instruction-multiple-data (SIMD) architecture with processing elements capable of multiplication and addition is the method of choice for DCT implementations.
After conversion through the DCT, the data is quantized. Quantization reduces the resolution of the various coefficients to minimize the number of bits required to represent the image. This usually takes place via look-up tables, for which a RISC processor is most efficient. Huffman coding then shortens the code words further, using statistical probabilities to remove redundancy.
For video, MPEG4 is the most common compression scheme. MPEG4 is an ISO/IEC standard developed by the Moving Picture Expert Group. The standard combines streaming audio, video, and graphics with interactivity and delivers data to personal computers, Internet appliances, and wireless devices.
A process called motion compensation takes advantage of temporal redundancy between individual still frames. A technique known as Sum of Absolute Differences (SAD) is computed repeatedly over different image blocks. The speed of the SAD operation directly affects MPEG4 encoding efficiency. Processing requirements per pixel include two loads, one addition, one subtraction, and one absolute value. Video compression at 15 fps requires almost 800 Mips for encoding.
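The per-pixel operation count quoted above corresponds to an inner loop like the one below, a minimal sketch for a 16x16 macroblock (the function name and signature are our own):

```c
/* Sum of Absolute Differences over a 16x16 macroblock.  Per pixel:
   two loads (cur[x], ref[x]), one subtraction, one absolute value,
   and one accumulate -- the loop a motion estimator runs repeatedly
   while searching for the best-matching reference block. */
unsigned sad16(const unsigned char *cur, const unsigned char *ref, int stride)
{
    unsigned sum = 0;
    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 16; x++) {
            int d = cur[x] - ref[x];
            sum += (unsigned)(d < 0 ? -d : d);
        }
        cur += stride;  /* advance both pointers one image row */
        ref += stride;
    }
    return sum;
}
```

Because this loop dominates encoding time, many processors provide a dedicated SAD or absolute-difference instruction for it.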
For audio, MP3 is the common format. It's an abbreviation for MPEG1, Layer 3, the audio-compression format used in the MPEG1 algorithm. The standard has lately become more common largely because of interoperability with personal computers. Digital audio typically consists of 16-bit samples recorded at a sampling rate exceeding twice the actual audio bandwidth -- for compact disks, 44.1 kHz. Without data reduction, it takes about 1.4 Mbits to represent just 1 sec of music in two-channel stereo. MPEG audio coding shrinks this by a factor of 12 with little perceptible loss of sound quality.
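The 1.4-Mbit figure follows directly from the sample format, as this back-of-the-envelope check shows (function names are our own):

```c
/* Uncompressed CD-audio bit rate: sampling rate times bits per
   sample times number of channels. */
unsigned long cd_bits_per_sec(void)
{
    return 44100UL * 16UL * 2UL;  /* 1,411,200 bits/sec, i.e. ~1.4 Mbits */
}

/* The ~12:1 reduction the text cites brings that down near the
   typical MP3 streaming rate. */
unsigned long mp3_bits_per_sec(void)
{
    return cd_bits_per_sec() / 12UL;  /* 117,600 bits/sec, ~118 kbits */
}
```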
Because of the increase in networked systems, connectivity will also be a key feature in digital cameras. Although wireless connectivity will potentially provide the most convenient way to transfer data to and from the camera, designers should consider additional software development to make this feature truly convenient.
Transfer of captured images from camera to computer is arguably the activity that detracts most from the overall user experience. The 802.11 wireless standard is positioned to fill this need and will play a major role in delivering wireless connectivity to handheld appliances over the next few years.
As wireless networks proliferate, wireless connectivity between portable digital devices such as digital cameras and computers comes closer to reality. The most common protocol is 802.11 or Wi-Fi. This protocol is similar to 802.3 (Ethernet) with a few exceptions pertaining to security and mobility. Unlike wired devices, wireless networks are not physically limited in connectivity. Devices on wireless networks are also free to move across the network. The 802.11 standard addresses these issues in three ways: authentication, association, and encryption.
Authentication is the way a wireless device is granted access to a wireless network. There are various levels of security. Open authentication grants access provided the device is operating on the right channel with an appropriate ID. Shared-key authentication requires the two communicating devices to use an encryption technique defined by the Wired Equivalent Privacy (WEP) standard. With WEP, a challenge message is sent to the device to be encrypted. If the message is accurately encrypted, the authenticating system assumes that the device holds the appropriate encryption key and authenticates its use of the network.
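The challenge-response idea can be sketched with the RC4 stream cipher that WEP is built on. This is a schematic only: it omits the per-frame initialization vector, the ICV checksum, and all 802.11 framing, and the function names are our own.

```c
#include <stddef.h>

/* RC4: key scheduling followed by keystream generation; the output
   is the input XORed with the keystream.  WEP uses this cipher (with
   an IV prepended to the key, omitted here for clarity). */
void rc4(const unsigned char *key, size_t keylen,
         const unsigned char *in, unsigned char *out, size_t len)
{
    unsigned char S[256];
    for (int i = 0; i < 256; i++) S[i] = (unsigned char)i;
    for (int i = 0, j = 0; i < 256; i++) {            /* key scheduling */
        j = (j + S[i] + key[i % keylen]) & 255;
        unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
    }
    int i = 0, j = 0;
    for (size_t n = 0; n < len; n++) {                /* keystream output */
        i = (i + 1) & 255;
        j = (j + S[i]) & 255;
        unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
        out[n] = in[n] ^ S[(S[i] + S[j]) & 255];
    }
}

/* The authenticator verifies the response by encrypting the same
   challenge itself and comparing: a match implies the station holds
   the shared key. */
int challenge_ok(const unsigned char *key, size_t keylen,
                 const unsigned char *challenge,
                 const unsigned char *response, size_t len)
{
    unsigned char expect[128];
    if (len > sizeof expect) return 0;
    rc4(key, keylen, challenge, expect, len);
    for (size_t n = 0; n < len; n++)
        if (expect[n] != response[n]) return 0;
    return 1;
}
```

Note that this scheme proves key possession but, as the next sections discuss, WEP's use of RC4 proved weak enough that 802.11i replaces it.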
The 802.11 standard defines two modes of operation. The first, infrastructure mode, is for environments in which permanent installations called access points link mobile devices called stations to a wireless network. The second, ad hoc mode, handles situations in which mobile devices communicate directly with each other without going through an access point. Association is the process by which an authenticated mobile device links to an access point in infrastructure mode. This process is based on several variables and is analogous to how cell phones associate with cell towers as a cell phone user travels across multiple cells.
A new standard dubbed 802.11i is expected to be ratified soon. It contains stronger encryption techniques, necessary because experienced hackers easily bypass the original 802.11 encryption. The new standard will affect software requirements on host processors because 802.11b Media Access Controllers (MACs) were not designed to handle it. 802.11i has two encryption upgrades. The first, Temporal Key Integrity Protocol (TKIP), is an encryption and message-integrity-check scheme. This, along with 802.1x-based authentication, is being rolled out as Wi-Fi Protected Access (WPA). TKIP was designed to take advantage of the encryption accelerators in 802.11b systems, minimizing the overhead imposed on the host processor. The second upgrade, Advanced Encryption Standard (AES) encryption, requires adding a hardware accelerator, although most 802.11g systems should have the hardware built in.
Just as processing architectures can vary in degrees of programmability, 802.11 MACs are available with varying amounts of the protocol implemented in the MAC or in an associated processor. Although some MACs are designed to conduct all protocol functions without the assistance of a host processor, these MACs often require their own additional memory.
Designers must also consider that the wireless interface incorporates a radio transmitter, so the PCB holding it must be FCC certified. Certification requires a significant amount of time and expense. The easiest approach might be to start with a certified PCB design and add functions via a PCMCIA interface. PCMCIA, however, may not be the interface of choice because it employs a large number of address/data lines. The industry is tending toward custom interfaces or standard serial interfaces such as Secure Digital Input/Output (SDIO) to the host processor. Additional considerations include the evolution of other wireless standards such as 802.11a. A modular interface to the camera would let future versions support evolving standards.
Camera and phone in one
Most major cell-phone manufacturers are adding digital cameras to cell phones. The applications processor handles photo processing along with the GUI and the display.
Digital cameras on cell phones process data a bit differently than stand-alone digital cameras. An applications processor instead of the CPM handles camera interface and processing as well as the GUI. Typically, image processing of raw pixel data takes place on the sensor for resolutions at and below a megapixel, lowering the Mips requirement of the main processor. This configuration may change with rising requirements for high-res phonecams. The applications processor is likely to get more of the load.
Common digital still-camera features
Power: Battery life, number of batteries, auto shutoff, early warning
Image size: 1 to 2 Mpixel - low cost, 2 to 4 Mpixel - midrange, 4 to 6 Mpixel - professional
Display: LCD size, brightness, NTSC/PAL, zoom, crop
Storage: Compact Flash, MMC, SD card, memory stick
Download: IrDA, USB, 1394, 802.11
Form factor: Medium, small, smaller
Functions: Analog/digital zoom, point-and-shoot, burst mode, shot-to-shot delay
File system: FAT16, EXIF 2.1, DCIM compliance
Audio: Storage and playback, AAC, WMA, MP3
Movie Mode: Frames/second, size, storage algorithm, MPEG4, H.263, H.26L
Algorithm complexity and resource analysis
Algorithm            Code size    Mips
Image processing     20 kbytes    400
JPEG codec           16 kbytes    400
MPEG4 codec          40 kbytes    800
MP3 codec            16 kbytes    50