This is my report of the three-day brainstorming meeting in Warsaw, Poland.

Participants:

Samsung Poland R&D Center:
Marek Szyprowski (Organizer)
Cisco Systems Norway:
Hans Verkuil (Chair)
Ideas On Board:
Laurent Pinchart <[email protected]>
Samsung System.LSI Korea:
Jonghun Han <[email protected]>
Jaeryul Oh <[email protected]>
Samsung DMC Korea:
Guennadi Liakhovetski <[email protected]>
All presentations, photos and notes are available here:
This report is based on the etherpad notes. See the URL above for the raw
notes. I hope I haven't forgotten anything, but I believe I've covered pretty
much everything that was discussed.
1) Compressed format API for MPEG, H.264, etc.
The current API was developed for multiplexed MPEG transport and program
streams, and it was not entirely clear how new formats like H.264 should be
handled.

Right now V4L2_PIX_FMT_MPEG can be used for any multiplexed stream and the
STREAM_TYPE/AUDIO_ENCODER/VIDEO_ENCODER controls are used to discover the
format of the multiplexed stream and the used audio and video encoders.

This scheme breaks down for elementary streams. After discussing this we came
to the conclusion that the current scheme should be used for multiplexed
streams, while elementary streams should get their own pixel format. In that
case the controls mentioned above would not exist.
We would need new pixel formats for elementary streams:
V4L2_PIX_FMT_H264 (H.264 using start codes for the 'Network Abstraction
Layer Units (NALU)')
V4L2_PIX_FMT_AVC1 (H.264 without start codes for the NALUs)
DivX formats also need FourCCs. The DivX specification is proprietary, and we
don't know how the different DivX formats differ ('DIV3', 'DIV4', 'DIVX',
'DX50', 'XVID', ...). More information is needed there.

VC1 comes in two flavors, VC1 (containerless) and VC1 RCV (VC-1 inside an RCV
container). More information is needed on the VC1 RCV format: should we treat
this as a multiplexed stream or as an elementary stream? The situation could
be similar to H264 and AVC1 as described above.
ACTION: Kamil will propose new fourcc's.
The V4L2 spec already includes a codec class, but it's mostly an unused
placeholder. The V4L2 codec definition is very similar to V4L2 M2M devices.
V4L2 also defines an effect device interface. The codec and effect device
interfaces should be merged in the spec and be replaced by an M2M interface
section.
ACTION: Kamil will make a small RFC for this.
M2M devices currently have both capture and output V4L2 capabilities set, so
they show up as capture devices in applications. This could be fixed by
skipping (at the application level) devices that have both capabilities set.
But this fails for devices that can do both memory-to-memory processing and
live capture (such as the OMAP3 ISP) depending on the link configuration.

We probably need a new capability flag for M2M devices.
ACTION: Hans will look into this as this is also needed to suppress the
handling of VIDIOC_G/S_PRIORITY for M2M devices.
For the newer codecs new controls will be needed to set the various
parameters. Besides the Samsung hardware it might also be useful to look at
the SuperH compressed video processing library to get an idea about the video
processing features available on different types of hardware:
A V4L2_CTRL_CLASS_CODEC class was proposed. But the existing MPEG class can be
used instead even though the name is misleading. We should consider adding
aliases that replace MPEG with CODEC.
Vendor-specific controls will be defined as vendor controls and can later be
standardized if needed. Whether the vendor control can then be removed is an
open question.
Some controls shared by different codecs can have different min/max values.
For example, the QP range is 0..51 for H.264 and 1..31 for MPEG4/H.263.
Should we use per-codec controls, or a single one? When switching between
codecs, min/max changes can be reported to userspace through a V4L2 control
event, so a single control can work. Hans would rather see separate controls
if there are only a couple of such controls.
ACTION: Kamil will make a list of controls and send it to linux-media.
A special case is UVC and the 'H.264 inside MJPEG' format. See:

Introduced to overcome a hardware limitation (lack of a 5th USB endpoint in
the device), additional markers are introduced to carry H.264 data. UVC
extension units can probably be used to detect whether the "feature" is
available; otherwise we can hardcode USB VID:PIDs. We need a new pix_fmt for
such streams and let libv4l demultiplex. The driver will report 2 FourCCs:
MJPEG (for compatibility, containing all embedded streams) and Logitech
MJPEG. Which embedded streams are enabled will be selected through private
controls.

ACTION: Laurent will check if the format (width/height) of the embedded
stream is identical to the main stream.
2) Small architecture enhancements.
- Acquiring subdevs from other devices using a subdev pool

Subdev registration is handled by bus-specific methods (I2C, SPI, ...).
Drivers thus need to handle bus-specific configuration themselves. This
should be improved.
Tomasz proposes to allow:
* accessing subdevs by name (and optionally by a numerical ID)
* creating several subdevs for a single physical device
Everyone agrees that improved support for registration of subdevs on
different busses is desired. One suggestion is to provide helper functions to
simplify subdev registration across different physical busses (for instance a
function to register a subdev in a bus-agnostic fashion, using a union that
provides configuration information for all supported bus types).
Whether a subdev pool should be used for this led to a lot of discussion. The
observation was made that the v4l2_device struct already contains a list of
subdevs, so why add another?
No conclusion was reached and this remains unresolved.
- Introducing subdev hierarchy.
Sub-subdevs might be useful to model certain sensors that support additional
operations like scaling. Sub-subdevices can be used to model such scalers and
export the possibility to set the format on the scaler input and output pads.

On stream start all formats on pads are verified. To support a hierarchy a
new callback like verify_link should be added to the subdev's ops. The media
controller also needs to be made aware of such parent-child relationships.
Overall the idea was received favorably.
ACTION: Tomasz can proceed with this.
- Allow per-filehandle control handlers.
The spec requirement about parameter consistency across open-close calls
should be relaxed to cover cases where different file handles *have* to
implement different parameter sets. No comments otherwise.

ACTION: Hans will implement this as part of the control events work.
- Subdev for exporting Write-back interface from Display Controller
The framebuffer device must allow other drivers to access its writeback
functionality.
This was resolved on Friday with some fancy container_of use.
- Exynos mixer interface, V4L2 or FB ?
Implement a FB driver on top of a V4L2 driver. An option is to extend vb2 to
make this easier and perhaps come to a generic solution.
ACTION: Marek will investigate whether it is possible to make a generic 'FB
on top of V4L2' solution using vb2.
- Entity information ioctl
Applications need more information than what is provided by the media
controller entity enumeration ioctl to identify entities correctly. For
instance, a UVC entity is identified by a 16-byte GUID, which is not reported
by entity enumeration. Another issue arises when the subdev type needs to be
reported: the current types are mutually exclusive, and can't handle an
entity that is both a sensor and a lens controller for instance.

To solve those problems, an entity information ioctl should be added to
report such information to userspace. That ioctl should report a list of
properties (handled by the media controller framework) in an easily
extensible way.
ACTION: Laurent will make an RFC with a proposed solution. The idea is to
report a list of read-only 'properties' or 'attributes' for an entity.
3) Cropping and composing
For video device nodes, the VIDIOC_[GS]_CROP ioctls are too limited. We
probably need two ioctls for crop/compose operations: VIDIOC_[GS]_EXTCROP is
proposed for cropping, and VIDIOC_[GS]_COMPOSE for composing.

ACTION: RFC from Samsung suggesting a VIDIOC_S_EXTCROP and VIDIOC_S_COMPOSE
API.
4) Pipeline configuration
There was much discussion on this topic, unfortunately without an overall
conclusion. But there were some sub-problems that did come to a conclusion:
- Pads also need the total width and height (i.e. width/height + blanking)
to get the bus timings right.
- Some hardware can do cropping after scaling (aka 'clipping'). This means
the format of the output pad can no longer be used as the target size of the
scaler. The solution is to add a new operation to explicitly set the scaler
output size. A crop operation can then be used on the output pad to clip the
result.
- Right now the width and height for an output pad are set explicitly by the
controlling driver. It might be better to let the subdev return the actually
configured resolution instead for output pads.
- Some sensors have more complicated sensor array layouts (cross-shapes). We
need the default active pixel array. A SENSORCAP ioctl was suggested.
- Binning/Skipping can be set through controls.
Unresolved issues are how to calculate the optimal crop rectangle for a given
scaler output. Should you set the scaler output size and then let the crop
operation modify the crop rectangle? Or should there be an ioctl (or even a
userspace library) that calculates this?

Also unresolved is how to configure a complex subdev with multiple inputs and
outputs with dependencies on one another. Should we introduce a
transaction-like API? We ran out of time so this will have to be continued on
the mailinglist.
5) HDTV API

The API will be reworked since it should be a subdev-level API.

The HDMI control names should clearly indicate when they represent a status.
DV_TX_DVI_HDMI_MODE: make it DV_TX_MODE to select between different modes
(HDMI, DVI, perhaps DisplayPort-specific modes also).
DV_RX_5V: too specific. RX_TX_POWER? RX_TXSENSE?
HDMI receivers can have multiple input ports. Each is active on the level of
EDID and HDCP, but only one will stream (determined by a mux). One way of
implementing this is to create connectors in the MC (needed anyway for ALSA)
and to connect those to input pads. A VIDIOC_S_MUX or something similar is
needed to control internal muxes.
A new control type for bitmasks is needed to support detecting e.g. the
status of multiple input pads at once (UVC also has bitmask controls).
ACTION: Martin/Hans: update the APIs to the subdev pad API, incorporate the
comments made and make a new RFC. HDMI colorspace handling needs to be
discussed further on the mailinglist.
6) Sensor/Flash/Snapshot functionality
- Metadata. Usually histogram/statistics information. Often out-of-band data
that needs to be passed to userspace as soon as possible (before the actual
image data arrives). Because of this it is not a good fit for planes
containing metadata. It is usually done through read/ioctl, but it was
suggested to make a new 'video' node for this to allow a DMA engine to be
used. Where possible the source of the metadata should parse it (since only
the source knows how to handle the contents).
ACTION: RFC from Nokia (tentative)
- Flash. While common flash settings (flash, torch mode, privacy light, and
LED hardware errors such as short circuit, overtemperature, timeout, etc.)
can be set through common controls, the specifics of how flash works are
highly hardware dependent. Therefore this is left to the driver.
ACTION: RFC for common flash API from Sakari.
- 'Bracketing' in SMIA++: it is possible to set parameters for X frames
before streaming. When streaming starts, the settings will be applied for
each frame, and streaming stops after the last frame data is provided for.
This should be implemented as a SMIA++ specific ioctl temporarily overriding
e.g. the exposure control.
ACTION: RFC from Nokia (tentative)
7) Multiple buffer queues
Typically sensors can have a 'viewer' and a 'snapshot' mode. However, some
have a 'monitor' mode where previously taken pictures can be viewed. So any
solution should not limit itself to just two modes.

We need to be able to switch between modes efficiently, so buffers need to be
reused to avoid time-consuming cache invalidate operations.

There are two core problems: we need to be able to create buffers of a
different size than the current format, and we need to be able to prepare
buffers without queuing them.
The proposal is to add three new ioctls:
VIDIOC_CREATE_BUFS(bufcnt, size or fmt)
VIDIOC_DESTROY_BUFS(...)
VIDIOC_PREPARE_BUF(...)

The first two add additional buffers of the given size (or format) and
destroy previously created buffers. This also makes it possible to vary the
number of buffers used for e.g. capture on the fly (something that has been
requested in the past). The first two are just a more flexible version of
doing REQBUFS. The third prepares a buffer but will not otherwise queue it.
With these ioctls userspace can preallocate buffers of the required sizes and
prepare them. After a STREAMOFF userspace can set up the new format and queue
buffers with the corresponding size that are already prepared, and start
streaming again. It seems like a simple, flexible and practical solution.
Almost too good to be true, really.
We also need a per-plane flag to skip cache invalidation/flush if not needed.

ACTION: Guennadi: RFC and a guesstimate of the impact it has for vb2 and
existing drivers.
8) Buffer pools (related to Linaro activities)
There are 3 building blocks:
- contiguous memory allocator
- iommu memory allocator
- user and kernel interface for allocating and passing buffers
We also need to evaluate existing solutions.
ACTION: All: make a list of requirements by March 30th. Post on the
mailinglist. When done, discuss various solutions in view of the
requirements.
This concludes this report of the meeting. Any mistakes are of course mine.
I think such meetings are very productive and I hope we can repeat this
sometime. There is still so much that needs to be done...

I'd like to thank all participants for their input. Special thanks go to the
Samsung Poland R&D Center for hosting this event and to Marek Szyprowski in
particular.
Hans Verkuil - video4linux developer - sponsored by Cisco