Video4Linux brainstorming session 2016 - Berlin

http://muistio.tieke.fi/p/v4l2-requests-2016 : goo.gl/k2b60h

1. Request API

Recap of https://linuxtv.org/downloads/presentations/media_summit_2016_san_diego/pinchartl-20160405-elc.pdf

Discussed changes:

1.1 Example with two video nodes



1.2 Request validation

- When you queue a request it is validated.
- The request has a full state, so the validation is independent of any previously queued requests.
- The current hardware state is maintained. A queued request may either change that configuration, or refer to the current configuration or to a different request.
- If an error occurs, no good error reporting is available. The only reasons for such errors would be hardware failures, which usually indicate a serious, unrecoverable problem and should trigger a halt (requests cancelled and flagged with an error); see the sketch below.
- A change in hardware configuration (e.g. disconnecting an HDMI input) could also invalidate requests
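
A minimal sketch of queue-time validation from the application side, assuming the draft MEDIA_IOC_REQUEST_CMD / MEDIA_REQ_CMD_QUEUE interface and struct media_request_cmd from section 1.14 (media_fd and req_fd are placeholder descriptors):

    /* Hypothetical sketch: the command names and struct below are the
     * draft interface from section 1.14, not a finalized API. */
    struct media_request_cmd cmd = {
        .cmd = MEDIA_REQ_CMD_QUEUE,
        .request = req_fd,  /* fd of a fully set up request */
    };

    if (ioctl(media_fd, MEDIA_IOC_REQUEST_CMD, &cmd) == -1) {
        /* Validation failed at queue time.  The request state is
         * self-contained, so the failure does not depend on other
         * queued requests; treat it as unrecoverable and expect
         * already queued requests to be cancelled and flagged with
         * an error. */
        perror("MEDIA_IOC_REQUEST_CMD(QUEUE)");
    }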

1.3 Completing requests

- The existing prototype implementation destroys requests once they're completed.

1.4 Request IDs: file handles vs. plain integers?

- The current implementation uses plain integers to refer to requests.
- File descriptors do have some overhead in creation.
- If file descriptors are used, then requests should probably be destroyed explicitly; otherwise they would be re-used after completion

=> Use file handles for requests (sketch below)
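
A rough sketch of the resulting request lifecycle, assuming the ALLOC command from section 1.14 hands the new request file descriptor back in the command structure (the field usage is an assumption, not a settled ABI):

    /* Hypothetical: allocate a request; the new request fd is assumed
     * to be returned in cmd.request. */
    struct media_request_cmd cmd = { .cmd = MEDIA_REQ_CMD_ALLOC };
    int req_fd;

    if (ioctl(media_fd, MEDIA_IOC_REQUEST_CMD, &cmd) == -1)
        return -1;
    req_fd = cmd.request;

    /* ... associate buffers and controls with req_fd, queue, wait ... */

    /* With fd-based requests, destruction is explicit: close the fd
     * (or issue a DELETE command) instead of relying on implicit
     * destruction at completion time. */
    close(req_fd);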

1.5 Concurrent access

- Exclusive access to the request queue? Or entities? Or pads? Really: how to make requests for pipelines? (currently there is no object representing a pipeline)
 - Define error codes in advance
 - A second instance requests exclusive access while an existing request is in progress. Which one fails?
 
 Easier to start fully closed, but this restricts some use-cases.
  - Multiple pipelines run by independent applications not supported.
 Exclusive access to the full device.
  - Then the first implementer who needs 'shared' pipelines can implement them
  - Implementations can try to request access to 'sub-parts' of pipelines

1.6 Events

- In some cases it is useful to provide information to the user when a request has completed.
- If there's just a single buffer queue and the user wishes to act based on complete buffers, the event may be omitted.

- Sakari has patches that create an event that is sent when the full request has been completed (buffers filled, all controls are set, etc.)
We will need this.

http://git.retiisi.org.uk/?p=~sailus/linux.git;a=shortlog;h=refs/heads/request
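
For illustration, consuming such an event might look like the sketch below. Only VIDIOC_SUBSCRIBE_EVENT / VIDIOC_DQEVENT and their structures are existing V4L2; the event type name V4L2_EVENT_REQUEST_COMPLETE and the idea that the event id identifies the request are assumptions about these patches:

    struct v4l2_event_subscription sub = {
        .type = V4L2_EVENT_REQUEST_COMPLETE,    /* assumed name */
    };
    struct v4l2_event ev;

    ioctl(fd1, VIDIOC_SUBSCRIBE_EVENT, &sub);

    /* Later, once poll() reports a pending event: */
    if (ioctl(fd1, VIDIOC_DQEVENT, &ev) == 0) {
        /* Assumed: ev.id identifies the completed request, i.e. all of
         * its buffers are filled and all of its controls applied. */
    }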

1.7 Streamon and streamoff

- How to handle STREAMON/OFF?

1.8 Controls

- Allow associating controls with requests
- Controls stored inside the request state
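
As an illustration, setting controls into a request's state rather than applying them immediately could look like this; the .request member of struct v4l2_ext_controls is the proposed extension used in section 1.14, not an existing field:

    /* Hypothetical: controls tagged with a request fd are stored in the
     * request state and only applied when the request is processed. */
    struct v4l2_ext_control ctrl = {
        .id = V4L2_CID_BRIGHTNESS,
        .value = 128,
    };
    struct v4l2_ext_controls ctrls = {
        .ctrl_class = V4L2_CTRL_CLASS_USER,
        .count = 1,
        .controls = &ctrl,
        .request = req_fd,      /* proposed field */
    };

    ioctl(fd1, VIDIOC_S_EXT_CTRLS, &ctrls);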

1.9 Requests vs. w/o requests

- Three options: legacy only; legacy and requests; requests only
- Add a capability flag for "requests required". Perhaps add a "requests supported with legacy" flag later on.
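
A sketch of how an application could probe for the flag; the name V4L2_CAP_REQUESTS_REQUIRED is a placeholder invented here, only VIDIOC_QUERYCAP and struct v4l2_capability are existing V4L2:

    struct v4l2_capability cap;

    ioctl(fd1, VIDIOC_QUERYCAP, &cap);

    if (cap.device_caps & V4L2_CAP_REQUESTS_REQUIRED) {
        /* Placeholder flag: legacy (request-less) operation is not
         * supported, every buffer and control must go through a
         * request. */
    }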

1.10 Minimum stateless codec requirements

- Request allocation
- Request capability flag (1.9)
- Associate controls and buffers with requests (1.11, 1.8)

1.11 Video buffer queues

- Currently vb2 only deals with one queue at a time
- Multiple queues have to be handled by the driver instead. This is quite painful if the queues are part of the same pipeline.
- vb2 should help with multiple queues. Helper function / framework at the same level as the m2m framework?

1.12 Stateless codec behaviour

- Each OUTPUT buffer produces one CAPTURE buffer of data.
- The user is responsible for providing just the right amount of data to the decoder.

The hardware processes jobs that consume a single OUTPUT buffer and fill a single CAPTURE buffer. There are two ways to
associate buffers with requests:
    
    - We can create requests that contain one OUTPUT buffer and one CAPTURE buffer (sketched below). When a request completes, the OUTPUT and CAPTURE buffers contain the request ID they were queued with. These are the same buffers that were associated with the request before the request was queued.
    - We can prequeue a set of buffers on the CAPTURE side, and create requests that contain one OUTPUT buffer. When a request completes, the next DQBUF call will return an unspecified buffer ID and the request ID of the request that just completed.
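
A sketch of the first model, where each request carries one OUTPUT and one CAPTURE buffer and the dequeued buffers report the request they belonged to; the .request member of struct v4l2_buffer is the proposed extension from section 1.14, and the buffer setup is abbreviated:

    struct v4l2_buffer out = {
        .type = V4L2_BUF_TYPE_VIDEO_OUTPUT,
        .memory = V4L2_MEMORY_MMAP,
        .index = 0,
        .request = req_fd,      /* proposed field */
    };
    struct v4l2_buffer cap = {
        .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
        .memory = V4L2_MEMORY_MMAP,
        .index = 0,
        .request = req_fd,
    };

    ioctl(fd1, VIDIOC_QBUF, &out);  /* one bitstream buffer in */
    ioctl(fd1, VIDIOC_QBUF, &cap);  /* one decoded frame out */
    /* ... queue the request, wait for completion ... */

    ioctl(fd1, VIDIOC_DQBUF, &cap);
    /* cap.request is expected to match req_fd, the request the buffer
     * was queued with. */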

1.13. Asymmetric devices (that produce a different number of buffers per consumed buffer)
  - How do we handle M2M devices which could produce 0 or more buffers of output?
  - We know the maximum number of buffers that we will produce, but not the actual number
  - OUTPUT buffers can be held internally until enough data is available to produce the first CAPTURE result.

1.14. Sequence of calls for stateless codecs

- Open the video device node V -> fd1
- Option TBD: Open the media controller device node M -> fd2 ?
- Allocate a set of requests

#define MEDIA_REQ_CMD_ALLOC   0
#define MEDIA_REQ_CMD_DELETE  1
#define MEDIA_REQ_CMD_APPLY   2
#define MEDIA_REQ_CMD_QUEUE   3
#define MEDIA_REQ_CMD_INIT    4

struct media_request_cmd {
        __u32 cmd;
        __u32 request; /* Is a file descriptor */
        __u32 base; /* Is a file descriptor */
};
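
As an illustration of how this structure might be used (the MEDIA_IOC_REQUEST_CMD ioctl name is taken from the frame-processing example below; that ALLOC returns the new request fd in .request and that INIT copies the state of the request named in .base are assumptions):

    struct media_request_cmd alloc = { .cmd = MEDIA_REQ_CMD_ALLOC };

    ioctl(fd1 /* or fd2 */, MEDIA_IOC_REQUEST_CMD, &alloc);
    /* alloc.request is assumed to now hold the new request fd */

    /* Initialize the new request's state, either from the current
     * device state or from an existing request given as the base. */
    struct media_request_cmd init = {
        .cmd = MEDIA_REQ_CMD_INIT,
        .request = alloc.request,
        .base = base_req_fd,    /* placeholder: an earlier request */
    };
    ioctl(fd1 /* or fd2 */, MEDIA_IOC_REQUEST_CMD, &init);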



- Process a frame:
    
    struct v4l2_ext_control ctrls[] = { ... };
    struct v4l2_ext_controls ext_ctrls = { .count = ..., .controls = ctrls, .request = ... };
    struct v4l2_buffer buf = { .index = ..., .request = ... };
    struct media_request_cmd cmd = { .cmd = MEDIA_REQ_CMD_QUEUE, .request = ... };

    ioctl(fd1, VIDIOC_S_EXT_CTRLS, &ext_ctrls);
    ioctl(fd1, VIDIOC_QBUF, &buf);
    ...
    ioctl(fd1 or fd2, MEDIA_IOC_REQUEST_CMD, &cmd);

1.15 Complex camera devices that have ISPs and I2C connected sensors or other sub-devices

- I2C bus access takes time and the sensor settings need to be applied at a particular point of time, well before the settings take effect
- In case of Android, the device specific HAL is aware of the timing model of the underlying hardware
- How do we tell the user what the exact scope of the request API in the driver is?

2 Conclusions

- File descriptors are used to refer to requests
- Request is validated when it is queued
- On mem-to-mem devices ONLY, the requests are created on the video device in order to associate them with the mem-to-mem context. On all other devices, the requests are created on the media device.
- All mem-to-mem devices must have a per-file handle context
- Event created on request completion, optionally
- Finished requests cannot be requeued until you re-initialize the state to some initial state (either the HW state or that of another request). If it is re-initialized to itself, then nothing changes, other than that the request can now be requeued again.
- The reason for this is to force the application to think about the state that the request should contain.
- Another reason to reuse requests is to avoid constantly closing and opening fds, so as not to run out of fds in the middle of streaming; this can happen e.g. in a browser where some websites (Facebook) use a huge number of fds. A sketch of request reuse follows after this list.
- The initial request state may originate either from the current device state or a given request.
- The scope of the requests may be limited to less than the full configurability of the device
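
For illustration, re-using a finished request under these rules could look roughly like this (same hypothetical command names as in section 1.14):

    /* Hypothetical: a completed request must be re-initialized before
     * it can be queued again. */
    struct media_request_cmd cmd = {
        .cmd = MEDIA_REQ_CMD_INIT,
        .request = req_fd,
        .base = req_fd,         /* re-init to itself: state unchanged,
                                 * but the request is queueable again */
    };
    ioctl(fd1, MEDIA_IOC_REQUEST_CMD, &cmd);

    /* Update whatever differs for the next frame, then queue again
     * without closing and re-opening request fds. */
    cmd.cmd = MEDIA_REQ_CMD_QUEUE;
    ioctl(fd1, MEDIA_IOC_REQUEST_CMD, &cmd);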

3 Open questions

- What happens if an application associates a buffer to a request and then calls streamoff?

4 Work split
 
Laurent:
- Change the request API to use file descriptors to refer to requests
- Core request API (allocate, delete, queue, clone) + QBUF

Sakari:
- Make sure each mem-to-mem device does use a context
- Fix the drivers that do not have a context
- Update the API documentation accordingly

Hans:
- Use core request API to add control support

Laurent:
- Use core request API to add support S_FMT, S_SELECTION

The last two actions would be done in parallel.