Using V4L2 to realize VideoDevice #664
Conversation
```csharp
// Set capture format
v4l2_format format;
if (Settings.CaptureSize == (0, 0))
```
My Raspberry Pi CSI camera (unofficial) couldn't reach the maximum resolution, but the USB camera could. I guess it's a driver problem.
Did you try changing other parameters while increasing the size? (e.g. slow down the capture rate or something similar; it might be worth checking with strace what other software is doing)
That explains why you were not so active recently 😄 Nice work @ZhangGaoxing!
Please add some sample code; I haven't seen anything blocking! Per your question, you should put it in Iot.Device.Bindings. I'm not very familiar with how video works on Linux, but here is some info I got the last time I looked at it:
In the new commit, I added some new V4L2 settings and converters for common pixel formats. For video recording, there are still many problems to consider, because the functionality is not only for Raspberry Pi CSI cameras but for all USB cameras that comply with the Linux standard. For example, the Raspberry Pi CSI camera can output H264 video directly; we don't need to do much work, just read and save data from the driver, and some video players can play H264 video directly. However, many webcams only support the YUV format, which requires some transformations to save videos, and the saved file format would be AVI or MP4 or FLV... Implementing this requires a lot of work. I haven't found the right NuGet package, no one has used C# to implement these functions, and .NET itself lacks some media processing methods. I also have my own job, and I'm unlikely to be able to concentrate on it... I have the following suggestions:
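As a rough illustration of the kind of pixel-format transformation mentioned above, here is a minimal sketch of a YUYV (YUV 4:2:2) to RGB24 converter. It is not the converter added in this PR; the class and method names are hypothetical, and the BT.601 integer coefficients are just one common way to do the conversion.

```csharp
using System;

internal static class YuvConverterSketch
{
    // Converts a packed YUYV (Y0 U Y1 V) buffer to RGB24. Each 4-byte group
    // encodes two pixels that share the same U and V chroma samples.
    public static byte[] YuyvToRgb24(ReadOnlySpan<byte> yuyv)
    {
        byte[] rgb = new byte[yuyv.Length / 2 * 3];
        int offset = 0;

        for (int i = 0; i + 3 < yuyv.Length; i += 4)
        {
            int y0 = yuyv[i];
            int u = yuyv[i + 1] - 128;
            int y1 = yuyv[i + 2];
            int v = yuyv[i + 3] - 128;

            WritePixel(rgb, ref offset, y0, u, v);
            WritePixel(rgb, ref offset, y1, u, v);
        }

        return rgb;
    }

    private static void WritePixel(byte[] rgb, ref int offset, int y, int u, int v)
    {
        // BT.601 integer approximation of the YUV -> RGB conversion.
        rgb[offset++] = (byte)Math.Clamp(y + ((359 * v) >> 8), 0, 255);           // R
        rgb[offset++] = (byte)Math.Clamp(y - ((88 * u + 183 * v) >> 8), 0, 255);  // G
        rgb[offset++] = (byte)Math.Clamp(y + ((454 * u) >> 8), 0, 255);           // B
    }
}
```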
@ZhangGaoxing thanks, just pictures are fine for now. For H264 there is very likely some C library you can wrap instead of implementing it from scratch.
LGTM as first pass on video devices.
Force-pushed from 706f4d1 to c24818f.
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
```csharp
using System.Runtime.InteropServices;

internal partial class Interop
```
Please move all of those to Interop.Libc.cs since this seems a bit too granular. For the common stuff, please also move out whatever is shared from https://github.com/dotnet/iot/blob/master/src/devices/SocketCan/Interop.cs (close at minimum; I've got mixed feelings about ioctl; please review what makes sense to move to common).
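For illustration, a consolidated Interop.Libc.cs along the lines of this comment might look roughly like the sketch below. The exact set of shared functions (open, close, ioctl) is an assumption; the real layout is up to the maintainers.

```csharp
using System;
using System.Runtime.InteropServices;

internal partial class Interop
{
    private const string LibcLibrary = "libc";

    // Shared libc entry points that both the V4L2 and SocketCan code could use.
    [DllImport(LibcLibrary, SetLastError = true)]
    internal static extern int open([MarshalAs(UnmanagedType.LPStr)] string pathname, int flags);

    [DllImport(LibcLibrary, SetLastError = true)]
    internal static extern int close(int fd);

    // ioctl is variadic in C; a single IntPtr argument covers the common
    // "pointer to struct" usage, but callers with other argument types may
    // still need their own overloads (hence the reviewer's mixed feelings).
    [DllImport(LibcLibrary, SetLastError = true)]
    internal static extern int ioctl(int fd, int request, IntPtr argp);
}
```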
```csharp
internal partial class Interop
{
    [DllImport(LibcLibrary, SetLastError = true)]
    internal static extern int open([MarshalAs(UnmanagedType.LPStr)] string pathname, FileOpenFlags flags);
```
Why can't you use FileStream and just get a handle out of it?
I referred to the implementation of I2C, and the Interop code was copied from the System.Device.Gpio project.
OK, we should clean up the duplicated code in the future, but it's fine in this PR.
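For reference, the reviewer's FileStream suggestion could look roughly like this sketch; "/dev/video0" is a placeholder device path, and whether the resulting descriptor works cleanly with V4L2 ioctls would still need to be verified.

```csharp
using System;
using System.IO;
using Microsoft.Win32.SafeHandles;

// Open the device node through FileStream instead of P/Invoking open() directly.
using (FileStream stream = new FileStream("/dev/video0", FileMode.Open, FileAccess.ReadWrite))
{
    SafeFileHandle handle = stream.SafeFileHandle;

    // The raw descriptor can then be passed to ioctl(); DangerousGetHandle is
    // acceptable here because the FileStream keeps the handle alive for the
    // duration of the using block.
    int fd = handle.DangerousGetHandle().ToInt32();
    Console.WriteLine($"Opened descriptor {fd}");
}
```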
```csharp
    }
}

private void Initialize()
```
Can you just always initialize in the constructor and remove the lock entirely?
If I change the capture parameters after the first capture, the second capture will throw an error. So the process must be initialize -> capture -> close.
```csharp
    }
}

private void Close()
```
should this be happening only in Dispose?
It happens at the end of each capture.
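To make the lifecycle discussed in these two threads concrete, here is a small sketch of the initialize -> capture -> close pattern the author describes. The class name, method names, and stub bodies are illustrative, not the PR's actual implementation.

```csharp
using System;

public class VideoCaptureLifecycleSketch
{
    private readonly object _captureLock = new object();

    public byte[] Capture()
    {
        lock (_captureLock)
        {
            // Re-apply the current settings on every capture so that parameter
            // changes made between captures take effect.
            Initialize();
            try
            {
                return ReadFrame();
            }
            finally
            {
                // Release the device after each capture; keeping it configured
                // would make the next capture fail if the settings changed.
                Close();
            }
        }
    }

    private void Initialize() { /* open /dev/videoN, set the format, request buffers */ }

    private byte[] ReadFrame() => Array.Empty<byte>(); // placeholder for reading one raw frame

    private void Close() { /* release buffers and close the file descriptor */ }
}
```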
@ZhangGaoxing only treat the comments about the leak and returning a Stream as blocking. I'd prefer everything else to be fixed, but I'm fine with it being left as is.
LGTM, thanks!
* draft VideoDevice
* change namespace and add some new settings
* add some settings, common pixels converter, async method
* add license
* fix build error
* resolve comment
* add README.md
* remove xml comment
* update README.md
* edit xml comment
* add Dockerfile
* Update Dockerfile
* remove dockerfile and use current directory
* remove unused interop, add xml docs
* fix docs error
* add docs
* catch TaskCanceledException
* remove async method
* update sample
* move interop and add readme
* fix doc error
* fix some issues
* Update README.md
I tested it on a Raspberry Pi 3B+ and an Orange Pi Zero using CSI and USB cameras. This draft version can capture static images correctly. It can't record videos yet because I don't know much about how video streams are handled, but the method to get the raw frame data is done. The V4L2 structs and consts are not complete; there are too many of them, and I only ported what the methods needed. Now I have some questions, such as whether it should be placed in the System.Device.xxx or Iot.Device.xxx namespace, and whether there are some extensions to help transform the image format (my USB camera only supports the YUV format 😄). I hope to receive your reply and help 😄. @joperezr @krwq

A simple example
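Since the sample itself is not reproduced above, here is a hedged sketch of what basic usage might look like, based on the type names appearing in this PR (VideoConnectionSettings, VideoDevice). The constructor arguments, PixelFormat value, and output path are assumptions.

```csharp
using Iot.Device.Media;

// Configure /dev/video0 for 1920x1080 JPEG frames (values are illustrative).
VideoConnectionSettings settings = new VideoConnectionSettings(0, (1920, 1080), PixelFormat.JPEG);

using (VideoDevice device = VideoDevice.Create(settings))
{
    // Capture a single still image and write it to disk.
    device.Capture("/home/pi/capture.jpg");
}
```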