Camera API

I have two questions about access to the camera API on the PlayBook:

1. How can I add custom icons or controls on the camera? In other words, is it possible to overlay something on top of the default camera view? For example, a custom zoom control or something similar.

2. Is there a way to embed the camera view inside a div element? The takePicture() API takes up the entire window on the PlayBook. I want it inside a DIV in the main window of the application.

I'd be grateful if someone could give some guidance.

In WebWorks, you can launch the camera view window (a window displayed on top of your app), but you cannot embed the camera inside your application's web content.

Sorry, I can't speak to what can and cannot be done in Android, as I am not an expert in developing those applications. Not all Android features are supported (I'm not sure whether the camera is available in the BlackBerry Android runtime).

Do you have an existing APK? If so, you can test its compatibility by using the following compatibility guide:

https://bdsc.webapps.BlackBerry.com/Android/documentation/test_your_app_1985225_11.html

Tags: BlackBerry Developers

Similar Questions

  • Callback problem with the video camera API

    I have reviewed most, if not all, of Sean's publicly available camera API examples.

    Some use callbacks and events.

    I'm interested in the video callback of the camera_start_video() function.

    What I want to achieve is:

    1. Get each frame buffer with its timestamp and push it into a list stored in QML. The class doing this subclasses Container.
    2. Have each frame converted and used in the image property of an ImageView in QML.
    3. If possible, transcode each frame, and possibly the video itself, to 480x480px resolution (currently only 720x720px is used for the 1:1 ratio).

    Here are the relevant parts, cherry-picked from the headers and sources:

    //header
    public:
        static void vfCallbackEntry(camera_handle_t handle, camera_buffer_t* buf, void* arg);
        void vfCallback(camera_buffer_t* buf);
    
    //sources
    void VineRecorder::vfCallbackEntry(camera_handle_t handle, camera_buffer_t* buf, void* arg)
    {
        (void)handle;
        ((VineRecorder*)arg)->vfCallback(buf);
    }
    
    void VineRecorder::vfCallback(camera_buffer_t* buf)
    {
        //if (this->capturing()) {
            if (buf->framemetasize == 65536) return;
            if (buf->frametype != CAMERA_FRAMETYPE_NV12) return;
    
            quint64 currentFrameTimestamp = (quint64)buf->frametimestamp;
            //this->setLastFrameTimestamp(currentFrameTimestamp);
            qDebug() << currentFrameTimestamp;
        //}
    }
    
    //line where callback is used
    err = camera_start_video(mHandle, filename, vfCallbackEntry, NULL, NULL);
    

    What confuses me is that vfCallback is not a static member, yet it does not recognize "this", or anything else outside its local scope.

    The application crashes on each of the commented-out lines in vfCallback.

    Did it get a bad "arg", perhaps?

    Can someone maybe show an example where a non-static member is used as a callback? Via a function pointer?

    Thanks for all of your discussions, responses and suggestions. Please help, it's very frustrating.

    Oops... my mistake. I missed this part of your post somehow.

    In any case... your mistake is that you passed NULL for arg in camera_start_video(). From the documentation:

    @param arg The argument passed to the callback functions; it is the
    last argument in the callback functions.

    (A little difficult to put into words, but whatever you pass as the 'arg' argument of camera_start_video() is the same argument passed to your callbacks as "arg".)

    This is shown in most of the samples.
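    To make this concrete, here is a minimal, self-contained sketch of the pattern being described: pass `this` as the `arg` of camera_start_video(), and have a static trampoline cast it back before calling the non-static member. The camera types below are stand-ins so the sketch compiles anywhere; on BB10 the real ones come from camera/camera_api.h.

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins for the real Camera API types (normally from <camera/camera_api.h>);
// only what this sketch needs is declared here.
typedef int camera_handle_t;
struct camera_buffer_t {
    uint64_t frametimestamp;
};

class VineRecorder {
public:
    // Static trampoline with the plain-function signature a C API expects.
    // 'arg' is whatever was passed to camera_start_video() - here, 'this'.
    static void vfCallbackEntry(camera_handle_t handle, camera_buffer_t* buf, void* arg) {
        (void)handle;
        static_cast<VineRecorder*>(arg)->vfCallback(buf);
    }

    // Non-static member: 'this' is valid because the trampoline supplied it.
    void vfCallback(camera_buffer_t* buf) {
        mLastFrameTimestamp = buf->frametimestamp;
    }

    uint64_t lastFrameTimestamp() const { return mLastFrameTimestamp; }

private:
    uint64_t mLastFrameTimestamp = 0;
};

// In the real app, the registration would be:
//   camera_start_video(mHandle, filename, vfCallbackEntry, NULL, this);
// i.e. 'this' as the final 'arg' parameter instead of NULL.
```

    With NULL as arg, the trampoline casts a null pointer and the member call dereferences it, which matches the crash described above.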

  • BB10 WebWorks camera API?

    I was updating an application and found that the HTML5 WebWorks camera API (for example, "blackberry.media.camera.takePicture") is not usable on BB10.

    I dug through the BB10 portion and did not find any direct equivalent of the call once used. Is there another option to use the camera in a WebWorks HTML5 app? Any supported samples?

    Thanks in advance!

    On BB10, this is handled through the Invocation framework.

  • Camera API issue in v32 on iOS 8

    I see that the camera API was introduced in v29, http://www.adobe.com/devnet/digitalpublishingsuitea/articles/is-a started camera -api.html.

    I took the exact code, used both Folio Producer and the v32 container, and placed it in an article; it doesn't ask for camera permission or do anything when the user clicks the camera button.

    According to DPSViewerSDK-2.32, initializePermissions() has been deprecated; how can we access the camera in an article?

    Hi Joseph,

    The Reading API requires that you enable the "Allow Access to Entitlement Information" setting in the web overlay panel to enable access to any of the Reading APIs. If your article uses HTML stacks instead, this setting can be found in Folio Producer.

    Setting in the web overlay panel:

    Setting in Folio Producer (HTML stacks):

    Make sure you have done this, then update/republish the content and try again.

  • Using the camera API

    Hello

    Can the camera API be used in different ways, for example to take screenshots and save them to the library?

    For example, with a button on the screen saying "save to photo library".

    Thank you

    I don't think it's possible. With the camera API, you can allow users to take a photo or select an image from the Photos app, but I don't think it has an option to take a screenshot.

    Couldn't you just provide users with instructions to take a screenshot? (Hold the power button and then press the home button.)

  • Tablet Camera API

    Hello world.

    I'm new to BB tablet development but have been a professional BB phone developer for quite some time. I'm currently researching a project I'd like to undertake that would suit the BB tablet, and I have just a few questions that I would appreciate anyone in the know having a look at:

    1: Is it possible to integrate the camera image within an application?

    2: Is there free source code / an API available for interpreting QR codes?

    3: There seems to be plenty of choice of development options for the tablet; which would be best for an application that makes SOAP-based web service calls, incorporates a camera image as discussed in Q1, and interprets QR codes as discussed in Q2?

    Thank you

    Graeme.

    Graeme,

    Here is some simple code I used to access the camera. It puts a pretty big "live view" on the stage with a button to capture. It then freezes the image on the screen and lets you provide a file name and save it. It puts the file in a "documents" folder in the media directory.

    I have attached a .zip containing only the com.pfp imports for the async JPEG encoder. It still "hangs" while saving, so there is some tweaking to do yet, but it works.

    package {
        import com.pfp.events.JPEGAsyncCompleteEvent;
        import com.pfp.utils.JPEGAsyncEncoder;
    
        import flash.display.Bitmap;
        import flash.display.BitmapData;
        import flash.display.Sprite;
        import flash.display.StageAlign;
        import flash.display.StageScaleMode;
        import flash.events.MouseEvent;
        import flash.filesystem.File;
        import flash.filesystem.FileMode;
        import flash.filesystem.FileStream;
        import flash.media.Camera;
        import flash.media.Video;
        import flash.utils.ByteArray;
    
        import qnx.dialog.AlertDialog;
        import qnx.ui.buttons.LabelButton;
        import qnx.ui.text.TextInput;
    
        public class fishyLightningCam extends Sprite {
            private var bitmapData:BitmapData = new BitmapData(972, 546);
            private var bitmap:Bitmap;
            private var byteArray:ByteArray;
    
            private var file:File = File.documentsDirectory;
            private var fstream:FileStream;
    
            private var captureBTN:LabelButton = new LabelButton();
            private var discardBTN:LabelButton = new LabelButton();
            private var saveBTN:LabelButton = new LabelButton();
            private var fileName:TextInput = new TextInput();
    
            private var cam:Camera = Camera.getCamera("1");
            private var vid:Video = new Video(972, 546);
    
            private var jpgEncoder:JPEGAsyncEncoder;
    
            public function fishyLightningCam() {
                super();
    
                // support autoOrients
                stage.align = StageAlign.TOP_LEFT;
                stage.scaleMode = StageScaleMode.NO_SCALE;
    
                cam.setMode(2592, 1456, 48);
                cam.setQuality(0, 100);     
    
                takePictures();
            }
            private function takePictures():void {
                if (cam != null) {
                    vid.attachCamera(cam);
                    vid.x = 26;
                    vid.y = 5;
                    addChild(vid);
                } else {
                    var noCamAlert:AlertDialog = new AlertDialog();
                    noCamAlert.title = "Camera Error";
                    noCamAlert.message = "No camera was detected. Please ensure no other apps are using the camera.";
                    noCamAlert.addButton("Okay");
                    noCamAlert.show();
                }
    
                captureBTN.label = "Capture!";
                captureBTN.setPosition(400, 550);
                captureBTN.width = 224;
                captureBTN.addEventListener(MouseEvent.CLICK, captureImage);
                addChild(captureBTN);
    
                fileName.prompt = "File Name";
                fileName.setPosition(36, 555);
                fileName.width = 350;
                fileName.visible = false;
                addChild(fileName);
    
                saveBTN.label = "Save";
                saveBTN.setPosition(400, 550);
                saveBTN.width = 100;
                saveBTN.visible = false;
                saveBTN.addEventListener(MouseEvent.CLICK, saveCapture);
                addChild(saveBTN);
    
                discardBTN.label = "Discard";
                discardBTN.setPosition(524, 550);
                discardBTN.width = 100;
                discardBTN.visible = false;
                discardBTN.addEventListener(MouseEvent.CLICK, discardCapture);
                addChild(discardBTN);
            }
            private function captureImage(e:MouseEvent):void {
                bitmapData.draw(vid);
                bitmap = new Bitmap(bitmapData);
                bitmap.x = 26;
                bitmap.y = 5;
                addChild(bitmap);
                removeChild(vid);
    
                captureBTN.visible = false;
                saveBTN.visible = true;
                discardBTN.visible = true;
                fileName.visible = true;
            }
            private function saveCapture(e:MouseEvent):void {
                jpgEncoder = new JPEGAsyncEncoder(100);
                jpgEncoder.addEventListener(JPEGAsyncCompleteEvent.JPEGASYNC_COMPLETE, encodeWIN);
                jpgEncoder.encode(bitmapData);
            }
            private function encodeWIN(e:JPEGAsyncCompleteEvent):void {
                addChild(vid);
                removeChild(bitmap);
                fstream = new FileStream();
                fstream.openAsync(file.resolvePath(fileName.text + ".jpg"), FileMode.WRITE);
                fstream.writeBytes(e.ImageData);
    
                /* byteArray = new ByteArray();
                byteArray = e.ImageData;
    
                byteArray.position = 0;
                fstream.writeBytes(byteArray);
                fstream.close();
                */
    
                captureBTN.visible = true;
                saveBTN.visible = false;
                discardBTN.visible = false;
                fileName.text = "";
                fileName.visible = false;
            }
            private function discardCapture(e:MouseEvent):void {
                addChild(vid);
                removeChild(bitmap);
    
                captureBTN.visible = true;
                saveBTN.visible = false;
                discardBTN.visible = false;
                fileName.text = "";
                fileName.visible = false;
            }
        }
    }
    
  • Troubleshooting the camera API

    I'm trying to understand this camera API thing. I downloaded the test files and put them into a web overlay, but pressing the button does nothing. It looks like the button registers the press, but no dialog comes up asking about options or anything like that.

    Is there some JavaScript in the files that I need to tweak for it to work, or did I do something wrong along the way?

    Thank you!!

    Have you selected the "allow access" option? Have you updated the Adobe Content Viewer to v28?

    The Effects issue of the DPS Tips app (v28) includes an example of the camera API as well as some tips and suggestions.

  • Camera API crash after updating to build 671

    Hello

    My app makes use of the camera. A few days ago I updated my Dev Alpha device to build 671 (from 543), and now my app throws errors and crashes with the following output. My code was originally based on the PhotoBomber sample, so I tried running PhotoBomber itself on the new build and got the following result:

    ## TIMESTAMP pid=15278290 at 757978 ms -> "server thread started"
    ### Server Thread: STARTED
    startScreenEventThread(SUCCESS)
    ### PPS Thread: STARTED (10)
    ERROR:: QNXPpsSubscriptionServer: QNXPpsSubscriptionServer::createObject: (13) Failed to create dir /pps/services/automation/framework
    
    ERROR:: QNXPpsSubscriptionServer: QNXPpsSubscriptionServer::subscribe: Failed to open /pps/services/automation/framework/control?delta,notify=424:00000001
    
    ### TIMESTAMP pid=15278290 at 758168 ms -> "handle incoming events begin (1)"
    
    ### TIMESTAMP pid=15278290 at 758170 ms -> "handle incoming events end (1)"
    
    ### TIMESTAMP pid=15278290 at 758170 ms -> "waiting for events begin"
    
    ### TIMESTAMP pid=15278290 at 758171 ms -> "waiting for events end"
    
    ### TIMESTAMP pid=15278290 at 758171 ms -> "handle incoming events begin (1)"
    
    ### TIMESTAMP pid=15278290 at 758179 ms -> "handle incoming events end (1)"
    
    ### TIMESTAMP pid=15278290 at 758179 ms -> "waiting for events begin"
    CamCommandHandler::S_HandleCommandProcessing: start
    static void* PictureSaveHandler::S_RunSaveThread(void*) : DEBUG : save thread started
    QObject::connect: Cannot connect (null)::shutterFired() to PhotoBomberApp::onShutterFired()
    
    Process 15278290 (photobomber) terminated SIGSEGV code=1 fltno=11 ip=7807460c(/base/usr/lib/libbbcascadesmultimedia.so.1.0.0@+0x4b04) mapaddr=0001460c. ref=00000014
    

    Any ideas? I don't see anyone else complaining about this issue, so maybe I missed something?

    Thank you

    Have you checked whether the camera is returning null?

    I think I've solved the problem (although I don't know how I got both the PhotoBomber app and my own application into the same state). Here's what I did:

    I checked around and saw this message in the console:

    warning: could not load shared library symbols for 9 libraries, e.g. libQtDeclarative.so.4.
    Use the "info launch" command to see the complete list.
    Do you need "set solib-search-path" or "set sysroot"?

    I thought that I must have somehow misconfigured my projects (both of them). So, just to humor myself, I re-downloaded the PhotoBomber sample and it worked! So I'll just recreate my project. Not sure how I got them into this state.

    I'll mark this as "resolved". If you have a theory as to what went wrong, let me know. Thanks for your help!

  • Camera API NV12 frame to AVFrame (FFmpeg)

    I'm trying to use the camera API to stream video, but since the API currently only writes to a file, I'm trying to use the video viewfinder callback.

    void vf_callback(camera_handle_t handle, camera_buffer_t* buf, void* arg)
    

    Given that it only delivers a video frame in NV12 (close enough to YUV420P? I think?), I'm trying to use FFmpeg to convert it. I have already ported FFmpeg and it works fine, but I can't seem to get the frame to convert into an MPEG frame.

    My question is: does anyone know how to encode the video frame from the callback using FFmpeg?

    Here is the dummy AVFrame generation that works fine when the video file is created:

    /* Y */
    for (int y = 0; y < c->height; y++) {
        for (int x = 0; x < c->width; x++) {
            picture->data[0][y * picture->linesize[0] + x] = x + y + a * 3;
        }
    }

    /* Cb and Cr */
    for (int y = 0; y < c->height / 2; y++) {
        for (int x = 0; x < c->width / 2; x++) {
            picture->data[1][y * picture->linesize[1] + x] = 128 + y + a * 2;
            picture->data[2][y * picture->linesize[2] + x] = 64 + x + a * 5;
        }
    }
    

    I found this in the FFmpeg source, but it does not quite work to convert the frame:

    uint8_t *y, *u, *v;
    y = picture->data[0];
    u = picture->data[1];
    v = picture->data[2];
    const uint8_t *src=buf->framebuf;
    
    for (int i = 0; i < (c->height + 1) >> 1; i++)
    {
        for (int j = 0; j < (c->width + 1) >> 1; j++)
        {
            u[j] = *src++ ^ 0x80;
            v[j] = *src++ ^ 0x80;
            y[2 * j] = *src++;
            y[2 * j + 1] = *src++;
            y[picture->linesize[0] + 2 * j] = *src++;
            y[picture->linesize[0] + 2 * j + 1] = *src++;
        }
    
        y += 2 * picture->linesize[0];
        u += picture->linesize[1];
        v += picture->linesize[2];
    }
    

    Here is the callback and the rest of the test code:

    void vf_callback(camera_handle_t handle, camera_buffer_t* buf, void* arg)
    {
        if (buf->frametype != CAMERA_FRAMETYPE_NV12)
        {
            return;
        }
    
        printf("got video buffer of size %d x %d, bytes: %d\n",
                buf->framedesc.nv12.width, buf->framedesc.nv12.height,
                (buf->framedesc.nv12.height + (buf->framedesc.nv12.height / 2))
                        * buf->framedesc.nv12.stride);
    
        av_register_all();
    
        video_encode_example(buf, "/accounts/1000/shared/camera/VID_TEST.mpg",
                CODEC_ID_MPEG1VIDEO);
    }
    
    void video_encode_example(camera_buffer_t* buf, const char *filename,
            enum CodecID codec_id)
    {
        AVCodec *codec;
        AVCodecContext *c = NULL;
        int out_size, outbuf_size;
        FILE *f;
        AVFrame *picture;
        uint8_t *outbuf;
        int had_output = 0;
    
        printf("Encode video file %s\n", filename);
    
        /* find the mpeg1 video encoder */
        codec = avcodec_find_encoder(codec_id);
        if (!codec)
        {
            fprintf(stderr, "codec not found\n");
            exit(1);
        }
    
        c = avcodec_alloc_context3(codec);
        picture = avcodec_alloc_frame();
    
        /* put sample parameters */
        c->bit_rate = 400000;
        /* resolution must be a multiple of two */
    //    c->width = buf->framedesc.nv12.width;
    //    c->height = buf->framedesc.nv12.height;
        c->width = 352;
        c->height = 288;
        /* frames per second */
        c->time_base = (AVRational)
        {   1,25};
        c->gop_size = 10; /* emit one intra frame every ten frames */
        c->max_b_frames = 1;
        c->pix_fmt = PIX_FMT_YUV420P;
    
    //    if(codec_id == CODEC_ID_H264)
    //        av_opt_set(c->priv_data, "preset", "slow", 0);
    
        /* open it */
        if (avcodec_open2(c, codec, NULL) < 0)
        {
            fprintf(stderr, "could not open codec\n");
            exit(1);
        }
    
        f = fopen(filename, "wb");
        if (!f)
        {
            fprintf(stderr, "could not open %s\n", filename);
            exit(1);
        }
    
            /* alloc image and output buffer */
        outbuf_size = 100000 + 12 * c->width * c->height;
        outbuf = (uint8_t *) malloc(outbuf_size);
    
        /* the image can be allocated by any means and av_image_alloc() is
         * just the most convenient way if av_malloc() is to be used */
        av_image_alloc(picture->data, picture->linesize, c->width, c->height,
                c->pix_fmt, 1);
    
        /* encode 1 second of video */
        int a = 0;
        for (; a < 15; a++)
        {
    //      fflush(stdout);
    
            /* Y */
            for (int y = 0; y < c->height; y++) {
                for (int x = 0; x < c->width; x++) {
                    picture->data[0][y * picture->linesize[0] + x] = x + y + a * 3;
                }
            }

            /* Cb and Cr */
            for (int y = 0; y < c->height / 2; y++) {
                for (int x = 0; x < c->width / 2; x++) {
                    picture->data[1][y * picture->linesize[1] + x] = 128 + y + a * 2;
                    picture->data[2][y * picture->linesize[2] + x] = 64 + x + a * 5;
                }
            }
    
    //      uint8_t *y, *u, *v;
    //      y = picture->data[0];
    //      u = picture->data[1];
    //      v = picture->data[2];
    //      const uint8_t *src=buf->framebuf;
    //
    //      for (int i = 0; i < (c->height + 1) >> 1; i++)
    //      {
    //          for (int j = 0; j < (c->width + 1) >> 1; j++)
    //          {
    //              u[j] = *src++ ^ 0x80;
    //              v[j] = *src++ ^ 0x80;
    //              y[2 * j] = *src++;
    //              y[2 * j + 1] = *src++;
    //              y[picture->linesize[0] + 2 * j] = *src++;
    //              y[picture->linesize[0] + 2 * j + 1] = *src++;
    //          }
    //
    //          y += 2 * picture->linesize[0];
    //          u += picture->linesize[1];
    //          v += picture->linesize[2];
    //      }
    
            struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                    PIX_FMT_YUV420P, c->width, c->height, PIX_FMT_RGB8,
                    SWS_FAST_BILINEAR, NULL, NULL, NULL);
    
            AVFrame* outpic = avcodec_alloc_frame();
            av_image_alloc(outpic->data, outpic->linesize, c->width, c->height,
                    PIX_FMT_RGB8, 1);
    
            sws_scale(fooContext, picture->data, picture->linesize, 0, c->height,
                    outpic->data, outpic->linesize);
    
            /* encode the image */
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            had_output |= out_size;
            printf("encoding frame %3d (size=%5d)\n", a, out_size);
            fwrite(outbuf, 1, out_size, f);
        }
    
        /* get the delayed frames */
        for (; out_size || !had_output; a++)
        {
            fflush(stdout);
    
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
            had_output |= out_size;
            printf("write frame %3d (size=%5d)\n", a, out_size);
            fwrite(outbuf, 1, out_size, f);
        }
    
            /* add sequence end code to have a real mpeg file */
        outbuf[0] = 0x00;
        outbuf[1] = 0x00;
        outbuf[2] = 0x01;
        outbuf[3] = 0xb7;
        fwrite(outbuf, 1, 4, f);
        fclose(f);
        free(outbuf);
    
        avcodec_close(c);
        av_free(c);
        av_free(picture->data[0]);
        av_free(picture);
        printf("\n");
    }
    

    I am using this with the HelloVideoCamera sample, so if you want to run it, you can plug the callback into that.

    So I did a bit of preliminary investigation, and it seems that the following fields of the AVFrame struct are of interest for an NV12 -> YUV420P conversion:

    uint8_t* data[];
    uint8_t linesize[];
    int width;
    int height;
    

    It seems that in the case of YUV420P, data[0] is a pointer to the Y pixel plane, data[1] is a pointer to the U pixel plane, and data[2] is a pointer to the V pixel plane.

    You will notice that in the NV12 format there are only 2 planes: a Y plane and a combined UV plane. The trick in this conversion process will be de-interleaving the U and V values of the combined UV plane into separate U and V planes.

    The Y plane should be usable as-is. You shouldn't even need to copy the pixel data.

    picture->data[0] = buf->framebuf;
    picture->linesize[0] = buf->framedesc.nv12.stride;
    

    The code above should be enough to set up the Y plane. If you really want, you could malloc your own pixel data[0] and then memcpy() the Y data from buf->framebuf (line by line!), but that's probably a waste of time. I noticed that you use av_image_alloc(), which you probably want to skip, since you only need to allocate the data[1] and data[2] planes, and will probably have to do that by hand... you might consider implementing a pool rather than calling malloc() in real time.

    In any case, once you have malloc()ed the data[1] and data[2] planes, you should be able to do a de-interleaving copy of the U and V data from the NV12 buffer as follows:

    uint8_t* srcuv = &buf->framebuf[buf->framedesc.nv12.uv_offset];
    uint8_t* destu = picture->data[1];
    uint8_t* destv = picture->data[2];
    picture->linesize[1] = buf->framedesc.nv12.width / 2;
    picture->linesize[2] = picture->linesize[1];

    for (i = 0; i < buf->framedesc.nv12.height / 2; i++) {
        uint8_t* curuv = srcuv;
        for (j = 0; j < buf->framedesc.nv12.width / 2; j++) {
            *destu++ = *curuv++; // even bytes are U
            *destv++ = *curuv++; // odd bytes are V
        }
        srcuv += buf->framedesc.nv12.stride; // uv_stride in later API versions
    }
    

    Note that I am assuming unpadded U and V planes are what you want. If you do allocate your data[1] and data[2] planes with longer strides, advance the dest pointers as necessary at the end of the 'j' loop.

    Now you should have a YUV420P frame that should be compatible with your encoder. Or at least that's how I interpret whatever headers I was looking at.

    Cheers,

    Sean
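    The de-interleaving loop above can be sanity-checked on a desktop with a tiny synthetic buffer. This is a self-contained sketch of the same idea, using plain arrays instead of camera_buffer_t/AVFrame; the function name and dimensions are mine, chosen for illustration.

```cpp
#include <cstdint>

// Split the interleaved UV plane of an NV12 image into separate U and V
// planes, as YUV420P requires. 'srcuv' points at the start of the UV plane;
// 'stride' is the row stride of that plane in bytes (may include padding).
void deinterleave_uv(const uint8_t* srcuv, int width, int height, int stride,
                     uint8_t* destu, uint8_t* destv) {
    for (int i = 0; i < height / 2; i++) {      // UV plane has height/2 rows
        const uint8_t* curuv = srcuv;
        for (int j = 0; j < width / 2; j++) {   // each row has width/2 UV pairs
            *destu++ = *curuv++;                // even bytes are U
            *destv++ = *curuv++;                // odd bytes are V
        }
        srcuv += stride;                        // skip row padding, if any
    }
}
```

    For example, for a 4x2 image with a 6-byte stride, the single UV row U0 V0 U1 V1 (plus 2 padding bytes) splits into U = {U0, U1} and V = {V0, V1}.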

  • Camera API for video

    Is the camera API limited to still camera access, or is there a way to access video functions?

    Thank you

    Darrin

    Just the still camera and photo roll; no video.

  • Why does Firefox need access to the camera?

    For what reason does Firefox need access to my camera?

    Hi ankanamoon:

    Yes, I can confirm that if a web page requests use of the camera via the camera API, that web page can take a picture!
    https://developer.Mozilla.org/en-us/docs/DOM/Using_the_Camera_API

    Hope that answers the camera API part of this question.

  • Acquiring GigE camera data using a LabVIEW CIN or DLL call

    I am trying to acquire data from a Basler runner line-scan CCD camera (GigE).

    Because the NI Vision Development Module is not free, and the camera comes with a C++ and C API plus some examples, I plan on using a CIN or the Call Library Function node in LabVIEW to achieve this. Is this possible?

    I tried to generate a DLL from the camera company's example code, but ran into difficulties.

    I have a little background in C++, but am not familiar with it. The example code provided with the camera is a C++ source file plus a .cproj file; it depends on other files in the camera API directory.

    If I build the project directly, it creates a windowed application, not a DLL. I don't know how to convert the project to a DLL, given that information such as dependencies lives in the .cproj file rather than in the source code.

    Can someone help me with this?

    Don't forget that for acquisition from a GigE camera you only need the Vision Acquisition software, not the entire Vision Development Module. Vision Acquisition is much lower priced and is also delivered free with current NI Vision hardware (for example, if you purchase a PCIe-8231 GigE Vision card). You only need the Vision Development Module if you also want to use pre-made image processing functions. If you are just displaying, saving images to disk, or doing image processing with your own code (for example, manipulating the image pixels in an array), you can do so with just Vision Acquisition.

    It is certainly possible to call DLL functions from LabVIEW by using a Call Library Function node, but it would be quite a lot of work unless you are very familiar with C/C++. Since their driver interface is C++, you would need to write your own DLL containing C wrapper functions. Depending on how many functions you want to expose, this could be a bit of work.

    Eric
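    As a sketch of the wrapper-DLL idea: flatten the vendor's C++ objects into plain C functions that LabVIEW's Call Library Function node can call. The LineScanCamera class below is a made-up stand-in for the Basler driver class; the real class and methods come from the camera SDK headers, and on Windows the wrappers would also need __declspec(dllexport).

```cpp
// Hypothetical stand-in for the vendor's C++ camera class; replace with the
// real class from the camera SDK.
class LineScanCamera {
public:
    bool open() { mOpen = true; return true; }
    // Fills 'dst' with one scan line; returns the number of bytes written.
    int grabLine(unsigned char* dst, int len) {
        if (!mOpen || dst == nullptr) return 0;
        for (int i = 0; i < len; i++) dst[i] = static_cast<unsigned char>(i);
        return len;
    }
private:
    bool mOpen = false;
};

// C wrappers with flat types, suitable for export from a DLL and for
// LabVIEW's Call Library Function node (the object is passed as an opaque void*).
extern "C" {
    void* cam_create() { return new LineScanCamera(); }
    int cam_open(void* h) { return static_cast<LineScanCamera*>(h)->open() ? 1 : 0; }
    int cam_grab(void* h, unsigned char* dst, int len) {
        return static_cast<LineScanCamera*>(h)->grabLine(dst, len);
    }
    void cam_destroy(void* h) { delete static_cast<LineScanCamera*>(h); }
}
```

    The key design point is that each wrapper uses only types LabVIEW can marshal: pointers, integers, and byte arrays.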

  • Cordova 3.4 / File plugin 1.0.1 - resolveLocalFileSystemURL for the camera image

    I'm having a problem with some code that runs on Android 2.3-4.4 and iOS 5-7. In short, it calls the camera API's getPicture method, which successfully returns a file URI. An example of such a URI is as follows:

    file:///accounts/1000/shared/camera/IMG_0000025.jpg

    It then takes the file URI and tries to resolve a file entry from it as follows:

    window.resolveLocalFileSystemURL(fileUri, success, failure);

    Note that it is now URL instead of URI; URI has been deprecated since version 1.0.0 of the File plugin. The error callback (failure in the example above) is always called. It is passed an error object, as the API says it must be. The object has one property, called "code", but instead of being one of the numbers that indicate what kind of file error happened, the code property is just null.

    I also tried changing the URI to exclude the file:// part of the path, but resolveLocalFileSystemURL then returns an error with code 5 (encoding error).

    I made sure that I added this to the config.xml:

    And I also made sure that the config.xml file, after running a cordova build, has permission for shared access:

    <permissions>
       <permit>access_shared</permit>
       <permit>read_device_identifying_information</permit>
    </permissions>

    Does anyone have any insight into what could be the cause?

    Thank you.

    Hello

    Thanks for the report! I've logged an issue to make sure this gets fixed in the next version of the plugins.

    https://issues.Apache.org/jira/browse/CB-6242

    You can work around this by using resolveLocalFileSystemURI instead:

    if (cordova.platformId === "blackberry10") {
        window.resolveLocalFileSystemURL = window.resolveLocalFileSystemURI;
    }
    
  • Camera not working in an HTML5 web application on the BB Z10

    Hello

    I'm working on a simple HTML5 web application which allows users to take a photo and upload it.

    I'm using the following HTML5 code, which works fine in the BB Z10 simulator:

    
    

    However, when running the corporate web application on a real BB Z10, I only get a file browser listing all images previously taken with the camera; the icon to directly access the camera and take a picture is missing.

    Any idea what's going wrong here?

    - Unfortunately, I can't test the web application on the personal side of my BB (this is not an option, as the web application has to work commercially and I cannot access the server from the personal side)

    - WebWorks or any other client-specific API is not an option; we want to stick to plain, client-agnostic HTML5

    - I already found this link, http://supportforums.blackberry.com/t5/Web-and-WebWorks-Development/BB10-Webworks-Camera-API/m-p/245... (exactly the code that I use as well).

    - BB Z10 model STL100-2, software version 10.2.0.424

    Any help would be greatly appreciated

    Kind regards

    Sebastian

    Unfortunately, the camera app is not present in the work perimeter; this means that you cannot invoke the camera application specifically.

    From what you are seeing, it seems that this code is either:

    
    

    (A) relying directly on the camera application; or

    (B) invoking the default picture handler.

    If it is (A), then the approach above via the input element cannot be used. However, if it is (B), then if another photo-taking handler is installed, like Cloudy Pics, it should be able to be invoked. A simple test would be to install Cloudy Pics and see whether it is invoked, or whether you still see issues.

    http://devBlog.BlackBerry.com/?s=cloudy+pics&x=0&y=0

    If you still see issues, it means that the input element specifically invokes the camera application.

    Here, the solution would be to create your own button to invoke the photo taker. The invocation framework allows you to invoke Cloudy Pics directly, or to invoke the generic photo taker (which would default to Cloudy Pics if the camera app is not present).

    https://developer.BlackBerry.com/HTML5/documentation/beta/camera.html

    You would use the invocation as described in the link above, but omit the target ID; this allows the system to choose the best available handler for the invocation, as opposed to explicitly targeting the camera application.

    For a full write-up on this, part 4 of the Cloudy Pics blog series gives a lot of good information and background on this issue.

    http://devBlog.BlackBerry.com/2014/01/cloudy-pics-part-4-cards-and-Enterprise/

  • OpenGL and camera callbacks

    Hello

    I'm trying to combine the camera API with OpenGL (so I can natively generate overlays and such on the camera images) and am encountering a few problems I hope someone here can help me solve.

    I use the new NDK 2.1.0 beta for the PlayBook and do the OpenGL and EGL initialization via the generic bbutil functions provided in the examples. After that, I initialize the camera library, set up a suitable photo viewfinder, and issue the command to take a picture. This all works very well without worries.

    I then use the image_callback of camera_take_photo() to perform additional analysis on the photo taken and extract the image data from it. Everything still works fine, up until the point where I try to generate a texture from the data.

    I have a global GLuint variable named tex (as in the examples, for simplicity), and I call glGenTextures(1, &tex) in the callback function, which causes a segfault for me. It made no difference when I used a GLuint* tex with the appropriate variable adaptations, and it also did not work when I passed the GLuint as the last parameter of the callback function.

    Does anyone have experience with the camera API callbacks, or does anybody know whether OpenGL is fully initialized when the callback functions run? The debugger displays a large number of threads, but I'm not skilled enough in BB programming yet to know what is going wrong.

    Any help would be appreciated,

    OK, so after a lot more trial and error and exploration, it turns out that OpenGL is indeed not available during the execution of the callback functions, likely because they run on a different thread, where no EGL context is current.

    Just thought I should let people here know, to maybe save someone the headaches.
