Tablet Camera API

Hello world.

I'm new to BB tablet development, but I've been a professional BB phone developer for a long time. I'm currently researching a project I'd like to undertake that would suit the BB tablet, and I have just a few questions that I would appreciate anyone in the know having a look at:

1: is it possible to integrate the camera's image within an application?

2: is there free source code / an API available for interpreting QR codes?

3: there seems to be plenty of choice of development environments for the tablet; which would be best for an application that makes SOAP-based web service calls, incorporates the camera's image as discussed in Q1, and interprets QR codes as discussed in Q2?

Thank you

Graeme.

Graeme,

Here is some simple code which I used to access the camera. It puts a fairly big 'live view' on the stage with a button to capture; it then freezes the image on the screen and lets you provide a file name and save it. It puts the file in a "documents" folder in the media directory.

I have attached a .zip containing only the com.pfp imports for the async JPEG encoder. It still "hangs" while encoding, so there's some tweaking to do yet, but it works.

package {
    import com.pfp.events.JPEGAsyncCompleteEvent;
    import com.pfp.utils.JPEGAsyncEncoder;

    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.MouseEvent;
    import flash.filesystem.File;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;
    import flash.media.Camera;
    import flash.media.Video;
    import flash.utils.ByteArray;

    import qnx.dialog.AlertDialog;
    import qnx.ui.buttons.LabelButton;
    import qnx.ui.text.TextInput;

    public class fishyLightningCam extends Sprite {
        private var bitmapData:BitmapData = new BitmapData(972, 546);
        private var bitmap:Bitmap;
        private var byteArray:ByteArray;

        private var file:File = File.documentsDirectory;
        private var fstream:FileStream;

        private var captureBTN:LabelButton = new LabelButton();
        private var discardBTN:LabelButton = new LabelButton();
        private var saveBTN:LabelButton = new LabelButton();
        private var fileName:TextInput = new TextInput();

        private var cam:Camera = Camera.getCamera("1");
        private var vid:Video = new Video(972, 546);

        private var jpgEncoder:JPEGAsyncEncoder;

        public function fishyLightningCam() {
            super();

            // support autoOrients
            stage.align = StageAlign.TOP_LEFT;
            stage.scaleMode = StageScaleMode.NO_SCALE;

            cam.setMode(2592, 1456, 48);
            cam.setQuality(0, 100);     

            takePictures();
        }
        private function takePictures():void {
            if (cam != null) {
                vid.attachCamera(cam);
                vid.x = 26;
                vid.y = 5;
                addChild(vid);
            } else {
                var noCamAlert:AlertDialog = new AlertDialog();
                noCamAlert.title = "Camera Error";
                noCamAlert.message = "No camera was detected. Please ensure no other apps are using the camera.";
                noCamAlert.addButton("Okay");
                noCamAlert.show();
            }

            captureBTN.label = "Capture!";
            captureBTN.setPosition(400, 550);
            captureBTN.width = 224;
            captureBTN.addEventListener(MouseEvent.CLICK, captureImage);
            addChild(captureBTN);

            fileName.prompt = "File Name";
            fileName.setPosition(36, 555);
            fileName.width = 350;
            fileName.visible = false;
            addChild(fileName);

            saveBTN.label = "Save";
            saveBTN.setPosition(400, 550);
            saveBTN.width = 100;
            saveBTN.visible = false;
            saveBTN.addEventListener(MouseEvent.CLICK, saveCapture);
            addChild(saveBTN);

            discardBTN.label = "Discard";
            discardBTN.setPosition(524, 550);
            discardBTN.width = 100;
            discardBTN.visible = false;
            discardBTN.addEventListener(MouseEvent.CLICK, discardCapture);
            addChild(discardBTN);
        }
        private function captureImage(e:MouseEvent):void {
            bitmapData.draw(vid);
            bitmap = new Bitmap(bitmapData);
            bitmap.x = 26;
            bitmap.y = 5;
            addChild(bitmap);
            removeChild(vid);

            captureBTN.visible = false;
            saveBTN.visible = true;
            discardBTN.visible = true;
            fileName.visible = true;
        }
        private function saveCapture(e:MouseEvent):void {
            jpgEncoder = new JPEGAsyncEncoder(100);
            jpgEncoder.addEventListener(JPEGAsyncCompleteEvent.JPEGASYNC_COMPLETE, encodeWIN);
            jpgEncoder.encode(bitmapData);
        }
        private function encodeWIN(e:JPEGAsyncCompleteEvent):void {
            addChild(vid);
            removeChild(bitmap);
            fstream = new FileStream();
            fstream.openAsync(file.resolvePath(fileName.text + ".jpg"), FileMode.WRITE);
            fstream.writeBytes(e.ImageData);
            fstream.close();

            /* byteArray = new ByteArray();
            byteArray = e.ImageData;

            byteArray.position = 0;
            fstream.writeBytes(byteArray);
            fstream.close();
            */

            captureBTN.visible = true;
            saveBTN.visible = false;
            discardBTN.visible = false;
            fileName.text = "";
            fileName.visible = false;
        }
        private function discardCapture(e:MouseEvent):void {
            addChild(vid);
            removeChild(bitmap);

            captureBTN.visible = true;
            saveBTN.visible = false;
            discardBTN.visible = false;
            fileName.text = "";
            fileName.visible = false;
        }
    }
}

Tags: BlackBerry Developers

Similar Questions

  • Callback problem with the video camera API

    I have reviewed most, if not all, of Sean's publicly available camera API examples.

    Some use callbacks and events.

    I'm interested in the video callback of the camera_start_video() function.

    What I want to achieve is:

    1. Get each frame buffer's timestamp and send it to a list stored in QML (the class doing this subclasses Container).
    2. Translate each image and use it in an ImageView's image property in QML.
    3. If possible, transcode each image, and possibly the video itself, to a resolution of 480x480px (currently only 720x720px is used for the 1:1 ratio).

    Here are some cherry-picked lines from the headers and sources:

    //header
    public:
        static void vfCallbackEntry(camera_handle_t handle, camera_buffer_t* buf, void* arg);
        void vfCallback(camera_buffer_t* buf);
    
    //sources
    void VineRecorder::vfCallbackEntry(camera_handle_t handle, camera_buffer_t* buf, void* arg)
    {
        (void)handle;
        ((VineRecorder*)arg)->vfCallback(buf);
    }
    
    void VineRecorder::vfCallback(camera_buffer_t* buf)
    {
        //if (this->capturing()) {
            if (buf->framemetasize == 65536) return;
            if (buf->frametype != CAMERA_FRAMETYPE_NV12) return;
    
            quint64 currentFrameTimestamp = (quint64)buf->frametimestamp;
            //this->setLastFrameTimestamp(currentFrameTimestamp);
            qDebug() << currentFrameTimestamp;
        //}
    }
    
    //line where callback is used
    err = camera_start_video(mHandle, filename, vfCallbackEntry, NULL, NULL);
    

    What confuses me is that vfCallback is not a static member, and still it does not recognize "this", or anything else outside its local scope.

    The application crashes on each of the commented-out lines in vfCallback.

    Did it get a bad "arg", perhaps?

    Can someone maybe show an example of how a non-static member function is used as a callback? A function pointer?

    Thanks for all of your discussions, responses and suggestions. Please help, it's very frustrating.

    Whoops... my mistake. I missed this part of your post somehow.

    In any case... your mistake is that you passed NOTHING for arg in camera_start_video().  From the documentation:

    @param arg the argument passed to the callback functions, which is the
    * last argument in the callback functions.

    (It's a little difficult to put into words, but whatever you pass as the 'arg' argument of camera_start_video() is the same argument passed to your callbacks as "arg".)

    This is shown in most of the samples.

  • BB10 Webworks camera API?

    I was updating an application and found that the HTML5 WebWorks camera API (for example, "blackberry.media.camera.takePicture") is not usable on BB10.

    I dug through the BB10 documentation and did not find any direct equivalent to the call that was once used.  Is there another option to use the camera in a WebWorks HTML5 app?  Any supported samples?

    Thanks in advance!

    In BB10, it's handled through the Invoke framework.

  • The camera API in v32 on iOS 8

    I see that camera API was introduced in v29, http://www.adobe.com/devnet/digitalpublishingsuitea/articles/is-a started camera -api.html.

    I took the exact code and, with both Folio Producer and the v32 container, placed it in an article; it doesn't ask for camera permission or do anything when the user clicks the camera button.

    According to DPSViewerSDK-2.32, initializePermissions() has been deprecated, so how can we access the camera in an article?

    Hi Joseph,

    The Reading API requires that you enable "Allow Access to Entitlement Information" in the Web Content Overlay panel to enable access to any of the Reading APIs.  If you're using HTML stacks instead, this setting can be found in Folio Producer.

    Setting in the Web Content Overlay panel:

    Setting in Folio Producer (HTML stacks):

    Make sure that you have done this, update/re-publish the content, and try again.

  • Using the camera API

    Hello

    Can the camera API be used in other ways, for example to take screenshots and save them to the photo library?

    For example, with a button on the screen saying "save to photo library".

    Thank you

    I don't think it's possible. With the camera API, you can allow users to take a photo or select an image from the Photos app, but I don't think it has an option to take a screenshot.

    Couldn't you just provide instructions for taking a screenshot? (Hold the power button and then press the home button.)

  • Tablet - API question

    Hello

    Because I'm not too familiar with the BlackBerry Tablet API or the Adobe AIR APIs, I have two questions on whether or not the following is possible:

    1. Can I configure the device's VPN settings programmatically?

    2. Can I lock or wipe the BlackBerry PlayBook programmatically?

    Thanks for all the answers!

    I personally haven't seen a demo where someone locks the screen.  It could exist; I've just not seen it.

    With regard to wiping: if there is a Java API for that, then perhaps it would be possible in a future API.  I'd just be surprised if such an API even exists on the phone side.

  • Camera API

    I have two questions about access to the photo API on the PlayBook:

    1. How can I add custom icons or controls on the camera? In other words, is it possible to overlay something on top of the default camera view? For example, a custom zoom control or something similar.

    2. Is there a way to incorporate the camera view inside a div element? The takePicture() API takes up the entire window of the PlayBook. I want it inside a DIV in the main window of the application.

    I'd be grateful if someone could give some guidance.

    In WebWorks, you can launch the camera view window (a window displayed on top of your app), but you cannot embed the camera view in your application's web content.

    Sorry, I can't speak to what can and cannot be done in Android, as I am not an expert in developing those applications.  Not all Android features are supported (I'm not sure whether the camera is available in the BlackBerry Runtime for Android).

    Do you have an existing APK?  If so, try testing its compatibility using the following documentation:

    https://bdsc.webapps.BlackBerry.com/Android/documentation/test_your_app_1985225_11.html

  • Troubleshooting the camera API

    I'm trying to understand this photo API business. I downloaded the test files and put them in a web overlay, but pressing the button does nothing. It looks like the press registers, but no dialog asks about options or anything like that.

    Is there some JavaScript in the files that I need to tweak for it to work, or did I do something wrong along the way?

    Thank you!!

    Have you selected the "Allow..." option? Have you updated to the v28 Adobe Content Viewer?

    The DPS Tips issue (v28) includes an example of the camera API as well as some tips and suggestions.

  • Camera API crashes after updating to build 671

    Hello

    My app makes use of the camera. A few days ago I updated my Dev Alpha device to build 671 (from 543), and now my app errors out and crashes with the following output. I originally based my code on the PhotoBomber sample, so I tried running PhotoBomber on the new build and got the same result:

    ## TIMESTAMP pid=15278290 at 757978 ms -> "server thread started"
    ### Server Thread: STARTED
    startScreenEventThread(SUCCESS)
    ### PPS Thread: STARTED (10)
    ERROR:: QNXPpsSubscriptionServer: QNXPpsSubscriptionServer::createObject: (13) Failed to create dir /pps/services/automation/framework
    
    ERROR:: QNXPpsSubscriptionServer: QNXPpsSubscriptionServer::subscribe: Failed to open /pps/services/automation/framework/control?delta,notify=424:00000001
    
    ### TIMESTAMP pid=15278290 at 758168 ms -> "handle incoming events begin (1)"
    
    ### TIMESTAMP pid=15278290 at 758170 ms -> "handle incoming events end (1)"
    
    ### TIMESTAMP pid=15278290 at 758170 ms -> "waiting for events begin"
    
    ### TIMESTAMP pid=15278290 at 758171 ms -> "waiting for events end"
    
    ### TIMESTAMP pid=15278290 at 758171 ms -> "handle incoming events begin (1)"
    
    ### TIMESTAMP pid=15278290 at 758179 ms -> "handle incoming events end (1)"
    
    ### TIMESTAMP pid=15278290 at 758179 ms -> "waiting for events begin"
    CamCommandHandler::S_HandleCommandProcessing: start
    static void* PictureSaveHandler::S_RunSaveThread(void*) : DEBUG : save thread started
    QObject::connect: Cannot connect (null)::shutterFired() to PhotoBomberApp::onShutterFired()
    
    Process 15278290 (photobomber) terminated SIGSEGV code=1 fltno=11 ip=7807460c(/base/usr/lib/libbbcascadesmultimedia.so.1.0.0@+0x4b04) mapaddr=0001460c. ref=00000014
    

    Any ideas? I don't see anyone else complaining about this issue, so maybe I missed something?

    Thank you

    Have you checked whether the camera returns null?

    I think I've solved the problem (although I don't know how I got both the PhotoBomber app and my application into the same state). Here's what I did:

    I poked around and saw this message in the console:

    warning: Could not load shared library symbols for 9 libraries, e.g. libQtDeclarative.so.4.
    Use the "info sharedlibrary" command to see the complete list.
    Do you need "set solib-search-path" or "set sysroot"?

    I figured I must have somehow misconfigured my project (both projects, actually). So, just to humor myself, I re-downloaded the PhotoBomber sample and it worked! So I'll just recreate my project. Not sure how I got them into this state.

    I'll mark this as "resolved". If you have a theory as to what went wrong, let me know. Thanks for your help!

  • Camera API NV12 frame to AVFrame (FFmpeg)

    I'm trying to use the camera API to stream video, but since it currently only writes to a file, I'm trying to use the video viewfinder callback.

    void vf_callback(camera_handle_t handle, camera_buffer_t* buf, void* arg)
    

    Given that it only provides a video frame in NV12 (close enough to YUV420P? I think?), I'm trying to use FFmpeg to convert it. I've already ported FFmpeg and it works fine, but I can't seem to get the frame to convert into an MPEG frame.

    My question is, does anyone know how to encode the video frame from the callback using FFmpeg?

    Here is a dummy AVFrame generator that works well when the video file is created:

    /* Y */
    for (int y = 0; y < c->height; y++) {
        for (int x = 0; x < c->width; x++) {
            picture->data[0][y * picture->linesize[0] + x] = x + y + a * 3;
        }
    }
    
    /* Cb and Cr */
    for (int y = 0; y < c->height / 2; y++) {
        for (int x = 0; x < c->width / 2; x++) {
            picture->data[1][y * picture->linesize[1] + x] = 128 + y + a * 2;
            picture->data[2][y * picture->linesize[2] + x] = 64 + x + a * 5;
        }
    }
    

    I found this in the FFmpeg source, but it does not quite work to convert the image:

    uint8_t *y, *u, *v;
    y = picture->data[0];
    u = picture->data[1];
    v = picture->data[2];
    const uint8_t *src=buf->framebuf;
    
    for (int i = 0; i < (c->height + 1) >> 1; i++)
    {
        for (int j = 0; j < (c->width + 1) >> 1; j++)
        {
            u[j] = *src++ ^ 0x80;
            v[j] = *src++ ^ 0x80;
            y[2 * j] = *src++;
            y[2 * j + 1] = *src++;
            y[picture->linesize[0] + 2 * j] = *src++;
            y[picture->linesize[0] + 2 * j + 1] = *src++;
        }
    
        y += 2 * picture->linesize[0];
        u += picture->linesize[1];
        v += picture->linesize[2];
    }
    

    Here is the callback and the other test code:

    void vf_callback(camera_handle_t handle, camera_buffer_t* buf, void* arg)
    {
        if (buf->frametype != CAMERA_FRAMETYPE_NV12)
        {
            return;
        }
    
        printf("got video buffer of size %d x %d, bytes: %d\n",
                buf->framedesc.nv12.width, buf->framedesc.nv12.height,
                (buf->framedesc.nv12.height + (buf->framedesc.nv12.height / 2))
                        * buf->framedesc.nv12.stride);
    
        av_register_all();
    
        video_encode_example(buf, "/accounts/1000/shared/camera/VID_TEST.mpg",
                CODEC_ID_MPEG1VIDEO);
    }
    
    void video_encode_example(camera_buffer_t* buf, const char *filename,
            enum CodecID codec_id)
    {
        AVCodec *codec;
        AVCodecContext *c = NULL;
        int out_size, outbuf_size;
        FILE *f;
        AVFrame *picture;
        uint8_t *outbuf;
        int had_output = 0;
    
        printf("Encode video file %s\n", filename);
    
        /* find the mpeg1 video encoder */
        codec = avcodec_find_encoder(codec_id);
        if (!codec)
        {
            fprintf(stderr, "codec not found\n");
            exit(1);
        }
    
        c = avcodec_alloc_context3(codec);
        picture = avcodec_alloc_frame();
    
        /* put sample parameters */
        c->bit_rate = 400000;
        /* resolution must be a multiple of two */
    //    c->width = buf->framedesc.nv12.width;
    //    c->height = buf->framedesc.nv12.height;
        c->width = 352;
        c->height = 288;
        /* frames per second */
        c->time_base = (AVRational)
        {   1,25};
        c->gop_size = 10; /* emit one intra frame every ten frames */
        c->max_b_frames = 1;
        c->pix_fmt = PIX_FMT_YUV420P;
    
    //    if(codec_id == CODEC_ID_H264)
    //        av_opt_set(c->priv_data, "preset", "slow", 0);
    
        /* open it */
        if (avcodec_open2(c, codec, NULL) < 0)
        {
            fprintf(stderr, "could not open codec\n");
            exit(1);
        }
    
        f = fopen(filename, "wb");
        if (!f)
        {
            fprintf(stderr, "could not open %s\n", filename);
            exit(1);
        }
    
            /* alloc image and output buffer */
        outbuf_size = 100000 + 12 * c->width * c->height;
        outbuf = (uint8_t *) malloc(outbuf_size);
    
        /* the image can be allocated by any means and av_image_alloc() is
         * just the most convenient way if av_malloc() is to be used */
        av_image_alloc(picture->data, picture->linesize, c->width, c->height,
                c->pix_fmt, 1);
    
        /* encode 1 second of video */
        int a = 0;
        for (; a < 15; a++)
        {
    //      fflush(stdout);
    
            /* Y */
            for (int y = 0; y < c->height; y++) {
                for (int x = 0; x < c->width; x++) {
                    picture->data[0][y * picture->linesize[0] + x] = x + y + a * 3;
                }
            }
    
            /* Cb and Cr */
            for (int y = 0; y < c->height / 2; y++) {
                for (int x = 0; x < c->width / 2; x++) {
                    picture->data[1][y * picture->linesize[1] + x] = 128 + y + a * 2;
                    picture->data[2][y * picture->linesize[2] + x] = 64 + x + a * 5;
                }
            }
    
    //      uint8_t *y, *u, *v;
    //      y = picture->data[0];
    //      u = picture->data[1];
    //      v = picture->data[2];
    //      const uint8_t *src=buf->framebuf;
    //
    //      for (int i = 0; i < (c->height + 1) >> 1; i++)
    //      {
    //          for (int j = 0; j < (c->width + 1) >> 1; j++)
    //          {
    //              u[j] = *src++ ^ 0x80;
    //              v[j] = *src++ ^ 0x80;
    //              y[2 * j] = *src++;
    //              y[2 * j + 1] = *src++;
    //              y[picture->linesize[0] + 2 * j] = *src++;
    //              y[picture->linesize[0] + 2 * j + 1] = *src++;
    //          }
    //
    //          y += 2 * picture->linesize[0];
    //          u += picture->linesize[1];
    //          v += picture->linesize[2];
    //      }
    
            struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                    PIX_FMT_YUV420P, c->width, c->height, PIX_FMT_RGB8,
                    SWS_FAST_BILINEAR, NULL, NULL, NULL);
    
            AVFrame* outpic = avcodec_alloc_frame();
            av_image_alloc(outpic->data, outpic->linesize, c->width, c->height,
                    PIX_FMT_RGB8, 1);
    
            sws_scale(fooContext, picture->data, picture->linesize, 0, c->height,
                    outpic->data, outpic->linesize);
    
            /* encode the image */
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            had_output |= out_size;
            printf("encoding frame %3d (size=%5d)\n", a, out_size);
            fwrite(outbuf, 1, out_size, f);
        }
    
        /* get the delayed frames */
        for (; out_size || !had_output; a++)
        {
            fflush(stdout);
    
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
            had_output |= out_size;
            printf("write frame %3d (size=%5d)\n", a, out_size);
            fwrite(outbuf, 1, out_size, f);
        }
    
            /* add sequence end code to have a real mpeg file */
        outbuf[0] = 0x00;
        outbuf[1] = 0x00;
        outbuf[2] = 0x01;
        outbuf[3] = 0xb7;
        fwrite(outbuf, 1, 4, f);
        fclose(f);
        free(outbuf);
    
        avcodec_close(c);
        av_free(c);
        av_free(picture->data[0]);
        av_free(picture);
        printf("\n");
    }
    

    I'm using this with the HelloVideoCamera sample, so if you want to run it, you can plug the callback into that.

    So I did a bit of preliminary digging, and it seems that the following fields of the AVFrame struct are of interest for an NV12 -> YUV420P conversion:

    uint8_t* data[];
    uint8_t linesize[];
    int width;
    int height;
    

    It seems that in the case of YUV420P, data[0] is a pointer to the Y pixel plane, data[1] is a pointer to the U pixel plane, and data[2] is a pointer to the V pixel plane.

    You will notice that in the NV12 format there are only 2 planes: a Y plane and a combined UV plane.  The trick in this conversion process will be de-interleaving the U and V values of the combined UV plane into separate U and V planes.

    The Y plane should be usable as-is.  You shouldn't even need to copy the pixel data.

    picture->data[0] = buf->framebuf;
    picture->linesize[0] = buf->framedesc.nv12.stride;
    

    The code above should be enough to put the Y plane in place.  If you really want, you could malloc your own pixel data[0] and then memcpy() the Y data from buf->framebuf (line by line!), but it's probably a waste of time.  I noticed that you use av_image_alloc(), which you probably want to skip, since you probably only want to alloc the data[1] and data[2] planes and will probably have to do it by hand... you may want to consider implementing a buffer pool rather than calling malloc() in real time.

    In any case, once you have data[1] and data[2] malloc()ed, you should be able to de-interleave and copy the U and V data from the NV12 buffer as follows:

    uint8_t* srcuv = &buf->framebuf[buf->framedesc.nv12.uv_offset];
    uint8_t* destu = picture->data[1];
    uint8_t* destv = picture->data[2];
    picture->linesize[1] = buf->framedesc.nv12.width / 2;
    picture->linesize[2] = picture->linesize[1];
    
    for (i = 0; i < buf->framedesc.nv12.height / 2; i++) {
        uint8_t* curuv = srcuv;
        for (j = 0; j < buf->framedesc.nv12.width / 2; j++) {
            *destu++ = *curuv++;
            *destv++ = *curuv++;
        }
        srcuv += buf->framedesc.nv12.stride; // uv_stride in later API versions
    }
    

    Note I'm guessing that a stride of width/2 is what you want for the U and V planes.  If you instead allocate the data[1] and data[2] planes with longer strides, advance ("stride") the dest pointers as necessary at the end of the 'j' loop.

    Now you should have a YUV420P frame that should be compatible with your encoder.  Or at least that's my interpretation of whatever headers I was looking at.

    Cheers,

    Sean

  • Camera API for video

    Is the camera API limited to still-photo access, or is there a way to access video functions?

    Thank you

    Darrin

    Just the camera and photo roll, no videos.

  • Why the access to the camera?

    For what reason would Firefox need access to my camera?

    Hi ankanamoon:

    Yes, I can confirm that if a web page requests use of the camera via the camera API, that web page can take a picture!
    https://developer.Mozilla.org/en-us/docs/DOM/Using_the_Camera_API

    Hope that answers the camera API part of this question.

  • Re: Which CD-ROM/DVD-ROM drive will fit in my Toshiba Portege M700 Tablet

    My Toshiba Portege M700 Tablet came without a CD/DVD-ROM drive. What brand/model will fit in this machine, and where is the best (and cheapest) place to source this part?

    Best regards

    Don

    Hello

    I found some info that the Portege M700 comes with a Panasonic UJ-852STJ-Z DVD Super-Multi drive.
    I wonder why yours came without an optical drive.
    Is another device installed in place of the ODD?

  • HP Stream 7 Tablet 5709: HP Stream 7 will not reset or recover

    I went to reset the tablet to factory settings, deleting all my files, etc. The tablet has a 32 GB card in addition to the main storage.

    During the recovery process, a message appeared saying that the reset was unable to complete. A Cancel button was presented, which I pushed.

    When I went to restart the tablet, the HP logo came up but nothing else. I shut down and restarted several times. The tablet is now a brick...

    I then connected the tablet to my PC. The Windows 10 PC does not recognize the tablet.

    What can I do to at least restore to factory settings?

    You can order a USB recovery drive from HP or do a fresh install of Windows 10 using the downloadable Microsoft installation image; instructions are here.

    If you have connected the charging cable to the PC's USB port and the PC does not recognize the Stream tablet, that's normal.

  • Acquire GigE camera data using labview CIN or DLL to call.

    I am trying to acquire data from a Basler Runner line-scan CCD camera (GigE).

    Because the NI Vision Development Module is not free, and the camera vendor provides a C++/C API and also some examples, I plan on using the CIN function or Call Library Function node in LabVIEW to achieve this. Is this possible?

    I tried to generate a DLL from the vendor's example code for the camera, but ran into difficulties.

    I have only a little background in C++ and am not familiar with it. The C++ example code provided with the camera is a single source file plus a .cproj file; it depends on other files in the camera API directory.

    If I build the project directly, it creates a windowed application, not a DLL. I don't know how to convert it to a DLL project, given that information such as dependencies lives in the .cproj file rather than in the source code.

    Can someone help me with this?

    Don't forget that for acquisition from a GigE camera, you only need the Vision Acquisition module, not the entire Vision Development Module. Vision Acquisition is much lower priced and is also delivered free with current NI Vision hardware (for example, with the purchase of a PCIe-8231 GigE Vision card). You only need the Vision Development Module if you also want to use pre-made image-processing functions. If you are just displaying, saving images to disk, or processing images with your own code (for example, manipulating the image pixels in an array), you can do so with just Vision Acquisition.

    It is certainly possible to call DLL functions from LabVIEW using a Call Library Function node, but it would be quite a lot of work unless you are very familiar with C/C++. Since their driver interface is C++, you would need to create C wrapper functions in a DLL that you write. Depending on how many functions you want to expose, this could be a fair bit of work.

    Eric
