Points vs. pixels

Hi, I'm working on a simple button in Photoshop. The button requires a border stroke of 1 pixel. In Photoshop, it appears that you have to use the points system. My problem is that 1 point is not equal to 1 pixel, and 0.75 pt doesn't seem to equal 1 pixel either. What is the correct conversion in Photoshop?   Thank you

Don't worry, I figured this out - schoolboy error.

Recently I rebuilt my Mac and reinstalled CC. The Photoshop preset had reverted to points per cm instead of per inch (dpi).

Thank you
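
For anyone else who hits this: points and pixels only coincide at 72 ppi, so the fix is the document/ruler resolution, not a magic ratio. A quick sketch of the arithmetic in plain JavaScript (the helper name is mine):

    // px = pt * resolution / 72; at 72 ppi, 1 pt = 1 px exactly
    function pointsToPixels(pt, ppi) {
        return pt * ppi / 72;
    }
    pointsToPixels(1, 72);    // 1 px - strokes line up again
    pointsToPixels(0.75, 96); // 1 px - 0.75 pt only equals 1 px at 96 ppi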

Tags: Photoshop

Similar Questions

  • Points to pixels - how?

I have a customer who wants his line thickness to be 3 points. He also said that it is 1.06 mm when his logo is 100 mm in height.

Just how the logo height affects the line thickness I don't know, unless he wants it at this width so that if the logo is scaled, it stays in proportion.

    Is there a formula that translates points to pixels?

I work at 600 dpi on a logo that is 910 pixels high. What should my line thickness be?

My calculations indicate that at this resolution, 1.06 mm equals 25.039 pixels.

This seems a bit on the large side, or am I, as I suspect, missing something?

    Help please

    Howard Walker

You could also look at Inkscape, which is free and can produce vector graphics that you can use in print as well as in Fireworks.

    http://www.Inkscape.org/

Fireworks is not the best app to use to design a vector logo. FW is intended for screen graphics; it isn't a general drawing/illustration tool.
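
    For reference, the formula asked for above is simple, and the numbers in this thread check out; a small sketch using its values (600 dpi, 3 pt, 1.06 mm):

    // points to pixels: px = pt * dpi / 72
    // millimetres to pixels: px = mm / 25.4 * dpi
    function ptToPx(pt, dpi) { return pt * dpi / 72; }
    function mmToPx(mm, dpi) { return mm / 25.4 * dpi; }
    ptToPx(3, 600);    // 25 px line thickness at 600 dpi
    mmToPx(1.06, 600); // 25.04 px - so the figure above is 25.039, not 25,039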

  • Align the vector shape points to pixels

I'm trying to write a JS script for Photoshop CS5 that aligns all points in the selected vector layer to the nearest whole pixel.  This is extremely useful when you generate artwork for web pages, mobile applications, etc.

I understand how to get the list of vector points (via doc.pathItems[].subPathItems[].pathPoints[]), and it is easy to quantize these values to the nearest integer.  The bit that I can't figure out is how to apply the quantized values back to the document.  I tried simply writing the values into the pathPoints[] objects obtained, but they seem to be read-only and any changes are not picked up by the document.

    Anyone have advice on the final piece of this puzzle?

The scripting guide has an example script that creates a new path (rather than editing an existing one), so another idea would be to build a new path and layer based on a complete copy of the original and then delete the original layer.  This seems a bit of a long way round.

See Document.suspendHistory() in the object model viewer.

    "Or look for the" suspendHistory "here or in the www.ps-scripts.com, you can probably find some examples.

  • Does Adobe Illustrator use points as its SVG user unit?

    I've been searching for a while now, but I can't find anything. I noticed that if I save a 10x10 mm box created in Illustrator as an SVG, it has a size of 28x28 user units in the SVG, although the W3C standard would suggest a 35x35 box.

    If I do exactly the same in Inkscape, I get a 35x35 box in SVG.

    After searching for a bit, I discovered that Illustrator works with points behind the scenes and converts 1 mm to 2.8346 pt (while 1 mm is around 3.5433 px, as in the W3C standard).

    So my question is: does Adobe Illustrator use points as its SVG user unit?

    Illustrator's basic unit is the point (1/72 of an inch). It is also the measure that Illustrator arbitrarily assumes for pixels (whereas Inkscape assumes 90 pixels per inch by default, but provides a user setting in preferences).

    10 mm is 28.347 pt. If you draw a 10 mm square in Illustrator and save it as an SVG, that's why you see 28.347 in the resulting SVG code (which, of course, rounds to 28 pixels).

    So when you work in Illustrator for SVG, set your rulers to points (or "pixels" if you must; it's the same thing in Illustrator) and size your objects to the pixel dimensions you want in the final SVG.

    If you want to work with rulers in millimeters when drawing in Illustrator and want a measured 10 mm to come out as 35 pixels when saving as SVG, then before you save the SVG, scale everything to 123.47 percent.

    JET
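
    A small sketch of the arithmetic JET describes, including where the 123.47 percent comes from (assumptions: 72 pt per inch in Illustrator, 90 px per inch in Inkscape/W3C):

    // Illustrator: 1 pt = 1/72 in, so 10 mm -> 10 / 25.4 * 72 = 28.346 user units
    // Inkscape/W3C default: 90 px/in, so 10 mm -> 10 / 25.4 * 90 = 35.433 px
    var illustratorUnits = 10 / 25.4 * 72;                 // 28.346
    var targetPx = 35;                                     // the desired SVG size
    var scalePercent = targetPx / illustratorUnits * 100;  // 123.47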

  • InDesign CS6 documents do not display at the same pixel resolution

    I just opened, in InDesign CS6, a document originally prepared in CS5.5. This document is in pixels. At 100%, the document should be pixel-for-pixel with my monitor (27" iMac at about 104 ppi), but it seems that it is only 100% if you calculate at 72 dpi. A 980-pixel-wide document is 13 inches on my screen! This is different from the previous way of showing pixel documents. And I don't see any preference that could bring the resolution back to the old display scale.

    In Adobe Acrobat, you have preferences to control this.

    This sounds like a bug to me.

    Spen says:

    Yes, I read your first post about the 1:1 ratio in ID. That's fine, but my questions remain. Why can I no longer specify text and strokes using pixels? And why can I no longer see my files as they will appear in a browser? These features were available in CS5; they are now gone. Why?

    Your comment confuses me.

    Did you mean to say that you can no longer configure ID to display text and stroke units as pixels?

    That is not the same thing as not being able to specify text and strokes in pixels.

    So:

    • The ability to display strokes/fonts in pixels rather than points is a purely cosmetic function. Internally, ID treats points and pixels exactly the same.
    • Because it is cosmetic, and the points-to-pixels calculation is really easy (the same number!), could you help us understand what your real concern is?
    • If you really want to, you can always type "12 px" into the field.
    • I had the vague impression that when you create a web document or one with digital intent, the specified unit is pixels instead of points. That appears not to be the case; my guess is that it's a screwup.
    • If it was indeed supposed to be automatic, I suspect that's why the preference was removed.

    (Incidentally, in case anyone behind the scenes was wondering: the scripting API lets you set this in CS5.5, with app.viewPreferences.textSizeMeasurementUnits = MeasurementUnits.PIXELS; this doesn't seem to work in CS6, where it generates an error: 'Error: textSizeMeasurementUnits')...

    To reiterate, I think it's probably a screwup in CS6 relating to a mismatch between what was actually implemented (deletion of the preference) and what was planned (automatic unit display), and not an intentional removal of the ability to show pixels. And yet, I do not understand why it really matters. What is the warm fuzzy feeling you're looking for?
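
    For anyone probing this themselves, here is a defensive version of the scripting call quoted above (a sketch: it just reports whether the CS5.5 property is still accepted rather than assuming it is):

    // Try the CS5.5 view preference; CS6 reportedly throws on it.
    try {
        app.viewPreferences.textSizeMeasurementUnits = MeasurementUnits.PIXELS;
        $.writeln("textSizeMeasurementUnits set to pixels");
    } catch (e) {
        $.writeln("not available in this version: " + e);
    }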

  • iPad Pro/iOS 9.2/third-party stylus bug

    This is my second iPad Pro since Dec. 23. I had my first one replaced because it had an area of the screen which did not respond to my Adonit Jot Pro stylus. When I first got this replacement, after the service provider deemed the first one defective, I tested it before leaving the store. Everything was fine. The stylus worked all over the screen. Then I upgraded to iOS 9.2. The bug is back, and in the same area as before! The area is near the Smart Connector (see attached photo). It is very similar, if not exactly the same area where the replaced iPad Pro had the stylus response problem. I really think it is a bug in iOS, since this problem did not exist before the upgrade to 9.2. Please fix this in the next update.

    I use an Adonit Jot Touch with Pixelpoint with my iPad Pro and have not noticed this problem at all.

    I have a 128 GB iPad Pro.

    Are you using the Adonit Jot Pro with the disc on the end?

    Are you sure it is not the disc or the pen tip causing the issue, i.e. the screen technology is different on the iPad Pro, particularly for certain types of styli?

  • XY graph with chart behavior

    I would like advice on how best to manage the following scenario:

    I frequently update an XY graph with 12 curves. I created an action-engine SubVI which acts as a buffer (add points, read out data) for the graph, and has a few other actions, like emptying the buffer, decimating the data points, etc. So at a 1 Hz rate I send data to this 12-curve XY graph, and it redraws; everything is OK.

    There are 12 permanent data acquisition tasks running at 1 Hz, and I want to show the user how the curves change. It is also important to have a vertical slider line so that the user can examine the values at different timestamps (the X values are absolute timestamps). Everything works fine, except when I turn off X autoscale so the user can see, say, the last hour of data points (3600 points x 12 curves; I know I should usually decimate, but LabVIEW usually handles the situation OK when you have more points than pixels... at least so far I can live with that).

    So the problem: if X autoscale is disabled and I send new data sets to the graph every second, the graph does not display the points on the right side, as a chart would. How should I make the graph handle this situation? Should I set the max of the X scale (X Scale > Range > Maximum property) via property nodes every second? Is this OK to do? Any other ideas?

    A kind of workaround would be to use a chart plus an XY graph. The user could inspect, say, the last hour of data with the chart (or, with autoscale disabled, the last minutes if necessary), and the XY graph would serve as an overview that the user can manually update with a button and then play with the zoom features, etc...

    What do you think? How do you handle such a data acquisition requirement, where the user needs to see what is happening right now with the curves (pressures, flow rates, temperatures), but also be able to examine the evolution of the curves over the last 24 hours?

    Edit: hmm, as usual I get new ideas right after writing up my problem. I think I was attacking the problem from the wrong side: instead of fighting with the properties of the graph, I could just create a 'Set Range' action in my buffer SubVI, so when the user wants to see, say, the last 10 minutes of data, I only send those points to the curve... I think this would be a nicer solution... What do you think?

    Here are a few options you may want to consider, depending on your actual application:

    1. A mouse event on the graph opens a new VI, where you put all the data you want and let the user play with it. When they are done, they close it and return to the live graph. This is somewhat easier, because you are working on a separate copy.
    2. The range of the actual data fed to the curve is controlled by a separate X scrollbar, and the scale is set to autoscale. When the user changes the value of the scrollbar or zooms, you stop the chart updates. You resume the updates after a timeout, or after the user moves the scrollbar to the max value, or after they push a refresh button.
    3. You feed all the data to the graph, but you control the X scale yourself. In this case, you must still decide when to stop changing the scale and when to resume.
  • Exact custom font size

    Hello

    I have a Label in QML. I applied a textStyle to the label.

    textStyle {
        fontSize: FontSize.PointValue
        fontSizeValue: ui.sdu(2.7)
        color: Color.White
    }

    I would like the text in the label to have a height of exactly 27 pixels (on the Z10). It turns out that fontSizeValue is specified in PostScript points, which is not the same as design units... How do I know how many PostScript points are required for 1 pixel on the Z10?

    BR, René

    Hello

    This turns out not to be a BB10-specific problem. A good explanation is located here:

    http://StackOverflow.com/questions/139655/convert-pixels-to-points

    On BlackBerry, font sizes are specified in PostScript points (pt). There are 72 points per inch, so 1 point is 1/72 inch.

    In user interface designs, fonts are sometimes specified in pixels (as in my case). If we know the dpi of the screen, we can convert points to pixels and vice versa.

    This function in QML converts pixels into points for fontSize:

    function pixelsToPoints(pixels) {
        var resolution = displayInfo.resolution;
        // 1 inch = 2.54 cm; this appears to assume resolution.width is in
        // pixels per metre, so / 100 gives px/cm and * 2.54 gives px/in (dpi).
        var dpi = Math.round((resolution.width / 100.0) * 2.54);
        console.debug("dpi: " + dpi);
        var calc = (pixels / dpi) * 72;
        var pt = Math.round(calc + 0.5);
        console.debug("pt: " + pt);
        return pt;
    }
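
    Hypothetical usage in the Label from the question (it assumes displayInfo is reachable from QML, as in René's post):

    Label {
        textStyle {
            fontSize: FontSize.PointValue
            fontSizeValue: pixelsToPoints(27) // target: 27 px high on the Z10
            color: Color.White
        }
    }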

    BR & thanks,

    René

  • Need a lat/long bounding box from MapField in OS 6.0

    Hello world

    I want to find the left, right, top, and bottom latitude/longitude of the map.

    I am using the MapDimentions and MapField APIs on OS 6.0.

    Using,

    mapDimentions.getbottom(); it gives me the bottom coordinate, and likewise getleft, gettop...

    but it does not change when I move the map.

    I need a bounding box of coordinates (all four lat/lon values), and it should change when the map moves...

    Thank you

    Here's the first corner:

    //Just creating a new Coordinates object; the values don't matter
    Coordinates topLeft = new Coordinates(0, 0, Float.NaN);
    //The field point in pixels that you want to convert; this is where you put your four corners
    XYPoint coords = new XYPoint(0, 0);
    convertFieldToWorld(coords, topLeft);
    

    After this, topLeft will contain the latitude and longitude of the upper-left corner of your map. I hope this helps!

  • Camera API NV12 frame to AVFrame (FFmpeg)

    I'm trying to use the camera API to stream video, but since it currently only writes to a file, I'm trying to use the video viewfinder callback.

    void vf_callback(camera_handle_t handle, camera_buffer_t* buf, void* arg)
    

    Given that it only delivers a video frame in NV12 (close enough to YUV420P? I think?), I'm trying to use FFmpeg to convert it. I have already ported FFmpeg and it works fine, but I can't seem to get the frame converted into an MPEG frame.

    My question is: does anyone know how to encode the video frame from the callback using FFmpeg?

    Here is what makes a dummy AVFrame, and it works fine when the video file is created:

    /* Y */
    for (int y = 0; y < c->height; y++) {
        for (int x = 0; x < c->width; x++) {
            picture->data[0][y * picture->linesize[0] + x] = x + y + a * 3;
        }
    }

    /* Cb and Cr */
    for (int y = 0; y < c->height / 2; y++) {
        for (int x = 0; x < c->width / 2; x++) {
            picture->data[1][y * picture->linesize[1] + x] = 128 + y + a * 2;
            picture->data[2][y * picture->linesize[2] + x] = 64 + x + a * 5;
        }
    }
    

    I found this in the FFmpeg source, but it does not quite work to convert the frame:

    uint8_t *y, *u, *v;
    y = picture->data[0];
    u = picture->data[1];
    v = picture->data[2];
    const uint8_t *src = buf->framebuf;

    for (int i = 0; i < (c->height + 1) >> 1; i++)
    {
        for (int j = 0; j < (c->width + 1) >> 1; j++)
        {
            u[j] = *src++ ^ 0x80;
            v[j] = *src++ ^ 0x80;
            y[2 * j] = *src++;
            y[2 * j + 1] = *src++;
            y[picture->linesize[0] + 2 * j] = *src++;
            y[picture->linesize[0] + 2 * j + 1] = *src++;
        }

        y += 2 * picture->linesize[0];
        u += picture->linesize[1];
        v += picture->linesize[2];
    }
    

    Here is the callback and the other test code:

    void vf_callback(camera_handle_t handle, camera_buffer_t* buf, void* arg)
    {
        if (buf->frametype != CAMERA_FRAMETYPE_NV12)
        {
            return;
        }
    
        printf("got video buffer of size %d x %d, bytes: %d\n",
                buf->framedesc.nv12.width, buf->framedesc.nv12.height,
                (buf->framedesc.nv12.height + (buf->framedesc.nv12.height / 2))
                        * buf->framedesc.nv12.stride);
    
        av_register_all();
    
        video_encode_example(buf, "/accounts/1000/shared/camera/VID_TEST.mpg",
                CODEC_ID_MPEG1VIDEO);
    }
    
    void video_encode_example(camera_buffer_t* buf, const char *filename,
            enum CodecID codec_id)
    {
        AVCodec *codec;
        AVCodecContext *c = NULL;
        int out_size, outbuf_size;
        FILE *f;
        AVFrame *picture;
        uint8_t *outbuf;
        int had_output = 0;
    
        printf("Encode video file %s\n", filename);
    
        /* find the mpeg1 video encoder */
        codec = avcodec_find_encoder(codec_id);
        if (!codec)
        {
            fprintf(stderr, "codec not found\n");
            exit(1);
        }
    
        c = avcodec_alloc_context3(codec);
        picture = avcodec_alloc_frame();
    
        /* put sample parameters */
        c->bit_rate = 400000;
        /* resolution must be a multiple of two */
    //    c->width = buf->framedesc.nv12.width;
    //    c->height = buf->framedesc.nv12.height;
        c->width = 352;
        c->height = 288;
        /* frames per second */
        c->time_base = (AVRational){ 1, 25 };
        c->gop_size = 10; /* emit one intra frame every ten frames */
        c->max_b_frames = 1;
        c->pix_fmt = PIX_FMT_YUV420P;
    
    //    if(codec_id == CODEC_ID_H264)
    //        av_opt_set(c->priv_data, "preset", "slow", 0);
    
        /* open it */
        if (avcodec_open2(c, codec, NULL) < 0)
        {
            fprintf(stderr, "could not open codec\n");
            exit(1);
        }
    
        f = fopen(filename, "wb");
        if (!f)
        {
            fprintf(stderr, "could not open %s\n", filename);
            exit(1);
        }
    
            /* alloc image and output buffer */
        outbuf_size = 100000 + 12 * c->width * c->height;
        outbuf = (uint8_t *) malloc(outbuf_size);
    
        /* the image can be allocated by any means and av_image_alloc() is
         * just the most convenient way if av_malloc() is to be used */
        av_image_alloc(picture->data, picture->linesize, c->width, c->height,
                c->pix_fmt, 1);
    
        /* encode 1 second of video */
        int a = 0;
        for (; a < 15; a++)
        {
    //      fflush(stdout);
    
        /* Y */
        for (int y = 0; y < c->height; y++) {
            for (int x = 0; x < c->width; x++) {
                picture->data[0][y * picture->linesize[0] + x] = x + y + a * 3;
            }
        }

        /* Cb and Cr */
        for (int y = 0; y < c->height / 2; y++) {
            for (int x = 0; x < c->width / 2; x++) {
                picture->data[1][y * picture->linesize[1] + x] = 128 + y + a * 2;
                picture->data[2][y * picture->linesize[2] + x] = 64 + x + a * 5;
            }
        }
    
    //      uint8_t *y, *u, *v;
    //      y = picture->data[0];
    //      u = picture->data[1];
    //      v = picture->data[2];
    //      const uint8_t *src=buf->framebuf;
    //
    //      for (int i = 0; i < (c->height + 1) >> 1; i++)
    //      {
    //          for (int j = 0; j < (c->width + 1) >> 1; j++)
    //          {
    //              u[j] = *src++ ^ 0x80;
    //              v[j] = *src++ ^ 0x80;
    //              y[2 * j] = *src++;
    //              y[2 * j + 1] = *src++;
    //              y[picture->linesize[0] + 2 * j] = *src++;
    //              y[picture->linesize[0] + 2 * j + 1] = *src++;
    //          }
    //
    //          y += 2 * picture->linesize[0];
    //          u += picture->linesize[1];
    //          v += picture->linesize[2];
    //      }
    
            struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                    PIX_FMT_YUV420P, c->width, c->height, PIX_FMT_RGB8,
                    SWS_FAST_BILINEAR, NULL, NULL, NULL);
    
            AVFrame* outpic = avcodec_alloc_frame();
            av_image_alloc(outpic->data, outpic->linesize, c->width, c->height,
                    PIX_FMT_RGB8, 1);
    
            sws_scale(fooContext, picture->data, picture->linesize, 0, c->height,
                    outpic->data, outpic->linesize);
    
            /* encode the image */
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            had_output |= out_size;
            printf("encoding frame %3d (size=%5d)\n", a, out_size);
            fwrite(outbuf, 1, out_size, f);
        }
    
        /* get the delayed frames */
        for (; out_size || !had_output; a++)
        {
            fflush(stdout);
    
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
            had_output |= out_size;
            printf("write frame %3d (size=%5d)\n", a, out_size);
            fwrite(outbuf, 1, out_size, f);
        }
    
            /* add sequence end code to have a real mpeg file */
        outbuf[0] = 0x00;
        outbuf[1] = 0x00;
        outbuf[2] = 0x01;
        outbuf[3] = 0xb7;
        fwrite(outbuf, 1, 4, f);
        fclose(f);
        free(outbuf);
    
        avcodec_close(c);
        av_free(c);
        av_free(picture->data[0]);
        av_free(picture);
        printf("\n");
    }
    

    I am using this with the HelloVideoCamera sample, so if you want to run it, you can plug the callback into that.

    So I did a bit of preliminary investigation, and it seems that the following fields of the AVFrame struct are of interest for an NV12 -> YUV420P conversion:

    uint8_t* data[];
    int linesize[];
    int width;
    int height;
    

    It seems that in the case of YUV420P, data[0] is a pointer to the Y pixel plane, data[1] is a pointer to the U pixel plane, and data[2] is a pointer to the V pixel plane.

    You will notice that in the NV12 format there are only 2 planes: a Y plane and a combined UV plane.  The trick in this conversion will be de-interleaving the U and V values of the combined UV plane into separate U and V planes.

    The Y plane should be usable as-is.  You shouldn't even need to copy the pixel data.

    picture->data[0] = buf->framebuf;
    picture->linesize[0] = buf->framedesc.nv12.stride;
    

    The code above should be enough to put the Y plane in place.  If you really want to, you could malloc your own pixel area for data[0] and then memcpy() the Y data from buf->framebuf (line by line!), but that's probably a waste of time.  I noticed that you use av_image_alloc(), which you probably want to skip, since you only want to alloc the data[1] and data[2] planes and will probably have to do that by hand... you might consider implementing a pool rather than calling malloc() in real time.

    In any case, once you have malloc()ed the data[1] and data[2] planes, you should be able to do a de-interleaving copy of the U and V data from the NV12 buffer as follows:

    uint8_t* srcuv = &buf->framebuf[buf->framedesc.nv12.uv_offset];
    uint8_t* destu = picture->data[1];
    uint8_t* destv = picture->data[2];
    picture->linesize[1] = buf->framedesc.nv12.width / 2;
    picture->linesize[2] = picture->linesize[1];

    for (i = 0; i < buf->framedesc.nv12.height / 2; i++) {
        uint8_t* curuv = srcuv;
        for (j = 0; j < buf->framedesc.nv12.width / 2; j++) {
            *destu++ = *curuv++;
            *destv++ = *curuv++;
        }
        srcuv += buf->framedesc.nv12.stride; // uv_stride in later API versions
    }
    

    Note that I am assuming a stride of width/2 is desired for the U and V planes.  If you allocate the data[1] and data[2] planes with longer strides, then advance the dest pointers as necessary at the end of the 'j' loop.

    Now you should have a YUV420P frame that should be compatible with your encoder.  Or at least, that's how I interpret whatever headers I was looking at.

    Cheers,

    Sean

  • PowerPoint using a projector

    When displayed via a projector, the image from my laptop is larger than in any version of PowerPoint I use when I run a PowerPoint presentation. Can anyone suggest how I can make the image the same size?

    Thank you

    It's not something you can change - the projector uses a different resolution (the number of pixels that make up the screen), so a different 'size' will always occur.

  • How do I vectorize this in Photoshop?

    I created a logo in Photoshop, but the text is grainy. The logo has a resolution of 300 dpi.

    How can I get it cleaned up?

    Thank you

    You retype the text, or re-create it with paths and shape layers. Beyond that, it's a moot point. Pixel-based work always has a fixed resolution, and your data will always be interpolated when editing at the pixel level, especially if it has effects applied that force rasterization, forfeiting the benefits of editable text.

    Mylenium

  • CC Bend It effect anchors don't rotate?

    Hello

    I have an image and I am using 'CC Bend It' in After Effects CC 2014. I animated and changed the bend Start and End anchors and they work fine, but once I convert this image into vector shapes, the Start and End anchors stop bending and rotating the shape.

    How can I make it work on the whole shape as before?

    Can someone help me please?

    Thanks in advance!

    If you import Illustrator files into AE and scale them up, you must enable continuous rasterization. There is no real reason to convert vector layers to shape layers unless you use shape-layer features such as 'Merge Paths' or the 'Repeater'. If you do convert Illustrator layers into shapes, the only animations carried over to the new shape layers are the basic transform properties: Anchor Point, Position, Scale, Rotation and Opacity.

    If you must convert your AI layers into shape layers and you want to apply distortion effects, then you need to apply the effect to the shape layer after its creation. The effects will not come across in the conversion...

    I don't know what kind of content you've created in Illustrator, but here are some guidelines:

    1. Your Illustrator file must contain only a single artboard, and it must contain all the artwork you want to be visible in After Effects. Any art outside the artboard will be cropped.
    2. Objects created in Illustrator should be the size you want them to be in AE. One point = one pixel.
    3. When you create thin lines or fine detail, it's important to turn on Pixel Preview and Snap to Pixel.
    4. It is best for all objects in Illustrator to be an even number of points (pixels) high and wide. This is very important with thin lines. A 1-point line will sit on a half pixel in a standard comp (standard video frame sizes are even numbers), and it will be anti-aliased and softened as the single-point line straddles the pixel grid. To be safe, make sure all your strokes are 2 points (pixels) or more.
    5. If you must lay out a big scene in Illustrator, do the artwork at full size on the Illustrator artboard, import it as a composition, and then resize the comp in AE and adjust the frame rate to match your project. For video, use only the standard presets, which use square pixels. If the Composition Settings panel shows 'Custom' in the preset field, be sure you know exactly what you are doing and why.
  • Keeping the right picture ratio from After Effects to Photoshop

    Hello

    I need to send one of my pictures to a printing company to print it at 37 9/16" x 32 3/8" and make a poster.  I rendered my picture out of After Effects as a 1920 x 1080 PNG file, threw it into Photoshop, and resized it to 37 9/16" x 32 3/8".  When I do that, the photo is stretched the long way, so I guess I'm not keeping the right ratio from After Effects.  This is the first time I have tried to print something this size.  How can I tell After Effects to keep the correct ratio so that when I size it up in Photoshop to 37 9/16" x 32 3/8" it isn't stretched?

    Thank you

    You are using the wrong tool.  This is a job for Photoshop, not AE.

    The poster is darn near square, but you made the picture widescreen in AE.  Why is that?

    You said nothing about the DPI at which the poster will be printed.  Let's say it's something lame, like 150 DPI.  The longer dimension of the poster would then be more than 5,500 pixels.  So why limit yourself to 1920 pixels anywhere... and in AE, to start with?

    I would sit down and have a nice conversation with the printer about what they expect you to deliver for best quality.  I think it would be a revelation for you.
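
    A quick sketch of that arithmetic (150 dpi is the assumed figure from the reply above):

    var longEdgeInches = 37 + 9 / 16;              // 37.5625
    var pixelsNeeded = longEdgeInches * 150;       // about 5,634 px on the long edge
    var printRatio = (37 + 9 / 16) / (32 + 3 / 8); // 1.16 - nearly square
    var renderRatio = 1920 / 1080;                 // 1.78 - widescreen, hence the stretch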

  • Reading the rulers

    ruler.jpg

    What does each line in a section of the ruler stand for: 1 px, 10 px, or something else? The parts of the documentation I've read don't say how to read it.

    It varies depending on the image size and zoom level.

    For a 900 px high image, reading the left-hand ruler with the units set to pixels:

    200% gives tick marks every 20 pixels, with 4 divisions of 5 px each

    100% gives tick marks every 50 pixels, with 10 divisions of 5 px each

    50% gives tick marks every 100 pixels, with 10 divisions of 10 px each

    And if you don't want to read the rulers, the Info panel will read it for you. Measurement readouts use the Ruler tool.

    Gene
