Doubt about 660GTX vs 770/780GTX + CPU Performance

First of all, I'm new here and my English isn't very good, so I'll try to explain myself as clearly as I can:

Here is my current setup:

Motherboard: ASUS P8Z68 - V PRO GEN3

CPU: Intel i5 - 2500K (no OC)

RAM: 8 GB DDR3 1333 MHz

GPU: Gigabyte Nvidia GTX 660 Ti OC 2 GB

SSD: SAMSUNG EVO 860 250 GB

HDD: WD Blue 1 TB

PSU: CORSAIR AX860

OS: Windows 7 Home Premium 64-bit

As you can see, I currently own an Nvidia GTX 660 Ti OC 2 GB.

I am currently using Adobe Premiere Pro CC and I'm really happy with it, thanks to the "GPU Mercury Playback" thing (yes, I know, it warned me that the card is perhaps not officially supported, but I went ahead anyway). I can export/render an hour-long 720p 30 FPS video for YouTube in around 35 minutes. I use "another program" that worked really well with my old GTX 460, but every time I put the GTX 660 in, it didn't use the GPU's power at all, no matter which options I set; instead, regardless of the options, it rendered on the CPU, taking 2 to 3 hours for a simple 30-60 minute video. But with CC the problem is gone! (Although I had to go through some tutorials...)

On that subject: I am thinking of upgrading my GPU to a Gigabyte Nvidia GTX 770 4 GB or a GTX 780 3 GB (for gaming purposes too, but the question here is video). The thing is, if I upgrade to one of these two cards, will I see a significant difference in rendering time? Like, instead of waiting 35 minutes for a one-hour video, would it take a lot less (15 minutes?).

In addition, would it benefit rendering speed and such if I upgraded my RAM/CPU/motherboard to an Intel i7-4770 (with a matching new socket/board) and 16 GB of RAM, to pair better with the new GPU?

Thanks in advance

(Yes, I used the search, but no results for what I tried. Not even YouTube has performance material for the above cards.)

While exporting to MPEG2-DVD clearly benefits from how powerful your GPU is, exporting to the H.264 format (better for YouTube, per Ann Bens) is largely governed by how powerful your CPU is. So presumably, changing to a more powerful GPU, or adding a second GTX card (CC supports more than one card for MPE), will probably not help much at all.

While exporting, you can check whether your CPU is working hard (more than 80% across all cores, measured with Task Manager). If so, that is a good indication that you are "CPU-bound" (the CPU is your weak link) for this particular workflow (rendering to the H.264 format). If however your CPU is not working very hard, more RAM might help you some. 8 GB is not that much these days, especially for some of the higher-pixel-count media formats.

Kind regards

Jim

Your English is OK. I just want to say that when you say "doubts", the more natural English word is "issues". Are you perhaps a Spanish or Portuguese speaker?

Tags: Premiere

Similar Questions

  • HP Pavilion G7 notebook PC: CPU performance

    Hallo,

    My HP laptop has an i5 CPU, and within 5 minutes of booting it ran at 60% because of heat and background programs. I watched a few videos about it, and they say to "set affinity" for some programs so they run on only 1, 2, or 3 cores. After testing this on my laptop, my CPU usage went down to 40% or less, so it did work. Could this damage my laptop in the long run?

    Hi @warriorito,

    Thank you for your query.  I understand that your computer is slow because of heat and open programs.  I will do my best to help you overcome this difficulty.  Here is a link to HP PC - Computer is slow (Windows 7), which should help with performance, and here is a link to help with the overheating problem: HP Notebook PC - Reducing heat inside the laptop to prevent overheating.  Please let me know if this helped.

    If this post helps you solve the problem, please pay it forward by clicking "Accept as Solution", to the right of the Thumbs Up icon. You can also click the Thumbs Up icon to show your gratitude!

  • Doubts about licenses

    Hi all

    I have a few doubts about the price of licenses.

    I understand that I can deploy an APEX server on 11g XE free of charge, but what happens if I want to install a non-XE version?

    Imagine a billing application for 10 users, and let us assume that a Standard Edition is sufficient. With the help of [this price list | http://www.oracle.com/us/corporate/pricing/technology-price-list-070617.pdf], how much exactly will it cost?

    Do I understand correctly that I can get a license per user or per server, or do I have to license both user and server?

    Kind regards.

    Hello
    The license metric is Named User Plus or per-processor (CPU) licensing (see the core factor table).

    For a quote, you can take a look in the Oracle Store or ask your Oracle reseller for an exact price.

    Regards,
    Peter

  • CPU performance issue

    So I have a question about CPU usage on a Mac with Ubuntu as a guest in VMware Fusion 3. I have looked around but could not find answers or solutions.

    I use a dual-core i5 MacBook Pro and try to run simulations in the guest OS.  These simulations are very processor-intensive, and I would like them to use as much CPU as possible (without developing parallel code for now).  With Linux on my old machine, I would start a simulation and it would fully use one of the processors, but with the guest OS, a single simulation uses only about 25% of the available CPU, as displayed by my Mac's Activity Monitor.

    I have tried different settings on the virtual machine, but it makes no difference, and I was wondering if anyone knew whether this is a settings question or simply part of how the host controls the guest's processor usage.


    Are you sure you're interpreting that correctly? 100% would be all your processing power, but you said you are on a dual-core computer and your code is not multiprocessor-aware, so there is no way you would get 100% usage. If you have a dual-core hyperthreaded computer, pegging one core shows up as 25% usage.

  • Qosmio X770 - GPU/CPU temperatures

    Hi all

    What temperatures do you guys get for the CPU and the GPU?
    Mine are usually about:
    45-50 °C for CPU and GPU under normal conditions
    78-85 °C for CPU and GPU during games at 1920 x 1080 resolution, high details

    Kind regards

    I don't have this Qosmio, but that sounds good to me. Those CPU/GPU temperatures are OK.

  • Anyone have information about the software to improve the performance of a computer called "SYSTEM MECHANIC"?

    original title: windows xp

    Has anyone information about the software to improve the performance of a computer called "SYSTEM MECHANIC"?

    It is located on the website at WWW.IOLO.COM.

    Maybe someone has expert knowledge about this software?   Or maybe someone has experience with this product?

    Is there a similar product, without the cost?

    DO NOT BUY OR INSTALL "SYSTEM MECHANIC".

    IT will bog your system down with errors, which would probably mean a complete restoration of your system.

    Dell recommends and sells it from their technical support group and SHAME on THEM for DOING SO.

    THIS SOFTWARE IS A TRAGEDY IN THE MAKING.

    Go to PC Mag reviews and you will see for yourself that it has a very LOW RATING, and that under normal conditions you do not need software to mess with your registry.

    I know someone who installed this program, and we are always trying to fix all of the errors generated by System Mechanic.

    We did a complete restoration and are still having problems.

    So CERTAINLY DO NOT PURCHASE OR INSTALL THIS PROGRAM.

  • I have a doubt about the file .folio and publications

    Hello, I'm new here.

    I want to start working with DPS, but I have a doubt about which version to buy.

    At the moment I have one customer just wants to publish a magazine, but my intention is to have more customers and publish more magazines.

    If I buy the Single Edition of DPS, I read that I can publish a single .folio file. What does that mean? Does each folio file represent one publication?

    Please, I need help understanding this before I purchase the software.

    Thank you very much

    Paul

    Here's a quick blog post I wrote comparing the Single Edition and multi-folio apps:

    http://boblevine.us/Digital-Publishing-Suite-101-single-Edition-vs-multi-Folio-apps/

    Bob

  • Doubts about event handlers

    Hello

    I have some doubts about the event handlers in OIM 11.1.1.5...

    (1) I want to use the same event handler for the Post-Insert and Post-Update tasks... Can I use the same handler for both? If yes, how can I do that?

    (2) Can I create a single plugin.xml, add all the jar files to a single lib folder, and zip them all together? If yes, what changes do I need to make? Do I only need to add plugin tags for the different classes in the plugin.xml file, or is something extra needed?

    (3) If I need to change something in any handler class, do I need to unregister the plugin and register it again?
    If yes, do I need to delete the event handler using the weblogicDeleteMetadata command?

    (4) We import the event handlers from a path like eventhandlers/db/... If we add all the EventHandler.xml files to this folder, then weblogicImportMetadata, when called, imports all files in this folder recursively. Now, if I need to change anything in one of the event handler classes and we import again from the same folder, what should we do? Create a copy of the event handlers? Or should I not re-add the EventHandler.xml files, only the class files in which I made the changes?

    (5) Given that I need to generate emails on user creation during recon, and update the email ID when the first name or last name is updated: should I use entity-type='User' operation='CREATE' in the EventHandler.xml, or something else?


    Please help me clarify my doubts...

    Yes, in the post-update handler you need to first check whether the first or last name actually changed before updating the email ID, rather than always recomputing the email ID. You can check whether the name attributes were updated using the previous values in the orchestration.

    -Marie

  • Doubt about appsutil.zip in R12

    Hi all
    I have a question about applying Rapid Clone on 12.1.3. I have applied the latest patches using adpatch. After that, the appsutil directories must be synchronized
    into the RDBMS Oracle home. I created appsutil.zip on the application tier and copied it into the RDBMS Oracle home. If I move the old appsutil to appsutil.old and extract appsutil.zip, the new appsutil directory will not contain the context file (I think). So I have to run AutoConfig based on the old context file. Below I have summarized the steps that I follow. Please check and correct me if I'm wrong.

    Copy appsutil.zip from $INST_TOP/admin/out to the RDBMS Oracle home
    cp $CONTEXT_FILE /tmp/mytest_vis.xml
    mv appsutil appsutil.orig
    unzip appsutil.zip
    Run AutoConfig based on /tmp/mytest_vis.xml


    Thank you
    Jay

    Jay,

    Is there a reason why you do not want to use the old context file? Put differently: what is the difference between the context file that will be generated by adbldxml.pl and the old context file?

    If there are updates in the application, they will be reflected in the new XML file generated by adbldxml.sh, but not in the old file.

    So it is always best to run adbldxml.sh and autoconfig.

    Amulya

  • Doubts about RAC infrastructure with a disk array

    Hello everyone,

    I am writing because we have a doubt about the correct infrastructure to implement RAC.

    Please let me first explain the current design we use for Oracle DB storage. Currently we run multiple instances on multiple servers, all connected to a SAN disk storage array. Since we know the array is a single point of failure, we keep redundant controlfiles, archived logs, and redo logs both on the array and on the internal drives of each server; if the array fails completely, we 'just' need to restore the nightly cold backup, apply the archived and redo logs, and everything is OK. This is possible because these are standalone systems and we can accept about one hour of downtime.

    Now we want to reuse these servers and this array for a RAC solution, and we know the array is our only single point of failure. We wonder whether it is possible to have a multi-node RAC solution (not just a single node) with redundant controlfiles/archived logs/redo logs on internal drives. Is it possible to have each RAC node write full copies of the controlfiles/archived logs/redo logs to its internal drives, and apply these files when the ASM filesystem used for RAC needs to be restored (i.e. with a softlink on an internal drive, using a single node)? Or maybe the recommended solution is to have a second array to avoid this single point of failure?

    Thank you very much!

    CSSL wrote:

    Maybe the recommended solution is to have a second array to avoid this single point of failure?

    Correct. That is the right solution.

    In this case, you can also decide to simply stripe across both arrays and mirror the array1 data onto array2 using the ASM redundancy options.

    Keep in mind that redundancy is also necessary for connectivity. You need at least 2 switches to connect to the two arrays, and two HBA ports on each server, with 2 fiber runs, one to each switch. You will need multipathing driver s/w on the server to deal with the multiple I/O paths to the same storage LUN.

    Similarly, you will need to repeat this for your interconnect: 2 private switches, and 2 network cards on each server, bonded. Then connect these 2 network cards to the 2 switches, one NIC per switch.

    Also, don't forget spare parts. Spare switches (one for storage and one for the interconnect). Spare cables - fiber and everything that is used for the interconnect.

    Bottom line - redundancy is not cheap. What you can do is combine the storage protocol/connection layer with the interconnect layer and run both on the same fabric, as Oracle does between Database Machine servers and Exadata storage. You can run your storage protocol and your interconnect protocol (TCP or RDS) on the same 40 Gb InfiniBand infrastructure.

    So 2 InfiniBand switches are needed for redundancy, plus 1 spare, with each server running a dual-port HCA and one cable to each of these 2 switches.

  • Doubt about the Performance

    Dear all,
    I'm running a procedure that should update 8 million records.
    It calls a packaged procedure that lives in another database, over a database link, to get certain values based on certain parameters.
    If the procedure doesn't return any data (in the OUT parameters), a row must be inserted into a log table.
    If the procedure returns data, it must be applied to the record in the database; this way it must update all 8 million records.

    But my procedure takes more than 14 hours to run.

    The procedure I'm using follows below.
    It seems very simple, but I really don't understand why it takes so much time.

    My guesses:
    1) To log the missing records in the LOG table I use a PRAGMA AUTONOMOUS_TRANSACTION procedure inside the main procedure, and commit the data so I can see the log results while the procedure is still running. This means the procedure has a COMMIT inside.

    2) Or it is because we call a package that lives in another database, over the database link.

    The procedure seems very simple, but I don't know why it takes so long.

    Appreciate any feedback.

    Thanks and greetings
    Madhu K

    create or replace procedure pr_upd_pb2_acctng_trx_gl_dist is

    cursor cur_pb2_acctng_trx_gl_dist is
      select patgd.patgd_id, patgd.company,
             patgd.profit_center, patgd.department,
             patgd.account, patgd.sub_account,
             patgd.product, patgd.project
        from pb2_acctng_trx_gl_dist patgd;
      -- where patgd_id in (4334663, 227554);

    v_r12_company        varchar2(100);
    v_r12_profit_center  varchar2(100);
    v_r12_department     varchar2(100);
    v_r12_account        varchar2(100);
    v_r12_product        varchar2(100);
    v_r12_project        varchar2(100);
    v_r12_combination_id varchar2(100);
    v_error_message      varchar2(1000);
    v_patgd_id           number;

    -- THIS PROCEDURE IS USED TO COMMIT THE DATA TO THE LOG TABLE.
    -- (THIS HAS GOT A COMMIT INSIDE -- IS THAT THE REASON?)
    procedure pr_pb2_acctng_dist_error
    (
      v_patgd_id      number,
      v_company       varchar2,
      v_profit_center varchar2,
      v_department    varchar2,
      v_account       varchar2,
      v_product       varchar2,
      v_project       varchar2
    )
    is
      pragma autonomous_transaction;
    begin
      insert into pb2_acctng_trx_gl_dist_error
      (
        patgd_id,
        company,
        profit_center,
        department,
        account,
        product,
        project
      )
      values
      (
        v_patgd_id,
        v_company,
        v_profit_center,
        v_department,
        v_account,
        v_product,
        v_project
      );
      commit;
    end;

    begin

      execute immediate 'truncate table pb2_acctng_trx_gl_dist_error';

      for rec_pb2_acctng_trx_gl_dist in cur_pb2_acctng_trx_gl_dist loop
        v_patgd_id := rec_pb2_acctng_trx_gl_dist.patgd_id;
        -- THIS IS THE EXTERNAL PROCEDURE CALLED OVER THE DB LINK.
        cgl.mis_mapping_util_pk_test1.get_code_combination@apps_r12
        (
          'SQLGL',
          'GL#',
          null,
          rec_pb2_acctng_trx_gl_dist.company,
          rec_pb2_acctng_trx_gl_dist.profit_center,
          rec_pb2_acctng_trx_gl_dist.department,
          rec_pb2_acctng_trx_gl_dist.account,
          rec_pb2_acctng_trx_gl_dist.sub_account,
          rec_pb2_acctng_trx_gl_dist.product,
          rec_pb2_acctng_trx_gl_dist.project,
          v_r12_company,
          v_r12_profit_center,
          v_r12_department,
          v_r12_account,
          v_r12_product,
          v_r12_project,
          v_r12_combination_id,
          v_error_message
        );

        if (v_r12_company is null or v_r12_profit_center is null or v_r12_department is null
            or v_r12_account is null or v_r12_product is null or v_r12_project is null) then

          pr_pb2_acctng_dist_error(rec_pb2_acctng_trx_gl_dist.patgd_id,
                                   rec_pb2_acctng_trx_gl_dist.company,
                                   rec_pb2_acctng_trx_gl_dist.profit_center,
                                   rec_pb2_acctng_trx_gl_dist.department,
                                   rec_pb2_acctng_trx_gl_dist.account,
                                   rec_pb2_acctng_trx_gl_dist.product,
                                   rec_pb2_acctng_trx_gl_dist.project);

        else

          update pb2_acctng_trx_gl_dist
             set company       = v_r12_company,
                 profit_center = v_r12_profit_center,
                 department    = v_r12_department,
                 account       = v_r12_account,
                 sub_account   = null,
                 product       = v_r12_product,
                 project       = v_r12_project
           where patgd_id = rec_pb2_acctng_trx_gl_dist.patgd_id;

        end if;

      end loop;

      -- commit;

    exception
      when others then
        mis_error.log_msg(0,
                          null,
                          'Patgd ID = '  || v_patgd_id
                          || '. SQLCODE = ' || sqlcode
                          || '. SQLERRM = ' || sqlerrm);
    end;

    Sins:

    (i) row-by-row processing - especially over a dblink.
    (ii) committing inside the loop.
    (iii) unnecessary use of an autonomous transaction.

    Looks like you need to rethink this approach and use set-based SQL directly instead.
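    To illustrate the set-based idea (a sketch only, not a definitive fix): if the remote get_code_combination mapping can first be pulled in bulk over the db link into a local staging table - the table r12_code_map below is hypothetical, while the other table and column names come from the post - the whole job reduces to one INSERT for the unmapped rows and one UPDATE for the rest:

    ```sql
    -- Assumption: r12_code_map was filled in one bulk INSERT ... SELECT over the
    -- db link, one row per patgd_id, with NULL columns where no mapping exists.

    -- 1) Log rows with an incomplete mapping in a single statement:
    insert into pb2_acctng_trx_gl_dist_error
      (patgd_id, company, profit_center, department, account, product, project)
    select d.patgd_id, d.company, d.profit_center, d.department,
           d.account, d.product, d.project
      from pb2_acctng_trx_gl_dist d
      join r12_code_map m on m.patgd_id = d.patgd_id
     where m.company is null or m.profit_center is null or m.department is null
        or m.account is null or m.product is null or m.project is null;

    -- 2) Apply all complete mappings in a single set-based update:
    update pb2_acctng_trx_gl_dist d
       set (company, profit_center, department, account,
            sub_account, product, project) =
           (select m.company, m.profit_center, m.department, m.account,
                   null, m.product, m.project
              from r12_code_map m
             where m.patgd_id = d.patgd_id)
     where exists (select 1
                     from r12_code_map m
                    where m.patgd_id = d.patgd_id
                      and m.company is not null and m.profit_center is not null
                      and m.department is not null and m.account is not null
                      and m.product is not null and m.project is not null);

    commit;
    ```

    This removes the per-row db-link round trip, the per-row commit, and the autonomous transaction all at once; whether the mapping can be staged like this depends on what get_code_combination actually does internally.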

  • Doubts about the speed

    Hello gentlemen;

    I have a few questions I would like to ask the more experienced people here. I have a program running on a computer with an i7 processor; on that computer I wrote it in LabVIEW. Meanwhile, in another lab, we have another PC, a little older, a 2.3 GHz dual core; on that PC we run a test platform for a couple of modems - let's not get into the details.

    My problem is that I recently discovered that the program I wrote on the i7 computer runs much slower on the other machine, the dual core, so the timings are all wrong and the program does not run correctly. For example, there is a table with 166 values which, on the i7 machine, is filled quickly, almost without delay; however, on the dual-core machine it takes a few milliseconds to fill about 20 values, and because of the timing it cannot fill any more, so the waveform I use is all wrong. This, of course, throws off the whole program and I can't use it as the test I need to integrate.

    I created a .exe of the program in LabVIEW and tried it on the different PC; that's how I got to this question.

    Now, I want to know whether the program being slow on one machine is really just down to the characteristics of the computer. I know that, to make the program efficient, I should use state machines, sub-VIs, the producer-consumer pattern, and other things. However, I believe this is not slowness generated by the program itself, because if that were the case the table would eventually fill completely; on the slow computer it never fills more than 20 values.

    Something else: does it help to hide unnecessary variables on the front panel? For the time being I keep track of lots of variables in the program, so when I create the .exe I still see them running to keep that follow-up. In the final version I won't need them, so I'll delete some and hide others from the front panel. Does that reduce the load?

    I would like to read your comments on this topic - ideas on state machines, sub-VIs, etc., or whether there is a way to force the computer to give more resources to the LabVIEW program.
    I'm not attaching any VI because, in its current state, I know you will say "state machines, sub-VIs" and so on, and I think the main problem is the difference between the computers; I'm still working on the state machine/sub-VI things.

    Thank you once again.

    Kind regards

    IRAN.

    To start with: by using a suitable state machine for data flow you can ensure that your large table is always filled completely before moving on, regardless of how long it takes. And, believe it or not, adding a delay to your loops will make the whole program run faster and smoother, because while loops are greedy and can consume 100% of CPU time just looping while waiting for a button press, while all other processes are fighting for CPU time.

  • Downgraded CPU performance

    Hi, a few months ago I accidentally converted my main hard disk from basic to dynamic. I was able to convert it back to basic, but I had to format my hard drive. I then installed Win 7 Ultimate 32-bit (x86) instead of the original Win 7 Ultimate 64-bit (because some games were causing trouble). But now my CPU (an Intel i3 M350 @ 2.26 GHz; it's an Inspiron N5010), which used to show a Windows Experience Index subscore of about 6.6, is now down to half, i.e. 3.3. Why is that? Should I roll back to 64-bit, or did I skip a few drivers? When I bought the laptop, the Turbo Boost technology driver was installed; I didn't install it this time because AFAIK Turbo Boost is not supported on i3, and I also skipped some firmware drivers. What should I do? (besides downloading the drivers... which I am already doing)

    And yes, I downloaded the drivers from http://ftp.dell.com/Pages/Drivers/inspiron-15-intel-n5010.html, and Dell shipped this laptop with Win 7 64-bit.

    Hello Yadullah,

    Thanks for posting your query in Microsoft Community.

     
    The Windows Experience Index measures the capability of your computer's hardware and software configuration and expresses this measurement as a number called a base score. A higher base score means your computer will perform better and faster than a computer with a lower base score, especially when running more advanced and resource-intensive tasks. If your computer is using an older version of the display driver, your scores may not be updated, so updating the display driver would be recommended.
    Update drivers: recommended links
     
    You can check the link below for more information on the base score.
     
    What is the Windows experience index?

    http://Windows.Microsoft.com/en-us/Windows/what-is-Windows-experience-index#what-is-Windows-experience-index=Windows-7

    I hope this helps. Please let us know the results. Feel free to write to us again for any help.

  • Why upgrade my video card slow down my CPU performance?

    I have an HPE-570t. Recently I upgraded my video card from a Radeon HD 5450 to an Nvidia GTX 460, and also bought a 650-watt power supply. I ran benchmarks with the PassMark software before and after. Why was my CPU benchmark 8100 before the upgrade and 7400 after? The video card score did get much faster, from about 320 to 2300 for 3D applications.

    Hi Spencer,

    I didn't notice a CPU issue when I installed my GTX 460 1 GB. I'm also using the PassMark Performance (64-bit) version 1019 software.  I have two different benchmarks with the same GTX 460 1 GB; one run was about 7% faster (PassMark rating) than the other.  I did RMA the first card to the manufacturer, which explains the two different reference runs. 3D performance was also the big change for me.  I have updated the video drivers since, so perhaps it's time for another benchmark.

    You can send an email to PassMark and ask what happened.

  • Thunderbolt & CPU performance

    Hello world

    Sorry if this is a newbie question:

    I am thinking of buying a Thunderbolt audio interface, and I saw that its main advantage is lower latency than FireWire/USB.

    My question is:

    Does Thunderbolt lead to a performance increase in Logic Pro? In the sense that I could run more channels and plugins before overload?

    Or is it only about latency? Does an increased global buffer size of 1024 lead to increased performance, or does it only achieve lower latency at the same buffer size?

    Only if you are recording, or outputting, 16+ channels in real time at the same time would a Thunderbolt interface show a benefit.  A USB 3.0 drive will work fine.

    A faster drive improves recording and playback performance to a certain degree.  The rest is CPU/GPU/RAM.  You won't see LPX work better just by plugging in a Thunderbolt drive - unless you have really huge projects and a 24-channel mixer running all channels for live recording.
