HP 10bII+ statistics questions?

After getting weird (unexpected) statistics results from the HP 10bII+, I dug out some old manuals with sample data and confirmed that I get different results on the 10bII+ than on other machines.

Example data from the HP-34C manual (p. 38):

x / y: 696/1945, 994/1950, 1330/1955, 1512/1960, 1750/1965, 2162/1970, 2243/1971, 2382/1972, 2482/1973.

I enter all 9 data pairs on the 10bII+, check that I am in linear regression mode, and then use:

RS 5, RS K (SWAP) to get the slope m: the 10bII+ returns 0.01612 [should be 61.1612]

RS 6, RS K (SWAP) to get the y-intercept b: the 10bII+ returns 1934.1695 [should be -118,290.6295]

Maybe I'm missing something basic, but the answers the 10bII+ gives all appear to be just plain wrong. I checked the "should be" values on an HP-34C, a WP 34S, and an HP-15C, and they all agree.

I read (and reread) the 10bII+ User Guide to make sure I had the right keystrokes, and also did a C ALL to rule out any memory conflict (e.g. too many cash flows, etc.). Curiously, the linear estimates appear to be OK when I interpolate with the sample data.

What's going on, Tim?

But here's an interesting result...

If the order of the columns is changed (x and y values are reversed)...

X Y
[[1945 696.]
[1950 994.]
[1955 1330.]
[1960 1512.]
[1965 1750.]
[1970 2162.]
[1971 2243.]
[1972 2382.]
[1973 2482.]]

Now run a linear regression model... and the result is...

'(-118242.173643) +(61.1364341085*X)'

The issue seems to be the order in which the data is entered into the calculator...
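To see both numbers fall out of the same data, here is a quick cross-check in Python with NumPy. This is my own sketch, not from the thread; only the data above is taken from it:

    import numpy as np

    years  = np.array([1945, 1950, 1955, 1960, 1965, 1970, 1971, 1972, 1973])
    values = np.array([696, 994, 1330, 1512, 1750, 2162, 2243, 2382, 2482])

    # Values regressed on years: the expected direction.
    m, b = np.polyfit(years, values, 1)
    print(m, b)       # ~61.14 and ~-118242, matching the result above

    # Years regressed on values: the reversed entry order.
    m2, b2 = np.polyfit(values, years, 1)
    print(m2, b2)     # slope ~0.016, the small number the 10bII+ returned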


Similar Questions

  • Can I exclude statistics and indexes both at the same time?

    Hi people,

    I am doing a big table re-organization, so my question is: can I exclude statistics and indexes at the same time?

    I am planning this activity as below; please advise me if I'm wrong.

    (1) create table xx_031114 as select * from xx;   (as a precautionary backup)

    (2) export the table xx as its owner, user yy

    expdp yy/passwd@tnsnames DIRECTORY=export DUMPFILE=xx.dmp TABLES=xx LOGFILE=xx.log

    (3) truncate or drop the table

    SQL> truncate table xx;

    SQL> drop table xx;

    (4) import the table metadata without statistics & indexes, as below

    impdp yy/passwd@tnsnames DIRECTORY=export DUMPFILE=xx.dmp TABLES=xx EXCLUDE=STATISTICS,INDEX CONTENT=METADATA_ONLY LOGFILE=imp_031114.log (import only the metadata, without statistics & indexes)


    Question 1: can I exclude the statistics and indexes at once?

    impdp yy/passwd@tnsnames DIRECTORY=export DUMPFILE=xx.dmp TABLES=xx CONTENT=DATA_ONLY EXCLUDE=INDEX LOGFILE=imp_dataonly_031114.log (import data only, without indexes)

    impdp yy/passwd@tnsnames DIRECTORY=export DUMPFILE=xx.dmp TABLES=xx INCLUDE=INDEX (import only the indexes)

    Question 2: can I import the indexes only, as above?

    Thank you and best regards.

    Younus

    For your first question: use separate EXCLUDE options to exclude the indexes and the statistics (EXCLUDE=INDEX EXCLUDE=STATISTICS).

    Question 2: impdp user/password DIRECTORY=your_dir DUMPFILE=your_dump INCLUDE=INDEX

    Hope this helps

    Regards

    Pravin

  • How do I permanently avoid the questionable statistics warning?

    Good morning, experts

    I want to permanently avoid "EXP-00091: Exporting questionable statistics".

    And I don't want to have to use statistics=none every time.

    Can anyone provide a permanent solution?

    SQL> select VALUE
      2  from nls_database_parameters
      3  where PARAMETER = 'NLS_CHARACTERSET';

    VALUE
    ----------------------------------------
    WE8MSWIN1252

    SQL> exec dbms_stats.gather_schema_stats('U1');

    PL/SQL procedure successfully completed.

    SQL> show parameter nls;

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    nls_calendar                         string
    nls_comp                             string      BINARY
    nls_currency                         string
    nls_date_format                      string
    nls_date_language                    string
    nls_dual_currency                    string
    nls_iso_currency                     string
    nls_language                         string      AMERICAN
    nls_length_semantics                 string      BYTE
    nls_nchar_conv_excp                  string      FALSE
    nls_numeric_characters               string
    nls_sort                             string
    nls_territory                        string      AMERICA
    nls_time_format                      string
    nls_time_tz_format                   string
    nls_timestamp_format                 string
    nls_timestamp_tz_format              string

    $ exp file=tts_exp.dmp log=tts_exp.log transport_tablespace=y tablespaces=TBS1,TBS2

    Export: Release 11.2.0.1.0 - Production on Fri Apr 3 10:04:40 2015

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    Username: / as sysdba

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    Export done in US7ASCII character set and AL16UTF16 NCHAR character set

    server uses WE8MSWIN1252 character set (possible charset conversion)

    Note: table data (rows) will not be exported

    About to export transportable tablespace metadata...
    For tablespace TBS1 ...
    . exporting cluster definitions
    . exporting table definitions
    . . exporting table EMP
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table DEPT
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table PAYROLL
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    For tablespace TBS2 ...
    . exporting cluster definitions
    . exporting table definitions
    . exporting referential integrity constraints
    . exporting triggers
    . end transportable tablespace metadata export
    Export terminated successfully with warnings.

    Hello

    The key is in the logs:

    Export done in US7ASCII character set and AL16UTF16 NCHAR character set

    server uses WE8MSWIN1252 character set (possible charset conversion)

    Change the character set before you run exp, with:

    export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252

    --

    Bertrand

  • Question about performance statistics

    William-

    I was looking through the docs to see if there was any reference to the time period represented by the performance data given in the script's output.

    Can you tell me what time interval we actually get with the host/cluster (CPU, memory) performance statistics?

    Also, I was wondering why "ESX/ESXi Hardware Configuration" includes CPU and memory utilization. Is that just how the API works? It just seems odd to include that data with summary material...

    Forgive the noob questions (and any that might follow). I'm (FINALLY) moving into a project that will let me work much more consistently with the API, and this script does so much to make my life easier. We're putting the spotlight on performance and capacity prediction, so I'm trying to learn what the API can offer (looking over docs 9840 & 10665 as time allows...)

    Thank you!
    Don

    These are just rough summary-level counters over a time interval; if you are looking for granular performance information, you will need to look at PerformanceManager, which is the entity where you will get more details.

  • EXP-00091 "exporting questionable statistics" error in the exp utility

    Hello everyone,

    I am using Oracle 10g and I have a user UTILISATEURTEMP. When I run the exp utility for that user at the command prompt, I get the following notification:
    EXP-00091: Exporting questionable statistics
    The export completes successfully, but I am worried about the notification.
    Any suggestions?
    Thank you and best regards.

    Hello

    no need to worry... everything you mentioned was exported.

    Kind regards
    Deepak

  • Question about changing the refresh rate for run-time performance statistics

    Hello

    I would like to know if there is an API or command to change the provider's refresh rate.

    For example, when I try to retrieve disk performance statistics, PerfProviderSummary.getRefreshRate() shows that the refresh rate is set to 20 seconds (the default). Calling PerfProviderSummary.setRefreshRate(5) has no effect on subsequent calls, and the refresh rate remains at 20 seconds.

    esxtop, however, can change the refresh rate down to 2 seconds.

    Thank you and best regards,

    Prashanth

    Hi Julie,

    It is not possible to change the 20-second real-time refresh rate. The vim.PerformanceManager interface is limited to a minimum sampling period of 20 seconds.

    In addition, it is always recommended to call the QueryPerfProviderSummary API on the entity to verify its refresh rate, and to use that refresh rate when querying for performance statistics.
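    For what it's worth, here is a minimal pyVmomi sketch of that recommendation. It is my illustration, not code from this thread; the vCenter address, credentials, and host lookup are placeholders:

        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        # Placeholder connection details.
        si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret")
        perf_mgr = si.content.perfManager

        # Grab the first ESXi host found in the inventory.
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]

        # Verify the provider's refresh rate instead of assuming 20 seconds...
        summary = perf_mgr.QueryPerfProviderSummary(entity=host)

        # ...and use it as the intervalId when querying real-time statistics.
        spec = vim.PerformanceManager.QuerySpec(entity=host,
                                                intervalId=summary.refreshRate,
                                                maxSample=10)
        results = perf_mgr.QueryPerf(querySpec=[spec])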

    I hope this answers your concern!

    -Angela-

  • Beginner questions on statistics?

    I work with an old database (version 8i) which is still in use but slated for elimination, and I am experiencing performance problems when querying it. (For the record, I am not the administrator; I simply query the data in the database.) Usually, for simple queries, the performance is acceptable, except on some larger tables (> 1 million rows). I know that the default optimizer mode is rule-based, but I also know that I can use a hint to force the CBO to be used. I checked to see when statistics were last gathered for some of the tables involved in the slow queries, and 'last_analyzed' is empty for some of the tables and more than two years old for the others (nothing more recent than that at all). I guess that's bad. I want to ask someone to fix this problem, but I'd like to at least know what I'm talking about when I ask...

    Then...
    Should statistics be generated for every table, or only some?
    Are there restrictions on when statistics can be generated?
    Are there guidelines on how often statistics should be generated / how do you know when statistics are too old?
    Can statistics be generated while the database is in use? (i.e. the application using the database is writing to it, and people are accessing it)
    Is there anything else important I should know in order to intelligently ask for this to be fixed?

    Just to be clear, the /*+ CHOOSE */ hint does not force Oracle to use the CBO. Instead, it sets the optimizer_mode for the statement to CHOOSE. That, in turn, means that if at least one of the objects in the query has statistics, the CBO will be used; otherwise, the RBO will be used.

    If the optimizer_mode for the instance has been set to RULE, and no application that accesses the database ever changes the optimizer_mode (whether by altering it for the session, or through hints, or by using one of the constructs that force the CBO to be used), then gathering statistics is relatively low risk. Verifying that no application ever changes the optimizer_mode for its session or for one of its statements, however, can be a little tricky. I have seen plenty of applications that people swore used only the RBO that suddenly went into the weeds because some developer at some point had written a query that forced the CBO to be used, or changed the session's optimizer_mode, or something along those lines. So even if everyone expects that gathering statistics would cause no problems, if it might put people into fire-fighting mode because something was missed, the risk may be too large to justify gathering statistics just for your queries (of course, I don't know how critical your reports are compared to the other processes running on the database; it's quite possible the risk is worth it).

    You can force the CBO to be used whether or not statistics exist on the objects by using the /*+ ALL_ROWS */ hint (or by changing the optimizer_mode for your session to ALL_ROWS). If an object has no statistics, Oracle will dynamically gather statistics while parsing the query. As I mentioned earlier, that is generally less accurate than having proper statistics, but it is often close enough.

    Justin

  • HP 10bII+: wrong number of digits displayed

    Suppose you set your 10bII+ to display two digits, for example, and you calculate the following: 1.1234 [/] 100 [=]

    You expect the displayed answer to be 0.01, because you set your 10bII+ to display two digits. But in fact the 10bII+ displays 0.011.

    But if you calculate, for example, 1 [/] 3 [=], it displays 0.33, which is the correctly rounded answer.

    To double-check this behavior, I found that my 12c also expands the displayed digits, but only if the correctly rounded answer would be 0.00. I discovered that by typing the following: 0.1234 [ENTER] 100 [/]. My 12c accordingly shows 0.001. If it handled this strictly according to the number of digits to display, it should have shown 0.00. But it shows me 0.001 instead, to remind me that the right answer is not zero.

    But if you calculate the same example on the 10bII+ (set to display two digits), it shows you the answer to four digits: 0.0012.

    It's a buggy thing!

    Hello

    This isn't a bug, but rather the way that basically all calculators have dealt with results around 0 for a very long time. The problem is that when you are close to, but not quite at, 0, those "hidden digits" can have an impact without the user's knowledge, because they act as if it were "just 0" when in reality there is a small, non-zero value there. In some calculations, this can be quite essential to avoid further mistakes.

    On a scientific calculator, you would instead see an exponent-form number such as 1E-3. Since that is probably not wanted on a financial calculator, as you've seen, the digits simply extend out until you can see the non-zero value. This helps keep the user from gaining the misunderstanding that everything worked out to exactly 0 when there was actually a non-zero value.

    The 10bII+ actually uses the convention "the set number of digits after the decimal point, or, for small results that would otherwise round toward 0, two digits starting at the first significant digit of the mantissa". Unfortunately, I think spelling that out would open up more questions than it would actually avoid. A better description in the user guide probably should have included this longer explanation, but to my knowledge, you are the only person I've ever seen comment on it. :-)
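    Out of curiosity, here is one plausible reading of that convention as a Python sketch. This is my reconstruction, not HP's firmware logic; it reproduces the three examples in this thread by keeping at least the set number of significant digits visible for small non-zero results:

        import math

        def disp(value, digits=2):
            """Mimic the described 10bII+ display rule (a reconstruction)."""
            if value == 0:
                return f"{0:.{digits}f}"
            # Decimal position of the first significant digit of the result.
            first_sig = -math.floor(math.log10(abs(value)))
            # Show `digits` decimals, extended so `digits` significant
            # digits stay visible for results close to zero.
            places = max(digits, first_sig + digits - 1)
            return f"{value:.{places}f}"

        print(disp(1.1234 / 100))   # 0.011
        print(disp(0.1234 / 100))   # 0.0012
        print(disp(1 / 3))          # 0.33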

    Please keep the questions coming. I think you will find that the 10bII+ is actually an amazing little machine for the price. I am also the main person who wrote the firmware inside it, so I'm certainly happy to discuss anything at all about it that you want!

  • HP 10bII+: wrong behavior in straight-line depreciation calculation

    The depreciation calculation ignores the residual value of the asset. The user guide has an example on page 9. If you calculate it with the 10bII+, it gives you a wrong remaining depreciable value for each year. It seems to subtract the salvage value, which is not correct. After year 5, the remaining value should be the salvage value of 500, but the 10bII+ gives 0.

    Hello

    You are right, the number reported is NOT the book value.

    The 10bII+ follows the convention of the 12c, namely that the salvage value is not included in the calculated depreciation *of the asset itself*. In fact, I think this may be one of the more frequent ways real-life accounting handles things. The salvage value is tracked separately as an asset.

    Think of it this way: the number you get as your result is the real depreciation, i.e. the total value lost for that specific period. What is reported is not the book value of the asset but the balance of the depreciable value. That's why it ends at 0: there is no 'remaining depreciable value' left to depreciate.

    Since you will recover the salvage value, the numbers reported are purely the value less salvage, as you pointed out. This is not incorrect, but rather a different convention than you might be used to. However, it is a perfectly valid way to do this type of calculation, one which *I think* reflects how financial accounting actually seems to track things in many places.
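    To make the convention concrete, here is a small Python sketch with assumed figures (cost 5500, salvage 500, 5-year life; the guide's page 9 example may use different numbers):

        # Straight-line depreciation the way the 10bII+/12c convention
        # tracks it: only (cost - salvage) depreciates, so the remaining
        # depreciable value ends at 0 while book value ends at salvage.
        cost, salvage, life = 5500.0, 500.0, 5

        depreciable = cost - salvage          # 5000 depreciates in total
        annual = depreciable / life           # 1000 per year

        for year in range(1, life + 1):
            remaining = depreciable - annual * year   # what the 10bII+ reports
            book_value = cost - annual * year         # cost less accumulated depreciation
            print(year, remaining, book_value)
        # Year 5: remaining = 0 (the calculator's answer),
        #         book_value = 500 (the salvage value the poster expected).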

    Does that answer your question? If not, please ask more and hopefully I can explain it better.

  • Min/max statistics with TTL outputs

    This is a weird problem. I have attached the worksheet because the problem is difficult to explain. First, let me explain what this thing is supposed to do. A signal generator outputs four 5 Vp-p sine waves, which are summed together. After the summing, I use a statistics module to determine the Min/Max. All I need is the maximum; the minimum is ignored (I'm only looking at the + peaks). The + peaks are evaluated to identify uniquely, at the final output, which sine wave(s) were input to the worksheet. Since I ultimately need 16 bits, I had to add a scaling unit (scale module) to create the 16th input (a maximum of 15 wires is allowed on one output) by expanding the 15th input into two outputs. I can see the expected TTL levels being output by the statistics module on three Y/t chart modules. This tells me that things seem to mostly work at the output of the scaling module (the hysteresis values in the stat module need to be tweaked to produce all 16 unique values, but it works at least). The problem is that the bitmask module (set up to combine the peaks, a 16-bit conversion onto one wide output) generates no output regardless of which combination of sine waves is input. I thought I had it set up per a good example C.J. provided. I also hooked up DMMs to monitor the inputs to the bitmask module (called the 16-bit encoder): I can't get the digital multimeters to display the output of the stat module, but the Y/t modules do show the TTL output values there. Both sets of modules should show what is being output, but the DMMs don't, which intrigues me. The sine wave frequencies are set to 1, 2, 4 & 8 Hz for debugging, so I know it is not too fast for the DMM display; I proved this by connecting the sum as an input to the first DMM, and it displays the voltage changes without problem.

    So, the two questions are: (1) Why does the DMM not work at the output of the Y/t modules or the Min/Max stat module? (2) Why can't the bitmask module evaluate its inputs? The summed sine wave is continuous and constant in phase.

    Any help would be appreciated. This has really baffled me, trying to debug.

    It dawned on me that the DMMs are placed where they cannot work, because they are being asked to show voltages that last too short a time. They would read between 5 V and 25 V depending on how many 5 V waveforms are summed, but each TTL output they are trying to show lasts no more than milliseconds: not a good application for a DMM. Now it's just a question of what the problem is with the 16-bit conversion section!

    Any suggestions on the problem?

  • How do I seamlessly replace my Excel-sheet logging with a database? + General questions about distributed computing with LabVIEW

    Surprisingly, I'm almost finished with a full-blown control-simulation application that I've been working on for more than a year now, thanks in no small part to this community. The final step is to run ~8k simulations and be able to gather per-simulation and overall performance statistics. Each simulation takes about 6 minutes of real time to run (~2 seconds of real time per hour of simulation time, run for 7 days of simulation time), so we are looking at about 800 hours of compute time. I have 5 computers and a Raspberry Pi 2 available to run these simulations on, and I'm looking to set up a kind of compute cluster within about 2 weeks.

    The current logging capability is sketchy; I have about 40 columns of data, and they are written to a tab-delimited spreadsheet in .xls format. This works very well for individual simulations, but it would be quite unwieldy to deal with if I had more than 20,000 of them. I think this calls for a relational database of some sort, but my experience with databases is very limited, especially when it comes to LabVIEW. Here are my questions:

    - Can I create a kind of master-slave configuration where one computer (probably the Pi) keeps track of which simulations are complete, which are running, and which have never run? The slave computers would ask for simulation parameters, and the Pi would hand them out. (A sketch of this idea is at the end of this post.)

    - How should I handle the database? Each simulation is about 500 kB in .xls format, so it's about 5 GB of data in all. Should the slave computers synchronize from time to time to take care of redundancy?

    - How can I keep my memory and general disk I/O overhead down? How can I tell which items cost me the most?

    - Do you have any suggestions for implementing relational database / compute clusters with LabVIEW?

    I have attached a picture of my logging configuration + the overall structure of the application. It is a state machine with an event structure for the interrupts.
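    (As an illustration of the master-slave idea in the first question, here is a sketch in Python rather than LabVIEW, with an invented port and job list. The LabVIEW slaves could ask such a master for work using the TCP/IP VIs.)

        import socketserver
        import threading

        pending = list(range(8000))      # IDs of simulations not yet run
        lock = threading.Lock()

        class JobHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # Hand out one pending simulation ID per connection.
                with lock:
                    job = pending.pop() if pending else None
                reply = f"{job}\n" if job is not None else "DONE\n"
                self.wfile.write(reply.encode())

        if __name__ == "__main__":
            with socketserver.ThreadingTCPServer(("", 5000), JobHandler) as server:
                server.serve_forever()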


  • Statistics in DIAdem

    Hi all

    I have a problem with statistics and, respectively, with generating a histogram for a data set.

    Below you can see what my data channels look like.

    The view I have in DIAdem ->

    Name    | CH1 Dev1 | CH2 Dev1 | CH3 Dev1 | CH1 Dev2 | CH2 Dev2 | CH3 Dev2 | CH1 Dev3 | CH2 Dev3 | CH3 Dev3 |
    Number  | 1        | 2        | 3        | 4        | 5        | 6        | 7        | 8        | 9        |
    Length  | 3        | 3        | 3        | 3        | 3        | 3        | 3        | 3        | 3        |
    Unit    | V        | V        | V        | V        | V        | V        | V        | V        | V        |

    Channel contents:

    1 | 1.52 | 1.51 | 1.53 | 1.52 | 1.55 | 1.51 | 1.50 | 1.53 | 1.55 |
    2 | 1.62 | 1.61 | 1.63 | 1.62 | 1.65 | 1.60 | 1.62 | 1.67 | 1.65 |
    3 | 1.52 | 1.51 | 1.53 | 1.52 | 1.55 | 1.51 | 1.50 | 1.53 | 1.55 |

    What I need is to create a histogram for a given row -> in this case row number 2.

    Can this be done right away, or should I make a few adjustments first?

    I hope my question is clear and easy to follow.

    Thanks in advance to anyone who can offer a working solution.

    Hello

    You can do this in a script.

    Clear the Data Portal, load the attached .tdm file, and run the following script:

    Dim i
    Call ChnAlloc("Result", 3, 1, DataTypeFloat64, "Numeric")
    For i = 1 To 3
      ChD(i, "Result") = ChD(1, i)
    Next

    Then the single values of the 3 channels will be copied into the Result channel.

  • Simple data acquisition question, please help!

    I'm having problems with a simple LabVIEW program.

    I'm acquiring two voltage inputs, and at the moment I have the VI working so that both are sampled at 10 kHz simultaneously and displayed in two waveform charts.  What I would like to do is write their values at 1 Hz to an Excel spreadsheet or a text file, with the time, the first value, and the second value.  So I would create an array of [value1, value2] and export that as a line in a text file that I can then analyze later in Excel.  I want it to keep doing that for the duration of the program. In addition, if it is possible to have LabVIEW generate a table of statistics once the user stops the loop, that would be even better.
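    (Not LabVIEW, but to pin down the file format being described, here is a short Python sketch with invented names; read_channels() stands in for the two DAQ voltage reads.)

        import csv
        import time

        def read_channels():
            return 0.0, 0.0              # stand-in for the two voltage reads

        with open("log.txt", "w", newline="") as f:
            writer = csv.writer(f, delimiter="\t")
            writer.writerow(["time_s", "value1", "value2"])
            start = time.time()
            for _ in range(60):          # one minute at 1 Hz; loop as needed
                v1, v2 = read_channels()
                writer.writerow([round(time.time() - start, 3), v1, v2])
                time.sleep(1.0)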

    I can only assume it's not too complicated, but I've never done any data exporting, and a quick explanation will probably save me a lot of time.  Any help would be great, even if it's just to say "use this feature".  Thank you very much.

    Pieter

    Pieter,

    Thanks for posting on the NI Forums. I took an example that was similar to yours and organized the data in the format you want. I've included a photo as well as the VI saved for LabVIEW 8.6. I took the data from the arrays that came from the channel reads and handled it so that it is in the format: time, channel 0, channel 1. Hope this helps, and let me know if you have any questions. Thank you!

  • Finding questions I asked

    I keep reading "go to your question", "go to your question". Well, would someone please tell me how to get there once I've posted a question and come back a few days later looking for the status of my questions?

    Hello

    Log in, then click on your display name in these forums, on the left side of this page under the big question, or at the upper right side of this page, or here:

    MoreFrustrated

    This will take you to your profile:

    Across it, under your site statistics, there are tabs saying:

    My Questions   My Helpful Rated Answers   All Threads   Private Message Notifications

    See you soon.

  • Cisco SG-300 QoS statistics over SNMP

    Hello.

    I would like to monitor the QoS statistics of my Cisco SG-300 switches over SNMP.

    I found the QoS statistics configuration page, where I could set up two counters.

    Now, I have two questions:

    (1) How do I read the QoS counter statistics over SNMP?

    (2) Do I get distinct QoS statistics for each individual port, or is QoS monitoring limited to just these two counters?

    OK, moving this thread along... I subsequently got it working, in this way:

    • Download the Managed Switch MIB 1.4.0, available here
    • If you have Linux, extract it and put all the files into the /usr/share/snmp/mibs/ directory
    • Now you'll be able to get all the stats you want yourself, using snmpwalk
    • Here is the list of all the available QoS-related MIB variables:

    rlQosAceTidxTable
    rlQosAclTable
    rlQosAggregatePolicerStatisticsTable
    rlQoSApplicationDefaultAction
    rlQosClassifierRulesNumberUtilizationSystem
    rlQosClassifierUtilizationSystem
    rlQosClassifierUtilizationTable
    rlQosClassMapTable
    rlQosClearCounters
    rlQosCosQueueDefaultMapTable
    rlQosCosQueueTable
    rlQosDscpMutationTable
    rlQosDscpQueueDefaultMapTable
    rlQosDscpQueueTable
    rlQosDscpRemarkTable
    rlQosDscpToDpTable
    rlQosEfManageTable
    rlQosFreeIndexesTable
    rlQosIfPolicyTable
    rlQosIfProfileCfgTable
    rlQosMaxNumOfAce
    rlQosMibVersion
    rlQosModeGlobalCfgTable
    rlQosNamesToIndexesTable
    rlQosOutQueueStatisticsTable
    rlQosPolicerTable
    rlQosPolicyClassPriorityRefTable
    rlQosPolicyClassRefTable
    rlQosPolicyMapTable
    rlQosPortToProfileMappingTable
    rlQosQueueProfileTable
    rlQosQueueShapeProfileTable
    rlQosSinglePolicerStatisticsTable
    rlQosTupleTable

    • And you can extract the data using the snmpwalk command (you must have the net-snmp package installed):

     snmpwalk -v 2c -c CommunitySecret X.X.X.X MIBvariable

    where:

    • CommunitySecret is the read-only or read-write community string you have defined on the switch
    • X.X.X.X is the management IP address of your switch
    • MIBvariable is the MIB variable name selected from the list above.
