Variant to Data with empty (null) variant handling in a SubVI

Hi all

I have created an application that uses a DataSocket to produce variant data.  I then convert each variant to its "real" data type (Boolean, U8, DBL) using the Variant To Data VI. This works very well when data is coming from the DataSocket; however, if no data is present, an error is returned (error code 91).  I added a check to see whether any data is currently present using the code below. However, since there are more than a dozen instances of this conversion code throughout my program, covering a mixture of Boolean, U8 and DBL values, I want to put this code into a SubVI. I believe that would be tidier and is what real programmers would do.

My problem is that the "output data" must adapt to the type wired to the input before execution (as the Variant To Data VI does), but I could not find a way to do this.  Does anyone have any suggestions on how I can get there?

Thanks in advance,

Mark

Using LabVIEW 2010 Full Development System

Wrap the conversion in a case structure and use the Empty String/Path? function on the variant wire to determine whether the variant is empty.  Only run Variant To Data if it is not empty.

Edit: It looks like you added a picture while I was posting, and you are already doing what I was proposing.  Anyway, I don't know that you can really wrap this in a SubVI with different types.  If you have a limited number of types, you could perhaps make a polymorphic VI.

Tags: NI Software

Similar Questions

  • Deleting rows with NULL values

    Hello

    I'm using Oracle 11g. I have a table with 3 columns: id, node and value. The combination of the id and node columns must be taken into account when deleting rows.

    I need to delete rows whose value column is NULL, but only when the same id/node combination also has rows with non-null values.

    If an id/node combination has only NULL values, its rows should not be deleted.

    In the table below, I need to remove the second row: because a non-null value ('10') exists for VOICE/CAL, the row with the NULL value (VOICE, CAL, NULL) must be deleted.

    For NETWORK/FL there is no non-null value, so that row should NOT be deleted.

    The table has hundreds of such combinations. Can the data be deleted in a single DELETE query? Or how else can I delete the rows with NULLs for those combinations?

    Tab1

    ID       NODE  VALUE
    VOICE    CAL   10
    VOICE    CAL   NULL
    NETWORK  FL    NULL

    Thank you

    Hello


    You can do this in a single DELETE statement (it's a statement, not a query), using EXISTS or IN with a subquery.  For example:

    DELETE FROM table_x m
    WHERE  m.value IS NULL
    AND    EXISTS (
               SELECT 0
               FROM   table_x s
               WHERE  s.id    = m.id
               AND    s.node  = m.node
               AND    s.value IS NOT NULL
           );

    If you would care to post CREATE TABLE and INSERT statements for the sample data, I could test this.
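    For reference, a minimal sketch of what that sample data might look like, based on the Tab1 layout shown above (the table and column names are assumptions):

    CREATE TABLE tab1 (
        id    VARCHAR2(20),
        node  VARCHAR2(20),
        value NUMBER
    );

    INSERT INTO tab1 (id, node, value) VALUES ('VOICE',   'CAL', 10);
    INSERT INTO tab1 (id, node, value) VALUES ('VOICE',   'CAL', NULL);
    INSERT INTO tab1 (id, node, value) VALUES ('NETWORK', 'FL',  NULL);
    COMMIT;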

  • Sorting data with NULL values

    Hello

    with t as
    (select 9 as num from dual
     union all
     select 11 from dual
     union all
     select 7 from dual
     union all
     select 0 from dual
     union all
     select null from dual)
    select * from t order by nvl(num, (select max(num) + 1 from t)) desc

    Is there another way I can do this, so that NULL comes first and the remaining data follows, as described for [ORDER BY ASC/DESC]?

    You can use NULLS FIRST/LAST in the ORDER BY clause:

    select * from t order by num desc nulls first

    http://docs.Oracle.com/CD/B28359_01/server.111/b28286/statements_10002.htm#i2171079
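    As a small usage sketch against the sample rows above (expected ordering shown in the comments, not re-verified):

    select * from t order by num desc nulls first;   -- NULL, 11, 9, 7, 0
    select * from t order by num desc nulls last;    -- 11, 9, 7, 0, NULL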

  • Pavilion dm4-2191us: restoring a backup made with HP Recovery Manager after Recovery Manager has been removed

    My system (Pavilion dm4 laptop, Win 7 Home Premium x64) works well, but I would like to restore a data backup made with HP Recovery Manager. HP Recovery Manager has since been removed from the system and I don't have a recovery disk. I tried to install sp45415, which supposedly runs a silent installation of the Recovery Manager program, but it fails silently on my system. The notes say it applies to Windows 7 computers and the Pavilion dm1 and dm3; there is no mention of the dm4.

    Is there a trick to getting sp45415 to work properly?

    Is there somewhere I can download an installation package for HP Recovery Manager that works on the Pavilion dm4?

    Is there another tool that can restore the backup made by HP Recovery Manager?

    Thank you!

    If you uninstalled Recovery Manager, it probably removed the recovery files from the recovery partition. If those files were not deleted, a downloaded Recovery Manager will install. You will need recovery media to boot from in order to restore the files and the recovery partition. If you do not have recovery media, you can order it from HP.

    As for the files you are trying to restore: if it's a backup of personal files, photos, etc. on an external hard disk or DVD, there should be an .exe file among those files, probably Restore.exe or Recover.exe. Double-clicking the .exe will extract the files to a folder on your laptop's hard drive.

  • Spark list ItemRenderer 'data' is null.

    I use the same code in one place that I use in another, and yet in the itemRenderer's creationComplete event handler, the first line of code, trace("data"), returns "null".

    This code had been working fine for months and suddenly stopped, with no explanation.  I have lost two days fixing something that should not have broken and I'm no closer to a solution.  Very frustrated. Any help greatly appreciated.

    Puddleglum.

    = Relevant code from the component containing the Spark list = (console output shown as comments)

    [Bindable]

    public var itemArray:ArrayCollection = new ArrayCollection();

    [Bindable]

    private var gap: uint = 6;

    private var file:File;

    private var fileStream:FileStream;

    private var toolXML:XML;

    protected function creationCompleteHandler():void
    {
        trace("MyLibraryToolsPanel FUNCTION creationCompleteHandler()");
        file = parentApplication.getFileOperations().getToolFilepath();
        trace("file: " + file.url);
    }

    private function loadTool(xmlFilepath:File, xmlFilename:String):void
    {
        trace("MyLibraryToolsPanel FUNCTION loadTool(" + xmlFilepath.url + "," + xmlFilename + ")"); // MyLibraryToolsPanel FUNCTION loadTool(app-storage:/Tools/User000000, 000000Tool1371827886983.xml)

        var xmlFile:File = xmlFilepath.resolvePath(xmlFilename);

        if (xmlFile.exists)
        {
            fileStream = new FileStream();
            fileStream.open(xmlFile, FileMode.READ);
            toolXML = XML(fileStream.readUTFBytes(fileStream.bytesAvailable));
            fileStream.close();

            var obj:Object = new Object();
            obj.xmlFilepath = xmlFilepath;
            obj.xmlFilename = xmlFilename;
            obj.xml = toolXML;

            trace("obj: " + obj); // obj: [object Object]
            trace("MyLibraryToolsPanel FUNCTION loadTool\n" + obj.xml.toXMLString());   // returns the XML when uncommented

            itemArray.addItemAt(obj, itemArray.length);
        }
        else
        {
            trace("MyLibraryToolsPanel FUNCTION loadTool XMLFILE does NOT EXIST");  // this never triggers
        }
    }

    <s:List id="itemList"
            useVirtualLayout="false"
            changing="listChangingHandler(event)"
            click="event.stopPropagation();"
            touchTap="event.stopPropagation();"
            scrollSnappingMode="tip"
            height="{height}"
            interactionMode="touch"
            explicitHeight="{height}"
            dropEnabled="false"
            dragEnabled="false"
            dragMoveEnabled="false"
            allowMultipleSelection="false"
            alternatingItemColors="[0xe6e6fa, 0xF0F8ff]"
            dataProvider="{itemArray}"
            itemRenderer="renderers.MyLibraryToolsPanelListItemRenderer"
            horizontalScrollPolicy="off"
            verticalScrollPolicy="on"
            borderVisible="false"
            visible="{itemArray.length > 0}"
            includeInLayout="{itemList.visible}"
            width="100%">
        <s:layout>
            <s:VerticalLayout id="itemListVerticalLayout" variableRowHeight="false" gap="0"/>
        </s:layout>
    </s:List>

    <s:Label id="noToolsMsg" text="No saved tools available." horizontalCenter="0" verticalAlign="middle" visible="{itemArray.length == 0}" includeInLayout="{noToolsMsg.visible}"/>

    </s:Group>

    = Relevant sections of the item renderer code =

    protected function creationCompleteHandler():void
    {
        trace("MyLibraryToolsPanelListItemRenderer FUNCTION creationCompleteHandler"); // MyLibraryToolsPanelListItemRenderer FUNCTION creationCompleteHandler
        trace("data: " + data);  // data: null
        dataProxy = new ObjectProxy(data);
        ...

    Looks like I'll switch to using dataChange instead of creationComplete in all my custom renderers.  I can't believe I lost two days and more hair over something as trivial as multithreading.

    Really... You cannot DO anything before the data is passed in, and the renderer is only instantiated when an element is added to the dataProvider... So the sequence is: data added to dataProvider -> renderer instantiated with the relevant object -> layout/render/update -> THEN creationComplete().   Duh.  creationComplete is a misnomer if you don't even have the relevant data that you use to CREATE the instance.

    No doubt this would be avoided if the bug that places dummy renderers in the dataProvider had been fixed, but as it has not been, I have no choice but to turn virtual layout off.   Grumble, whinge, whinge, grouse.

  • Concatenation of data with the GROUP BY clause

    Hi again!

    Following on from my previous thread...
    Re: Need help with RANK() on NULL data

    I tried to apply the GROUP BY clause instead of performing my query with RANK() to handle the NULL records... I now have a scenario where I also need to concatenate data from several rows.

    CREATE TABLE T_EMP (EMP_NO NUMBER, NAME VARCHAR2(20));
    INSERT INTO T_EMP VALUES (1001, 'MARK');
    INSERT INTO T_EMP VALUES (1002, 'DAVID');
    INSERT INTO T_EMP VALUES (1003, 'SHAUN');
    INSERT INTO T_EMP VALUES (1004, 'JILL');

    CREATE TABLE T_EMP_DEPT (EMP_NO NUMBER, DEPT_NO NUMBER);
    INSERT INTO T_EMP_DEPT VALUES (1001, 10);
    INSERT INTO T_EMP_DEPT VALUES (1001, 20);
    INSERT INTO T_EMP_DEPT VALUES (1002, 10);
    INSERT INTO T_EMP_DEPT VALUES (1002, 20);
    INSERT INTO T_EMP_DEPT VALUES (1002, 30);
    INSERT INTO T_EMP_DEPT VALUES (1003, 20);
    INSERT INTO T_EMP_DEPT VALUES (1003, 30);
    INSERT INTO T_EMP_DEPT VALUES (1004, 10);

    CREATE TABLE T_EMP_VISITS (EMP_NO NUMBER, DEPT_NO NUMBER, VISITED DATE);
    INSERT INTO T_EMP_VISITS VALUES (1001, 10, '1 JAN 2009');
    INSERT INTO T_EMP_VISITS VALUES (1002, 10, '1 JAN 2009');
    INSERT INTO T_EMP_VISITS VALUES (1002, 30, '11 APR 2009');
    INSERT INTO T_EMP_VISITS VALUES (1003, 20, '3 MAY 2009');
    INSERT INTO T_EMP_VISITS VALUES (1003, 30, '14 FEB 2009');
    COMMIT;

    I have a master table T_EMP that stores the emp number and name. Each emp is required to visit certain departments; this mapping is stored in the T_EMP_DEPT table. An employee can be required to visit one or more departments. The T_EMP_VISITS table stores the dates on which the employee visited the required departments. I need a report that shows, for each employee, when all visits were completed, i.e. the maximum date on which they finished visiting all required departments; if they have not visited one of the required departments, the report should show NULL instead of the max date. I was able to do this using GROUP BY as suggested by Salim, but how do I also show a comma-separated list of the required departments for each employee in the same query?

    SELECT
        EMP_NO,
        NAME,
        MAX(DEPT_NO) KEEP (DENSE_RANK LAST ORDER BY VISITED) MAX_DEPT_NO,
        MAX(VISITED) KEEP (DENSE_RANK LAST ORDER BY VISITED) VISITS_COMP
    FROM (
        SELECT
            T_EMP.EMP_NO,
            NAME,
            T_EMP_DEPT.DEPT_NO,
            VISITED
        FROM T_EMP
        LEFT OUTER JOIN T_EMP_DEPT
            ON T_EMP.EMP_NO = T_EMP_DEPT.EMP_NO
        LEFT OUTER JOIN T_EMP_VISITS
            ON T_EMP_DEPT.EMP_NO = T_EMP_VISITS.EMP_NO
            AND T_EMP_DEPT.DEPT_NO = T_EMP_VISITS.DEPT_NO)
    GROUP BY EMP_NO, NAME;

    Output
    EMP_NO  NAME   MAX_DEPT_NO  VISITS_COMP
    1001    MARK   20
    1002    DAVID  20
    1003    SHAUN  20           3 MAY 09
    1004    JILL

    Required output
    EMP_NO  NAME   REQ_DEPTS  MAX_DEPT_NO  VISITS_COMP
    1001    MARK   10,20      20
    1002    DAVID  10,20,30   20
    1003    SHAUN  20,30      20           3 MAY 09
    1004    JILL   10

    Can we do this in a single query?

    Hello

    user512647 wrote:
    ... Sanjay
    The query you provided using stragg() seems to work, but my requirement is not fully met by that result set. I don't know how to combine stragg with
    MAX (DEPT_NO) KEEP (DENSE_RANK LAST ORDER BY VISITED) MAX_DEPT_NO,
    MAX (VISITED) KEEP (DENSE_RANK LAST ORDER BY VISITED) VISITS_COMP
    I still need those two columns; they give me the date when the employee completed all visits. If they missed any department then the result must be NULL in the VISITS_COMP field.

    Just add them to the SELECT clause:

    SELECT    t_emp.emp_no,
           name,
           STRAGG (t_emp_dept.dept_no)     AS deptno,
           MAX (t_emp_dept.dept_no) KEEP (DENSE_RANK LAST ORDER BY visited)
                                      AS max_dept_no,
           MAX (visited)                      AS visits_comp
    FROM             t_emp
    LEFT OUTER JOIN      t_emp_dept     ON   t_emp.emp_no     = t_emp_dept.emp_no
    LEFT OUTER JOIN      t_emp_visits     ON   t_emp_dept.emp_no     = t_emp_visits.emp_no
                                 AND  t_emp_dept.dept_no = t_emp_visits.dept_no
    GROUP BY  t_emp.emp_no
    ,            name
    ;
    

    The column called visits_comp above is simply the last visited date, regardless of how many of the required departments the employee has actually visited.
    If you want it to be NULL when the employee has not yet visited all 3 departments:

    ...       CASE
              WHEN  COUNT (DISTINCT t_emp_dept.dept_no) = 3
              THEN  MAX (visited)
           END                    AS visits_comp
    

    The 'magic number' 3 is the total number of departments.
    If you want the query to work out the correct value at execution time, replace the hard-coded literal 3 with a scalar subquery, as sketched below.
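    For example, a minimal sketch of that replacement (assuming the total should be the number of distinct departments in t_emp_dept):

    ...       CASE
              WHEN  COUNT (DISTINCT t_emp_dept.dept_no) =
                    (SELECT COUNT (DISTINCT dept_no) FROM t_emp_dept)
              THEN  MAX (visited)
           END                    AS visits_comp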

    Note that 'MAX (x) KEEP (DENSE_RANK LAST ORDER BY x)' (where the exact same column is used as the argument and as the ORDER BY column) is just 'MAX (x)'.

  • Almost every time I use a USB device on my computer (flash drives or iPods) it freezes and crashes Windows. I am up to date with drivers from Microsoft and HP; what can I do to stop this?

    I never used to have problems with USB drives, but lately this happens all the time.  I can go into Device Manager and uninstall my USB drivers, which sometimes helps, but it's only a temporary fix.  I am up to date with drivers from Microsoft and HP; what can I do to stop this?

    Hello

    You can try using your USB storage device after a clean boot and check if that helps. A clean boot helps eliminate software conflicts.

    The following link has steps showing how to perform the clean boot: http://support.Microsoft.com/kb/929135
    (1) Perform the clean boot
    i. Click Start, type msconfig in the search box and press ENTER. If you are prompted for an administrator password or a confirmation, type the password or click Continue.
    ii. On the General tab, click Selective startup.
    iii. Under Selective startup, clear the Load startup items check box.
    iv. Click the Services tab, select the Hide all Microsoft services check box, and then click Disable all.
    v. Click OK.
    vi. When you are prompted, click Restart.
    vii. After the computer starts, check whether the problem is resolved.

    (2) enable half the services
    (3) determine whether the problem returns
    (4) enable half of the startup items
    (5) determine if the problem returns
    (6) repeat the steps above until you find out which program or service is causing the issue
     
    After you determine which startup item or service is causing the problem, contact the manufacturer of the program to find out whether the problem can be solved. Alternatively, run the System Configuration utility and clear the check box for the problem item.
     
    Note: Please make sure that the computer is configured to start as usual by following step 7 of article http://support.microsoft.com/kb/929135 .

    Reset the computer to start as usual

    When you are finished troubleshooting, follow these steps to reset the computer to start as usual:
    (i) click Start, type msconfig.exe in the start search box and press ENTER. If you are prompted for an administrator password or for confirmation, type your password, or click on continue.
    (II) on the general tab, click the Normal startup option, and then click OK.

  • Error: Data rows with unmapped dimensions exist for period "Apr 1, 2014"

    Hi experts,

    I get the error below when I click the Execute button to load data in the Data Load workbench in Workspace 11.1.2.3. I have already set up period mappings on the Global Mapping tab (records added for 12 months), the Application Mapping tab (records added for 12 months), and the Source Mapping tab (one month added, "Apr 1, 2014", as the period name with Mapping Type = Explicit). What else should I check to fix this? Thank you.

    2014-04-29 06:10:35,624 [AIF] INFO: FDMEE process start, process ID: 56
    2014-04-29 06:10:35,625 [AIF] INFO: FDMEE logging level: 4
    2014-04-29 06:10:35,625 [AIF] INFO: FDMEE log file: null\outbox\logs\AAES_56.log
    2014-04-29 06:10:35,625 [AIF] INFO: User: admin
    2014-04-29 06:10:35,625 [AIF] INFO: Location: AAESLocation (Partitionkey:2)
    2014-04-29 06:10:35,626 [AIF] INFO: Period Name: Apr 1, 2014 (Period Key: 4/1/14 12:00 AM)
    2014-04-29 06:10:35,627 [AIF] INFO: Category Name: AAESGCM (Category Key: 2)
    2014-04-29 06:10:35,627 [AIF] INFO: Rule Name: AAESDLR (Rule ID: 7)
    2014-04-29 06:10:37,504 [AIF] INFO: Jython Version: 2.5.1 (Release_2_5_1:6813, September 26 2009, 13:47:54)
    [JRockit (R) Oracle (Oracle Corporation)]
    2014-04-29 06:10:37,504 [AIF] INFO: Java Platform: java1.6.0_37
    2014-04-29 06:10:39,364 [AIF] INFO: - START IMPORT STEP -
    2014-04-29 06:10:45,727 [AIF] INFO:
    Importing source data for period "Apr 1, 2014"
    2014-04-29 06:10:45,742 [AIF] INFO:
    Importing source data for ledger "ABC_LEDGER"
    2014-04-29 06:10:45,765 [AIF] INFO: Monetary data rows imported from source: 12
    2014-04-29 06:10:45,783 [AIF] INFO: Total data rows imported from source: 12
    2014-04-29 06:10:46,270 [AIF] INFO:
    Mapping data for period "Apr 1, 2014"
    2014-04-29 06:10:46,277 [AIF] INFO:
    Processing mappings for column "ACCOUNT"
    2014-04-29 06:10:46,280 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,280 [AIF] INFO:
    Processing mappings for column "ENTITY"
    2014-04-29 06:10:46,281 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,281 [AIF] INFO:
    Processing mappings for column "UD1"
    2014-04-29 06:10:46,282 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,282 [AIF] INFO:
    Processing mappings for column "UD2"
    2014-04-29 06:10:46,283 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,312 [AIF] INFO:
    Staging data for period "Apr 1, 2014"
    2014-04-29 06:10:46,315 [AIF] INFO: Number of rows deleted from TDATAMAPSEG: 171
    2014-04-29 06:10:46,321 [AIF] INFO: Number of rows inserted into TDATAMAPSEG: 171
    2014-04-29 06:10:46,324 [AIF] INFO: Number of rows deleted from TDATAMAP_T: 171
    2014-04-29 06:10:46,325 [AIF] INFO: Number of rows deleted from TDATASEG: 12
    2014-04-29 06:10:46,331 [AIF] INFO: Number of rows inserted into TDATASEG: 12
    2014-04-29 06:10:46,332 [AIF] INFO: Number of rows deleted from TDATASEG_T: 12
    2014-04-29 06:10:46,366 [AIF] INFO: - END IMPORT STEP -
    2014-04-29 06:10:46,408 [AIF] INFO: - START VALIDATE STEP -
    2014-04-29 06:10:46,462 [AIF] INFO:
    Validating data maps for period "Apr 1, 2014"
    2014-04-29 06:10:46,473 [AIF] INFO: Data rows marked as invalid: 12
    2014-04-29 06:10:46,473 [AIF] ERROR: Error: Data rows with unmapped dimensions exist for period "Apr 1, 2014"
    2014-04-29 06:10:46,476 [AIF] INFO: Total data rows available for export to target: 0
    2014-04-29 06:10:46,478 [AIF] FATAL: Error in CommMap.validateData
    Traceback (most recent call last):
      File "<string>", line 2348, in validateData
    RuntimeError: [u'Error: Data rows with unmapped dimensions exist for period "Apr 1, 2014"']

    2014-04-29 06:10:46,551 [AIF] FATAL: COMM error validating data
    2014-04-29 06:10:46,556 [AIF] INFO: End process FDMEE, process ID: 56

    Thanks to all you guys

    This problem was solved after I mapped all the dimensions in the data load mapping. At first I had mapped only Entity, Account, Custom1 and Custom2, because there is nothing in the source to map to Custom3, Custom4 and ICP. After adding mappings for Custom3, Custom4 and ICP, the problem was resolved. So all dimensions must be mapped here.

  • How to read data with different XML schemas within a single connection?

    • I have an Oracle 11g database.
    • I access it via the JDBC thin driver, version 11.2.0.3, same version for xdb.
    • I have several tables, each with an XMLType column, all schema-based.
    • There are three different XML schemas registered in the DB.
    • I may need to read the XML data from multiple tables.
    • If all the XMLTypes use the same XML schema, there is no problem.
    • If the schemas are different, the second read throws a BindXMLException.
    • If I reset the connection between reads of XMLType columns with different schemas, it works.

    The question is: how can I configure the driver, or the connection, to be able to read data with different XML schemas without resetting the connection (which is expensive)?

    The code to get the data from the XMLType column is the textbook case:

    ResultSet resultSet = statement.executeQuery( sql ) ;
    String result = null ;
    while (resultSet.next()) {
        SQLXML sqlxml = resultSet.getSQLXML(1) ;
        result = sqlxml.getString() ;
        sqlxml.free();
    }
    resultSet.close();
    return result ;

    It turns out that I needed to serialize the XML on the server and read it as a BLOB, like this:

    final Statement statement = connection.createStatement() ;
    final String sql = String.format("select xmlserialize(content xml_content_column as blob encoding 'UTF-8') from %s where key='%s'", table, key ) ;
    ResultSet resultSet = statement.executeQuery( sql ) ;
    String result = null ;
    while (resultSet.next()) {
        Blob blob = resultSet.getBlob( 1 );
        InputStream inputStream = blob.getBinaryStream();
        result = new Scanner( inputStream ).useDelimiter( "\\A" ).next();
        inputStream.close();
        blob.free();
    }
    resultSet.close();
    statement.close();

    System.out.println( result );
    return result ;

    Then it works. Still, I can't get it to work with XMLType in the ResultSet; on the client side the XML unwrapping blows up when it moves on to another XML schema. A JDBC/XDB problem?
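    For reference, the server-side serialization on its own, pulled out of the format string above (a sketch; xml_content_column, my_table and the key value are placeholders):

    SELECT XMLSERIALIZE(CONTENT xml_content_column AS BLOB ENCODING 'UTF-8')
    FROM   my_table
    WHERE  key = 'some_key';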

  • Managing PDBs with Oracle Enterprise Manager Express

    Hello

    In 12c, is it possible to manage PDBs with Oracle Enterprise Manager Express?

    How do I find the URL?

    When I run select dbms_xdb_config.gethttpsport() from dual; on a PDB, it returns no rows.

    In the tutorials at
    https://Apex.Oracle.com/pls/Apex/f?p=44785:24:4833387311898:P24_CONTENT_ID, P24_PREV_PAGE:6282, 24
    I have not seen it covered.

    Thank you.

    Is it possible in 12c to manage PDBs with Oracle Enterprise Manager Express?

    Yes - but you will not have FULL management functionality. See the FAQ doc:

    http://www.Oracle.com/technetwork/database/manageability/emx-CDB-1965987.html

    Can I Plug and unplug the PDBs with EM Express?

    No, EM Express does not provide all CDB management capabilities in DB 12.1.0.1.0.

    Review ALL the sections of that doc.

    How do I find the URL?

    When I run select dbms_xdb_config.gethttpsport() from dual; on a PDB, it returns no rows.

    Have you configured EM Express and set the port for the PDB? Again, see the FAQ doc:

    http://www.Oracle.com/technetwork/database/manageability/emx-intro-1965965.html#A5

    How can I configure EM Express for the CDB and PDBs?

    Users can configure EM Express for the root container and for each PDB, with each container using a different HTTP/HTTPS port.  When connected to the root container, the information displayed covers the entire database, including all PDBs.  When connected to a PDB, the information displayed is limited to that PDB. For more information, see the section on EM Express for the CDB.

    http://www.Oracle.com/technetwork/database/manageability/emx-intro-1965965.html
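    As an illustrative sketch (not taken from the FAQ; the PDB name and port number are placeholders), the port is typically assigned and checked per container with DBMS_XDB_CONFIG:

    -- switch to the PDB and give it its own EM Express HTTPS port
    ALTER SESSION SET CONTAINER = pdb1;
    EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5502);

    -- verify; the EM Express URL is then https://<host>:5502/em
    SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM dual;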

  • Discarding records with NULL column values

    Hi all.

    Does anyone know how to discard records that contain NULL column values?

    In the target table the column has the Null? option set to "N", and the sqlldr process then loads the records. I need to load everything except the records with NULL column values; in other words, only load a record if field X is not null.

    Kind regards.


    LOAD DATA
    INFILE ...
    INTO TABLE ...
    WHEN x != ''
    FIELDS TERMINATED BY ...
    (x, ...)
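    A slightly fuller sketch of the same idea (the file, table and column names here are hypothetical, not from the thread); records that fail the WHEN clause go to the discard file instead of being loaded:

    -- hypothetical control file: skip any record whose X field is empty
    LOAD DATA
    INFILE 'data.csv'
    APPEND INTO TABLE my_table
    WHEN x != ''
    FIELDS TERMINATED BY ','
    (x, y)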

  • Question about MySQL Date and NULL value

    I am using phpMyAdmin for a simple MySQL database. I'm a little confused here, as I have two date fields that, as far as I can tell, are set up identically - i.e. type Date, default NULL, with the Null box ticked.

    If I update a record but leave both of these fields empty, one remains empty and the other is filled with 0000-00-00 (and the website displays 30 November -0001).

    Any pointers on why two supposedly identical fields behave differently?

    Thank you.

    It must be in the script.

    Could you copy and paste the script here? The script that causes the problem is the Insert or Update script. A list of the fields in the table would also be useful.
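    For what it's worth, a minimal illustration of the usual cause (the table and column names are hypothetical): the difference is typically whether the script sends NULL or an empty string for the date.

    -- hypothetical table: both columns defined identically
    CREATE TABLE demo (
      date_a DATE NULL DEFAULT NULL,
      date_b DATE NULL DEFAULT NULL
    );

    -- Omitting a column or sending NULL leaves it NULL;
    -- sending '' is coerced to 0000-00-00 under older, non-strict MySQL settings.
    INSERT INTO demo (date_a, date_b) VALUES (NULL, '');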

  • Multiple data files with AUTOEXTEND ON and OFF

    Hello

    Is there any negative impact if I add a new data file to a tablespace with AUTOEXTEND ON while the old data file of the same tablespace has AUTOEXTEND OFF?

    Here is my scenario:
    I have 3 disk groups in my ASM instance (DATA1, DATA2 and DATA3).

    Currently, all of my tablespaces' datafiles are in DATA1 with AUTOEXTEND ON (a mistake made when creating the tablespaces). To distribute the disk space, I intend to add new data files in DATA2 and DATA3 for half of the tablespaces,
    and for those tablespaces I will turn AUTOEXTEND OFF for the old data files and turn AUTOEXTEND ON for the new data files.

    The idea is that Oracle will start putting new data into the new ASM disk groups going forward, and I will get the desired disk space distribution. Is there a problem with this approach? (In particular, when one of my datafiles has AUTOEXTEND OFF and the other has it ON.)

    Your advice is appreciated!

    Kind regards

    In fact, it is a valid strategy to have some datafile(s) in a tablespace grow while others stay the same size.

    Ideally, the additional data files would have been created earlier so that the data was spread across all the files. However, you can always add new data files with AUTOEXTEND ON and set the old data files to AUTOEXTEND OFF at any time (and switch them ON and OFF as needed if you have to manage file system space usage dynamically, e.g. when moving data files to new file systems).
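    As a sketch of the two operations involved (the tablespace, disk group and file names below are placeholders):

    -- add a new, autoextending data file in another disk group
    ALTER TABLESPACE my_ts
      ADD DATAFILE '+DATA2' SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;

    -- stop the existing file in DATA1 from growing further
    ALTER DATABASE DATAFILE '+DATA1/mydb/datafile/my_ts.272.837292831'
      AUTOEXTEND OFF;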

    Is there a negative impact?

    No.

    Hemant K Chitale

  • Displaying distinct data from CSV values

    Hello

    How can I build a query to split the comma-separated data, but show only the distinct values?

    SQL>  SELECT t1.cd_usuario||','||t1.cd_usuario_cc usuarios
      2    FROM sibtb_usuario_email t1;
     
    ukagisle,udoaldir
    ukagisle,udoaldir
    ukagisle,udoaldir
    upisergi,udoaldir,ubaemers
    ukagisle,udoaldir
    ukagisle,udoaldir
    ukagisle,udoaldir
    ukagisle,udoaldir
    ukagisle,udoaldir
    ukagisle,udoaldir
    ukagisle,udoaldir
    ukagisle,udoaldir
    usisheil,ureanacl,usomaris
    ureanacl,
    usisheil,udoaldir,ujuhelio,ujujulio,ufinilo
    I would like to show only each distinct value, one per line:
        ukagisle
        udoaldir
        upisergi
        ukagisle
        usisheil
        ureanacl
        usomaris
        ujuhelio
        ujujulio
        ufinilo
       
    Is it possible to do this using Oracle 9.2.0.2?

    TIA

    Like this?

    With T as(
    SELECT distinct t1.cd_usuario||','||t1.cd_usuario_cc||',' usuarios
    FROM sibtb_usuario_email t1
    )
    select distinct usuarios from (
         select
         case when no=1 then
              substr(usuarios, 1,instr(usuarios,',')-1)
         else
              substr(usuarios,instr(usuarios,',',1,no-1)+1,(instr(usuarios,',',1,no)-instr(usuarios,',',1,no-1))-1)
         end usuarios
         from t
         cross join
         (
              select level no
              from (select max((length(usuarios)-length(replace(usuarios,',')))+1) len from t)
              connect by level <= len
         )
    )
    where trim(usuarios) is not null
    /
    
    USUARIOS
    ---------------------------------------------
    usisheil
    ujujulio
    upisergi
    udoaldir
    ureanacl
    ukagisle
    ubaemers
    usomaris
    ujuhelio
    ufinilo
    
    10 rows selected.
    
    Elapsed: 00:00:00.07
    

    HTH,
    Prazy

    Published by: Prazy on April 28, 2010 18:33
    Formatted the SQL... it looked clumsy otherwise.

  • Need help with Dell Storage Manager: not able to connect to the PS EQL group

    I recently updated our Data Collector and Enterprise Manager client to Dell Storage Manager 2016 R2 [Build: 16.2.1.228]. With this new version, Dell has added the ability to connect to EqualLogic PS arrays, as long as they are on the right firmware.

    I have two PS arrays I want to add to Storage Manager. The first is a PS4110 array in our M1000E chassis; I was able to add that PS group to Storage Manager. But when I try to add the PS6210X, I get an error that says "Cannot add the PS Group". No other information is provided. Both arrays are on the same firmware version, 7.17. I can ping the problem array from the server that runs the Data Collector. With Java installed, I can even access the web GUI of that array using IE. The management IPs of both arrays are on the same subnet.

    What am I missing?

    I posted this on the Compellent forum and did not get any answers, so I am posting it here as well.

    Hello

    When you say you "disabled" it, what exactly do you mean? I suggest clearing that up in the banner.

    I will check to see whether there are any other suggestions for this problem.

    Kind regards

    Don
