Make the disclosed row the current row when expanding a row in a table

Hello

I use JDeveloper 11.1.1.5.0

I have a table with the detailStamp facet enabled, so each row shows an arrow icon to expand it.

My problem is that the row expansion seems to work independently of the row selection in the table. When I expand a row, it does not also become the current row. I tried the code below to make the expanded row the current row of the table, but could not get it to work.
    public void makeCurrentRow(RowDisclosureEvent rowDisclosureEvent) {
        usersTable = (RichTable)(rowDisclosureEvent.getSource());        //usersTable is the binding of the table on the page
        RowKeySet discloseRowKeySet = usersTable.getDisclosedRowKeys();
        usersTable.setSelectedRowKeys(discloseRowKeySet);

        AdfFacesContext.getCurrentInstance().addPartialTarget(usersTable);
    }
Please let me know how I can make a table row the current row when that row is expanded.

Kind regards
Fox

I implemented a sample here http://tompeez.wordpress.com/2013/04/12/make-disclosed-row-the-current-row-when-using-a-detail-facet-of-a-table/
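The gist of that sample is to react in the table's rowDisclosureListener, select the row that was just disclosed, and then make the same row current in the binding layer. Below is a minimal sketch of that idea (not necessarily identical to the code in the linked post); the iterator name "UsersView1Iterator" is a placeholder you would replace with your own page's iterator:

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.adf.model.binding.DCIteratorBinding;
    import oracle.adf.view.rich.component.rich.data.RichTable;
    import oracle.adf.view.rich.context.AdfFacesContext;
    import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding;
    import org.apache.myfaces.trinidad.event.RowDisclosureEvent;
    import org.apache.myfaces.trinidad.model.RowKeySet;

    public void rowDisclosureListener(RowDisclosureEvent rowDisclosureEvent) {
        RichTable table = (RichTable)rowDisclosureEvent.getSource();
        // getAddedSet() holds only the row(s) disclosed by this event;
        // getDisclosedRowKeys() would also contain rows expanded earlier.
        RowKeySet addedRowKeys = rowDisclosureEvent.getAddedSet();
        if (addedRowKeys == null || addedRowKeys.isEmpty()) {
            return; // a collapse event, leave the current row alone
        }
        // highlight the freshly disclosed row in the UI
        table.setSelectedRowKeys(addedRowKeys);

        // read the row behind the disclosed key to get its key
        Object oldKey = table.getRowKey();
        table.setRowKey(addedRowKeys.iterator().next());
        JUCtrlHierNodeBinding node = (JUCtrlHierNodeBinding)table.getRowData();
        table.setRowKey(oldKey);

        // make the same row current on the iterator binding ("UsersView1Iterator" is a placeholder)
        DCBindingContainer bindings =
            (DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry();
        DCIteratorBinding iter = bindings.findIteratorBinding("UsersView1Iterator");
        iter.setCurrentRowWithKey(node.getRowKey().toStringFormat(true));

        AdfFacesContext.getCurrentInstance().addPartialTarget(table);
    }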

Timo

Tags: Java

Similar Questions

  • How to take a partial dump using EXP/IMP in Oracle for only the master tables

    Hi all

    select * from v$version;
    
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    "CORE    10.2.0.1.0    Production"
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    

    I have about 500 huge master tables in my pre-production database. I have a test environment with the same structure as the masters, and it already holds an old copy of the production master tables. I want to take a dump file from the pre-production environment with only last week's data; the older master-table data is not needed because it is already available in my test environment. I also don't need to export every table from pre-production, only the master tables, and only with last week's data.

    How can I take a partial export of the master tables from the pre-production database, and how do I import only the new records into the test database?

    I use the EXP and IMP commands, but I don't see an option to take partial data. Please advise.

    Hello

    For the first part - exporting just the master tables you want - use Data Pump with a query to extract only those tables; see the example below (you're on v10, so it is possible):

    Oracle DBA Blog 2.0: expdp dynamic list of tables

    However, this assumes you are able to get the list of master tables with a single select statement - is that possible?

    For the second part - are you able to write a query against each master table that returns only the changed rows? If you cannot write such a query, then you won't be able to use Data Pump to extract only the changed rows.

    Normally I would just extract all the master tables completely and refresh everything...

    See you soon,

    Rich

  • Using an after-insert trigger for an audit table

    Hi all

    I am using an after-insert trigger to copy each record inserted into the base table into the corresponding audit table. The structure of the two tables is the same, except that the audit table has one extra column - audit_time. Here is how I defined the trigger:

    create or replace trigger dup_rec after insert on table_a for each row
    begin
      pk_comm.create_audit_rec('TABLE_A', 'TABLE_B', :new.rowid);
    end;


    In the PK_comm package, I defined the create_audit_rec procedure with PRAGMA AUTONOMOUS_TRANSACTION as follows:


    procedure create_audit_rec (baseTable IN VARCHAR2, auditTable IN VARCHAR2, cRowId IN VARCHAR2)
    is
      pragma autonomous_transaction;
    begin
      execute immediate
        'insert into ' || auditTable || ' (audit_time, col1, col2, col3)
         select systimestamp, col1, col2, col3
         from ' || baseTable || '
         where rowid = chartorowid(:rid)'
        using cRowId;
    end create_audit_rec;

    It seems that the record is NOT added to the audit table when I insert a record into the base table. Is it possible that the insert into baseTable has not completed when the trigger fires? If so, what is the workaround?

    Thanks for your help,

    Mike

    Packages have session scope. Thus, each session will have a different instance of the collection.

    If you have multiple tables, you will in general need several collections. If you can guarantee that only a single table would ever change during the scope of a single statement, you could get away with a single collection. But as soon as a trigger on table A changes data in table B, you would need different collections. For most applications, it is much easier to simply create separate collections. I suppose you could also create a collection of records/objects that hold a table name as well as a ROWID, and add logic to process only the ROWIDs associated with the table you are interested in.

    Justin

  • Check for an empty table row before adding the date

    On the form below, when I click the green plus button (far right), a new row is created in the table with today's date. The user can then enter more text to the right of the date. The problem is that when the form is saved and reopened, the text that the user entered is removed and today's date is added again, because the script is in the initialize event. How do I script a check to make sure that a row is empty before adding today's date?

    https://Acrobat.com/#d=qTINfyoXA-U6cDxOGgcSEw

    Thank you

    ~ Don

    Hi Don,

    One possibility would be to use the caption of the textfield for the date and leave the value part free for the user to enter their data:

    if (xfa.resolveNode("this.caption.value.#text").value === "") {
              this.caption.value.text = util.printd("[mm/dd/yy] ", new Date() );
    }
    

    See here: https://acrobat.com/#d=VjJ-YsXLKmV6QU84JrAAIw.

    Hope that helps,

    Niall

  • How to get the metadata for the selected tables of a schema

    Hello

    I need the metadata (DDL) for the tables selected for an activity. The list of tables keeps changing, and I only get the list just before the activity.

    What I need to know is how to put the list of tables into the query below dynamically:

    +++++++++++
    exec dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
    exec dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
    exec dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM, 'TABLESPACE', TRUE);

    select dbms_metadata.get_ddl('TABLE', '&') from user_tables where rownum < 2;
    +++++++++++

    What I need is something of the form where table_name in (<table1>, <table2>, <table3>, ...) in the query above, so that I get the metadata for all of them.
    It isn't obvious to me how I can write this as a single query. Can someone help here?

    Regds,
    Malika

    Hello

    try using the table_name column of user_tables (and the owner of the table) in DBMS_METADATA.GET_DDL, spool the generated statements to a file, and then run that file in turn:

    spool some_file.sql
    select 'select dbms_metadata.get_ddl(''TABLE'', ''' || table_name || ''') from dual;'
      from user_tables
     where ...;
    spool off;

    @some_file.sql   -- spool the output of this run as well

    -Pavan Kumar N

  • Altering or dropping/recreating a synonym so that it points to a new table

    Hello

    Using Oracle 11.2.0.3, and we have the following requirement.

    We are building a new summary table to replace an older one.

    Currently, reports access a synonym which in turn accesses the old_summary table.

    Once built, we would like to change the synonym to point to the new table, ideally with no interruption of service.

    The only thing that could potentially be accessing the synonym while we change which table it points to is select statements.

    A couple of questions.

    (1) Can you change which table a synonym points to without dropping and recreating it?


    (2) What happens if a select statement is running when we drop and recreate the synonym? Does it run to completion given that the synonym existed at the start of the run?

    Thank you

    1)  Can you change which table a synonym points to without dropping and recreating it?
    

    There is no ALTER statement for synonyms. As the other responder said, use CREATE OR REPLACE.

    2)  What happens if a select statement is running when we drop and recreate the synonym? Does it run to completion given that the synonym existed at the start of the run?
    

    All object references are resolved BEFORE the execution phase. Once a statement begins executing, it is immaterial what happens to the synonym.

  • Finding the partition for a fact table row

    Oracle version: Oracle 10.2

    I have a fact table with daily partitions.
    I am inserting test data into this table for an old date, 20100101.

    I am able to insert that record into this table as below
    insert into fact_table values (20100101,123,456);

    However, I noticed that the partition for this date does not exist (in all_tab_partitions), and I am also not able to select the data using

    Select * from facT_table partition (d_20100101)

    but I am able to extract the data using

    Select * from facT_table where date_id = 20100101

    Could someone please let me know how to find the partition into which this data was inserted,
    and if a partition for date 20100101 is not present, why doesn't the insert for that date fail?

    user507531 wrote:
    However, I noticed that the partition for this date does not exist (in all_tab_partitions), and I am also not able to select the data using

    Select * from facT_table partition (d_20100101)

    Wrong approach.

    but I am able to extract the data using

    Select * from facT_table where date_id = 20100101

    Correct approach.

    Could someone please let me know how to find the partition into which this data was inserted,
    and if a partition for date 20100101 is not present, why doesn't the insert for that date fail?

    Who says that the date is invalid? It is a range partition - which means that each partition covers a range. If you read in the SQL Reference Guide how a range partition is defined, you will notice that each partition is defined by the end value of the range it covers. There is no starting value - the end of the previous partition is the border between this partition and the previous one.

    I suggest that before you use a database feature you first become familiar with it. Otherwise, misusing it and making wrong assumptions about it are the most likely outcomes.

  • Best app to make a custom template for versatile tables?

    I'm new to Adobe Creative Cloud and wonder which app is best for making a custom template to view and edit tables such as budgets and project phases for my clients. Most of the templates I find online are dull spreadsheets.

    I want to:

    • create my own templates with my own brand image
    • Edit and adjust the information in the tables for the Phases and budget proposals.
      • import data from spreadsheets or directly enter custom data
    • export to PDF or JPG for quick sharing

    Is it better to make the spreadsheets elsewhere and import them into my templates? If so, which Adobe software is the most compatible for this? I take courses at Lynda.com and would like to know where to focus my hours of learning.

    I appreciate any help!

    Thank you

    Gabe

    If you want to make a spreadsheet, you must use spreadsheet software such as MS Excel. Adobe does not make spreadsheet software.

    And you could google for free alternatives or other choices.

  • What is the recommended way to gather statistics for huge tables?

    We have a staging database where some tables are huge, hundreds of GB in size. The automatic stats task runs, but sometimes it does not finish in its maintenance window.

    We would like to know the best practices or tips.

    Thank you.

    The efficiency of statistics gathering can be improved with:

    1. Using parallelism
    2. Incremental statistics

    Using parallelism

    Parallelism can be used in several ways for gathering statistics:

    1. Intra-object parallelism
    2. Inter-object parallelism
    3. Inter- and intra-object parallelism combined

    Intra-object parallelism

    The DBMS_STATS package has a DEGREE parameter. This parameter controls intra-object parallelism: the number of parallel processes used to gather statistics on one object. By default this parameter is equal to 1. You can increase it using the DBMS_STATS.SET_PARAM procedure. Instead of setting a fixed number, you can let Oracle determine the optimal number of parallel processes used to gather the statistics by setting DEGREE to the value DBMS_STATS.AUTO_DEGREE.

    Inter-object parallelism

    If you have Oracle Database version 11.2.0.2, you can set the CONCURRENT statistics gathering preference. When CONCURRENT is set to TRUE, Oracle uses the Scheduler and Advanced Queuing to manage several statistics-gathering jobs at once. The number of concurrent jobs is controlled by the JOB_QUEUE_PROCESSES parameter. This parameter should be equal to two times the number of your processor cores (if you have two CPUs with 8 cores each, then JOB_QUEUE_PROCESSES should be 2 (CPUs) x 8 (cores) x 2 = 32). You must set this parameter at the system level (ALTER SYSTEM SET ...).

    Incremental statistics

    This option applies to partitioned tables. If the INCREMENTAL preference for a partitioned table is set to TRUE, the GRANULARITY parameter of DBMS_STATS.GATHER_TABLE_STATS is set to GLOBAL, and the ESTIMATE_PERCENT parameter of DBMS_STATS.GATHER_TABLE_STATS is set to AUTO_SAMPLE_SIZE, Oracle will scan only the partitions that have changed.

    For more information, read this document and the DBMS_STATS documentation.

  • Purge job for the internal table wwv_flow_file_objects$

    Hi all

    Can anyone tell me whether there is any internal APEX job that deletes the data in the wwv_flow_file_objects$ table? I see the size of this table continues to increase.

    I am planning to create a job manually if there isn't one.

    I use Oracle Apex 4.2.

    ZAPEX wrote:

    Thanks for sharing the information.

    I checked the instance settings. The job is enabled and is set to delete files every 14 days, but when I checked wwv_flow_files it contains files that were created in 2013.

    Can you let me know what the reason could be? Is it possible that the job is broken?

    It is more likely that it does not purge all types of files. It would only be broken if the retained files are of the types listed in the documentation. You need to determine what these files are and how and why they were uploaded.

    Do you have applications that upload files and do not remove them or move them to an application table? If so, modifying those applications to store and manage files in user tables rather than system tables is a better solution than blindly trying to purge everything.

  • Single select query across different schemas for the same table (same structure)

    Scenario:

    Table XYZ is created in schema A.
    After a year, the previous year's old data is moved to another schema. However, the same table name is used in the other schema.

    For example

    Schema A contains table XYZ with data for the year 2012.
    Schema B contains table XYZ with data for the year 2011.
    Table XYZ has an identical structure in the two schemas.

    Can we write a single select query to read the data from both tables in an efficient way?
    For example, select * from XYZ where the date is between October 15, 2011 and March 15, 2012.
    However, the data resides in 2 different schemas altogether.


    Creating a view is an option.
    But my problem is that there is an ORM layer (Hibernate or EclipseLink/TopLink) between the application and the database.
    The queries are built by the ORM layer and are not written by hand.
    So I can't use the view.
    So is there any option that would allow me to use a single query across the different schemas?

    970773 wrote:
    Scenario:

    Table XYZ is created in schema A.
    After a year, the previous year's old data is moved to another schema. However, the same table name is used in the other schema.

    For example

    Schema A contains table XYZ with data for the year 2012.
    Schema B contains table XYZ with data for the year 2011.
    Table XYZ has an identical structure in the two schemas.

    Can we write a single select query to read the data from both tables in an efficient way?
    For example, select * from XYZ where the date is between October 15, 2011 and March 15, 2012.
    However, the data resides in 2 different schemas altogether.

    Creating a view is an option.
    But my problem is that there is an ORM layer (Hibernate or EclipseLink/TopLink) between the application and the database.
    The queries are built by the ORM layer and are not written by hand.
    So I can't use the view.

    Why not point the ORM at a view, as below?

    SELECT * FROM VIEW_BOTH;
    -- VIEW_BOTH is a real Oracle VIEW
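    As a rough illustration of the ORM side, the entity can simply be mapped to the view instead of either schema's XYZ table, so the union across the two schemas stays inside the database. A minimal sketch, assuming JPA annotations and placeholder names (XyzRecord, ID, RECORD_DATE), not the actual model:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "VIEW_BOTH")        // map the entity to the view, not to A.XYZ or B.XYZ
    public class XyzRecord {

        @Id
        @Column(name = "ID")          // assumed key column
        private Long id;

        @Column(name = "RECORD_DATE") // assumed date column used in the range queries
        private java.util.Date recordDate;

        // getters/setters omitted; Hibernate or EclipseLink now generates its SQL
        // against VIEW_BOTH, so a date range spanning 2011 and 2012 is served by
        // the view's UNION ALL over both schemas
    }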

  • Size discrepancy for an imported table

    Version: 11.2
    OS: RHEL 5.6

    I imported a large table (450 columns). I'm a little confused about the size of this table:
    Import: Release 11.2.0.1.0 - Production on Wed Aug 1 14:08:42 2012
    
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    ;;;
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYS"."SYS_IMPORT_FULL_01":  userid="/******** AS SYSDBA" DIRECTORY=dpump2 DUMPFILE=sku_dtl_%U.dmp LOGFILE=impdp_TST.log REMAP_SCHEMA=WMTRX:testusr REMAP_TABLESPACE=WMTRX_TS:TESTUSR_DATA1 PARALLEL=4
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "TESTUSR"."SKU_DTL"                         7.311 GB 9502189 rows         <-------------------------------------------
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    .
    .
    Although it says 7.311 GB during the import, this is not reflected in DBA_SEGMENTS:

    SQL> select sum(bytes/1024/1024/1024) from dba_segments where owner = 'TESTUSR' AND SEGMENT_NAME = 'SKU_DTL' AND SEGMENT_TYPE = 'TABLE';
    
    SUM(BYTES/1024/1024/1024)
    -------------------------
                   2.28027344
    
    
    -- Verifying the row count shown in the import log
    
    SQL> SELECT COUNT(*) FROM TESTUSR.SKU_DTL;
    
      COUNT(*)
    ----------
       9502189
    Why is there a difference of 5 GB?

    - Info on the indexes created for this table (in case this info is of any use)
    The total combined size of the indexes on this table is 12 GB (I hope the query below is correct):
    SQL> select sum (bytes/1024/1024) from dba_segments
    where segment_name in
    (
    select index_name from dba_indexes
    where owner = 'TESTUSR' and table_name = 'SKU_DTL'
    ) and segment_type = 'INDEX'  2    3    4    5    6  ;
    
    SUM(BYTES/1024/1024)
    --------------------
                12670.75

    The size reported in the import log is the amount of space the data took in the dump file. This does not necessarily mean that is how much space it will take when it is imported.

    One reason for this is whether or not the tablespace it is written into is compressed. If the target tablespace is compressed, then once the import is complete the table will be much smaller than what was written to the dump file.

    I hope this helps.

    Dean

  • How to create indexes for a large telecom table

    Hello

    I'm working on a 10g database on RHEL 5 for a telecommunications company with more than 1 million records saved per day, and we need to speed up query results.
    We know there are several types of indexes, and I need professional advice on creating a suitable one.

    Many of our queries depend on the MSID column (the MAC address of the modem):
       
    Name           Null Type         
    -------------- ---- ------------ 
    STREAMNUMBER        NUMBER(9)    
    MSID                VARCHAR2(20) 
    USERNAME            VARCHAR2(20) 
    DOMAIN              VARCHAR2(20) 
    USERIP              VARCHAR2(16) 
    CORRELATION_ID      VARCHAR2(64) 
    ACCOUNTREASON       NUMBER(3)    
    STARTTIME           VARCHAR2(14) 
    PRIORTIME           VARCHAR2(14) 
    CURTIME             VARCHAR2(14) 
    SESSIONTIME         NUMBER(9)    
    SESSIONVOLUME       NUMBER(9)    
    .
    .
    .
    Please help,

    Hello

    First of all, think about rewriting all the SQL that uses a subquery on MAX(...) with analytic functions; see the rewriting examples given on AskTom: http://asktom.oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:9843506698920

    Then I would start with a normal index on MSID (which I think you already have), but with compression on the column, since I expect the same MSID value exists several times in the table. If the performance is not satisfactory or the plan is not the best, then including more columns in the index can help you.

    I think the first part of the answer will bring the most gain.

    Herald tiomela
    http://htendam.WordPress.com

  • af:table columns should take up the entire table width

    Hello

    I am using tables with some quick query controls. Some tables have only one or two columns displaying data.
    In such cases I need to avoid any white space after the columns.

    I tried setting each column's width to 50%, but it did not give the required result in Firefox 4.0.

    I use JDeveloper 11.1.1.4.0.

    The only option that seems to work is to specify the column width in pixels. That's the last thing I would like to do.

    Is there a styleClass or any other better alternative?

    Regards

    For the table, set the styleClass to "AFStretchWidth", which ensures that it occupies the entire width,
    and use columnStretching as required.

    Sample:


    fetchSize = "#{bindings." Departments.rangeSize}.
    emptyText = "#{bindings." Departments.Viewable? "{'No data to display.': 'Access Denied.'}".
    var = "row" value = "#{bindings." Departments.collectionModel}' rowBandingInterval = '0 '.
    selectionListener = "#{bindings." Departments.collectionModel.makeCurrent}" columnStretching ="last"
    'unique' = rowSelection styleClass = "AFStretchWidth" >

    Thank you
    Nini

  • Problem updating a tree table

    Hello
    I use JDeveloper Studio Edition Version 11.1.1.4.0.
    I am using a tree table to display my hierarchical data. I have a Bulding object that holds a list of Floor objects.
    I created a POJO data control from it. I am trying to drag and drop Floor1 of Bulding1 onto Bulding2 so that it is added to Bulding2. For that I call a POJO adddFloor() method that adds the Floor object to the floor list of the Bulding2 object, but the tree table does not show the updated list for Bulding2.

    My question is: how do I update the tree table so that it displays the data added to the POJO in the UI?

    I created the data control from the POJO below:
    package model;

    import java.util.ArrayList;
    import java.util.List;

    public class Enterprise {
        private List<Bulding> lstbuldingFor = new ArrayList<Bulding>();

        public Enterprise() {
            super();
            Bulding objbulding = new Bulding();
            objbulding.setBuldingName("Bulding1");
            List<Floor> lstfloor = new ArrayList<Floor>();
            Floor objfloor = new Floor();
            objfloor.setFloorname("Floor1");
            lstfloor.add(objfloor);
            Floor objfloor2 = new Floor();
            objfloor2.setFloorname("Floor2");
            lstfloor.add(objfloor2);
            objbulding.setLstfloor(lstfloor);
            lstbuldingFor.add(objbulding);

            Bulding objbulding2 = new Bulding();
            objbulding2.setBuldingName("Bulding2");
            List<Floor> lstfloor2 = new ArrayList<Floor>();
            Floor objfloor3 = new Floor();
            objfloor3.setFloorname("Floor3");
            lstfloor2.add(objfloor3);
            Floor objfloor4 = new Floor();
            objfloor4.setFloorname("Floor4");
            lstfloor2.add(objfloor4);
            objbulding2.setLstfloor(lstfloor2);
            lstbuldingFor.add(objbulding2);
        }

        // called from the UI to add a floor to the named building
        public void adddFloor(Floor objfloor, String buldingName) {
            System.out.println("buldingName " + buldingName);
            System.out.println("floor " + objfloor.getFloorname());
            for (Bulding objBld : lstbuldingFor) {
                if (objBld.getBuldingName().equalsIgnoreCase(buldingName)) {
                    System.out.println("got a match");
                    List<Floor> objtempflr = objBld.getLstfloor();
                    objtempflr.add(objfloor);
                    objBld.setLstfloor(objtempflr);
                    break;
                }
            }
            getLstbuldingFor();
        }

        public void setLstbuldingFor(List<Bulding> lstbuldingFor) {
            this.lstbuldingFor = lstbuldingFor;
        }

        public List<Bulding> getLstbuldingFor() {
            for (Bulding objbld : lstbuldingFor) {
                System.out.println("@" + objbld.getBuldingName());
                for (Floor objflr : objbld.getLstfloor()) {
                    System.out.println("@" + objflr.getFloorname());
                }
            }
            return lstbuldingFor;
        }
    }


    I call adddFloor() from the user interface to add the floor to the building.

    Hello

    Make sure that you re-execute the iterator that fills the tree. You make a change in the data control, which is not immediately reflected in the binding layer.
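    One way to do that re-execution from the drop handler, after it calls adddFloor(), is sketched below; the iterator name "EnterpriseIterator" and the treeTable component binding are placeholders for your own page, not code from the answer above:

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.adf.model.binding.DCIteratorBinding;
    import oracle.adf.view.rich.component.rich.data.RichTreeTable;
    import oracle.adf.view.rich.context.AdfFacesContext;

    public class FloorDropHandler {

        private RichTreeTable treeTable; // component binding of the af:treeTable

        public void refreshTreeAfterAdd() {
            // re-execute the iterator behind the tree binding so the change made in
            // the POJO data control reaches the binding layer
            DCBindingContainer bindings =
                (DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry();
            DCIteratorBinding iter = bindings.findIteratorBinding("EnterpriseIterator"); // placeholder name
            iter.executeQuery();

            // then repaint the tree table
            AdfFacesContext.getCurrentInstance().addPartialTarget(treeTable);
        }

        public void setTreeTable(RichTreeTable treeTable) {
            this.treeTable = treeTable;
        }

        public RichTreeTable getTreeTable() {
            return treeTable;
        }
    }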

    Frank
