Data primary bucket

I was wondering if it is possible to get only the entries in the primary bucket after the PartitionListener.afterPrimary method is called. I noticed that afterPrimary has a bucketId parameter. Can the bucketId be used to get the entries in the bucket? What API should I use for this?

You can use a function to iterate over the keys in your bucket.

The bucketId which is passed can be used to execute a function as follows:

Set filter = Collections.singleton(bucketId);

FunctionService.onRegion(pr).withFilter(filter).execute(new PrimaryDataIterator()).getResult();

The function itself can be defined as follows:

class PrimaryDataIterator extends FunctionAdapter {

  // this makes the function run on the primary buckets
  @Override
  public boolean optimizeForWrite() {
    return true;
  }

  // iterate over the data
  @Override
  public void execute(FunctionContext context) {
    RegionFunctionContext rfc = (RegionFunctionContext) context;
    Region r = PartitionRegionHelper.getLocalDataForContext(rfc);
    // here you have access to the keys in your primary bucket
  }
}
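Putting the pieces together, here is a minimal sketch of the call site, for example from inside afterPrimary. It is illustrative rather than from the original answer: pr stands for your partitioned Region, and the singleton bucketId filter follows the suggestion above.

// route the function to the member that hosts this bucketId and run it
// against the local primary data there
Set filter = Collections.singleton(bucketId);
ResultCollector rc = FunctionService.onRegion(pr)
    .withFilter(filter)                  // the filter routes the execution
    .execute(new PrimaryDataIterator()); // optimizeForWrite() => primaries
rc.getResult();                          // block until the function finishes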

Tags: VMware

Similar Questions

  • Test for local primary bucket

    I'm looking for a way to test whether the primary bucket is hosted locally for an entry in a partitioned region. I work with large objects in my partitioned region and I want to optimize operations on these items by working only with locally hosted copies.

    You can use PartitionRegionHelper.getLocalPrimaryData. It returns a region that contains only the entries held in the local primary buckets. Sometimes I also use PartitionRegionHelper.getPrimaryMemberForKey for the same purpose and then check whether the returned member is this member.
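    A minimal sketch of both checks, assuming the GemFire partition helper API; region and key are placeholders:

    // approach 1: is the key present in the local primary data?
    Region localPrimary = PartitionRegionHelper.getLocalPrimaryData(region);
    boolean primaryIsLocal = localPrimary.containsKey(key);

    // approach 2: compare the primary member for the key with this member
    DistributedMember primary = PartitionRegionHelper.getPrimaryMemberForKey(region, key);
    DistributedMember me = CacheFactory.getAnyInstance().getDistributedSystem().getDistributedMember();
    boolean primaryIsLocal2 = me.equals(primary);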

  • Primary buckets in a partitioned region

    I wonder about the behavior of primary buckets in a partitioned region; specifically, I would like to know whether primary buckets can exist on more than one member at any time in the same partitioned region. In other words, are primary buckets just arbitrary buckets in a region that are designated as primary, and which may exist on any member, or do they only exist on one member while other members hold secondary buckets?

    Primary buckets can be present on any node. There is no notion of a head node as such; for example, if you have 40 buckets and 4 nodes, each node will host 10 primary buckets. Primary buckets are just arbitrary buckets in a region that are designated as primary and can exist on any member.

    Hope this helps,

    Yogesh-

  • Splitting data into date buckets

    I'm trying to divide the following data into date buckets calculated from the latest date. The intention is to know what the outstanding amount was at the end of a given period.

    create table products_quality(
      prod_id varchar2(10),
      qty_outstanding number(5),
      qty_date date
    );


    insert into products_quality values (912649,300,to_date('8/12/2003','mm/dd/yyyy'));
    insert into products_quality values (912649,600,to_date('10/20/2005','mm/dd/yyyy'));
    insert into products_quality values (912649,450,to_date('11/24/2006','mm/dd/yyyy'));
    insert into products_quality values (912649,700,to_date('1/28/2008','mm/dd/yyyy'));
    insert into products_quality values (912649,650,to_date('2/17/2008','mm/dd/yyyy'));
    insert into products_quality values (912649,700,to_date('4/7/2008','mm/dd/yyyy'));
    insert into products_quality values (912649,800,to_date('7/16/2008','mm/dd/yyyy'));
    insert into products_quality values (912649,900,to_date('7/31/2008','mm/dd/yyyy'));
    insert into products_quality values (912649,950,to_date('12/28/2008','mm/dd/yyyy'));
    insert into products_quality values (912649,1000,to_date('1/1/2009','mm/dd/yyyy'));
    insert into products_quality values (912649,1100,to_date('1/17/2009','mm/dd/yyyy'));
    insert into products_quality values (912649,1500,to_date('2/1/2009','mm/dd/yyyy'));

    The result should be grouped by prod_id and show the last qty_outstanding within each of the date buckets. Please note: if there is no data in a bucket, then the data from the next bucket is chosen (e.g. the 31-90 days bucket below). The latest date is the base date and is not counted in any bucket.

    Prod_id | qty_date | 1-30days | 31-90days | 91-180days | 181-365days | 366-730days | 731-1095days | 1096-1825days
    912649 | '01/02/2009' | 1100 | 1000 | 900 | 900 | 700 | 700 | 450 | 600 | 300


    Can we write a simple SQL to do this?

    Thank you
    select  prod_id,
            max(max_qty_date) qty_date,
            max(case when bucket = 1 then qty_outstanding end) keep(dense_rank last order by case bucket when 1 then qty_date end nulls first) "1-30days",
            max(case when bucket = 2 then qty_outstanding end) keep(dense_rank last order by case bucket when 2 then qty_date end nulls first) "31-90days",
            max(case when bucket = 3 then qty_outstanding end) keep(dense_rank last order by case bucket when 3 then qty_date end nulls first) "91-180days",
            max(case when bucket = 4 then qty_outstanding end) keep(dense_rank last order by case bucket when 4 then qty_date end nulls first) "181-365days",
            max(case when bucket = 5 then qty_outstanding end) keep(dense_rank last order by case bucket when 5 then qty_date end nulls first) "366-730days",
            max(case when bucket = 6 then qty_outstanding end) keep(dense_rank last order by case bucket when 6 then qty_date end nulls first) "731-1095days",
            max(case when bucket = 7 then qty_outstanding end) keep(dense_rank last order by case bucket when 7 then qty_date end nulls first) "1096-1825days"
      from  (
             select  prod_id,
                     qty_outstanding,
                     qty_date,
                     max(qty_date) over(partition by prod_id) max_qty_date,
                     case
                       when max(qty_date) over(partition by prod_id) - qty_date = 0 then 0
                       when max(qty_date) over(partition by prod_id) - qty_date <= 30 then 1
                       when max(qty_date) over(partition by prod_id) - qty_date <= 90 then 2
                       when max(qty_date) over(partition by prod_id) - qty_date <= 180 then 3
                       when max(qty_date) over(partition by prod_id) - qty_date <= 365 then 4
                       when max(qty_date) over(partition by prod_id) - qty_date <= 730 then 5
                       when max(qty_date) over(partition by prod_id) - qty_date <= 1095 then 6
                       when max(qty_date) over(partition by prod_id) - qty_date <= 1825 then 7
                     end bucket
               from  products_quality
            )
      where bucket is not null
      group by prod_id
    /  
    
    PROD_ID    QTY_DATE    1-30days  31-90days 91-180days 181-365days 366-730days 731-1095days 1096-1825days
    ---------- --------- ---------- ---------- ---------- ----------- ----------- ------------ -------------
    912649     01-FEB-09       1100       1000                    900         700          450           600
    

    And I don't know how you get your results:

    SQL> select  prod_id,
      2          qty_outstanding,
      3          qty_date,
      4          max(qty_date) over(partition by prod_id) - qty_date days_passed
      5    from  products_quality
      6    order by qty_date desc
      7  /
    
    PROD_ID    QTY_OUTSTANDING QTY_DATE  DAYS_PASSED
    ---------- --------------- --------- -----------
    912649                1500 01-FEB-09           0
    912649                1100 17-JAN-09          15
    912649                1000 01-JAN-09          31
    912649                 950 28-DEC-08          35
    912649                 900 31-JUL-08         185
    912649                 800 16-JUL-08         200
    912649                 700 07-APR-08         300
    912649                 650 17-FEB-08         350
    912649                 700 28-JAN-08         370
    912649                 450 24-NOV-06         800
    912649                 600 20-OCT-05        1200
    
    PROD_ID    QTY_OUTSTANDING QTY_DATE  DAYS_PASSED
    ---------- --------------- --------- -----------
    912649                 300 12-AUG-03        2000
    
    12 rows selected.
    
    SQL> 
    

    As you can see, there is no record with days_passed between 91 and 180 days, and you show 300 as the remaining quantity for 1096-1825 days, while 300 is the remaining quantity for 12-AUG-03, which is 2000 days back and therefore outside the 1096-1825 day bucket.

    SY.

  • Get data from one member of a partitioned region from the client

    Hello

    I have a partitioned region; this region is on all 3 members. Is it possible for a client to get data from this region but only from 1 member? If so, how?

    What is your definition of the region on the server? Do all three members have data? Did you set redundant copies? Also, if the function's optimizeForWrite() method returns false, the function will run on the smallest number of nodes.

    Follow these steps (a configuration sketch follows the list):

    Set redundant-copies = 1

    Return true for optimizeForWrite()

    Make sure that all members contain primary buckets

    Execute the function as you do above
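    A minimal cache.xml sketch for the first step, assuming GemFire's declarative configuration; the region name is illustrative:

    <!-- partitioned region keeping one redundant copy of each bucket -->
    <region name="myRegion">
      <region-attributes data-policy="partition">
        <partition-attributes redundant-copies="1"/>
      </region-attributes>
    </region>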

    Barry

  • Deriving a column using date logic

    Hello

    I have the tables below and I need the output as below.

    main_table

    MI_ACC_IDENTIFIER | ENT_PST_DTE
    11 | 12-SEP-14 00.00.00
    11 | 12-OCT-14 00.00.00

    Table_reference

    MI_ACC_IDENTIFIER | CHARGE_START_DATE | CHARGE_END_DATE
    11 | 01-AUG-14 00.00.00 | 31-AUG-14 00.00.00
    11 | 01-SEP-14 00.00.00 | 30-SEP-14 00.00.00
    11 | 01-OCT-14 00.00.00 | 31-OCT-14 00.00.00

    bucket

    INPUT_REF | PROD_BUCKET_ID | MI_ACC_IDENTIFIER | ACT_START_DATE | ACT_END_DATE
    999 | 1478 | 11 | 09-JUL-07 00.00.00 | 04-JUN-08 00.00.00
    999 | 1477 | 11 | 01-SEP-14 00.00.00 | 31-OCT-14 00.00.00

    output

    MI_ACC_IDENTIFIER | ENT_PST_DTE | CHARGE_START_DATE | CHARGE_END_DATE
    11 | 12-SEP-14 00.00.00 | 01-AUG-14 00.00.00 | 31-AUG-14 00.00.00
    11 | 12-OCT-14 00.00.00 | 01-SEP-14 00.00.00 | 30-SEP-14 00.00.00

    Requirement

    01. I need to check that the ent_pst_dte of main_table falls between the dates of table_reference, and take the previous dates into account.

    02. The ent_pst_dte of main_table must first fall between the dates of the bucket table, where it falls under 1477 (prod_bucket_id); only then should it be checked against the dates of table_reference.

    There may be multiple records in bucket with the same input_ref; whenever the dates change, the prod_bucket_id changes, so I match them based on the input_ref and mi_acc_identifier columns.

    Queries:

    CREATE TABLE main_table
    (
      mi_acc_identifier NUMBER(10,0) NOT NULL ENABLE,
      ent_pst_dte DATE
    );

    Insert into main_table (MI_ACC_IDENTIFIER, ENT_PST_DTE) values (11, to_date('12-SEP-14 00.00.00','DD-MON-RR HH24.MI.SS'));

    Insert into main_table (MI_ACC_IDENTIFIER, ENT_PST_DTE) values (11, to_date('12-OCT-14 00.00.00','DD-MON-RR HH24.MI.SS'));

    CREATE TABLE table_reference
    (
      mi_acc_identifier NUMBER(10,0) NOT NULL ENABLE,
      charge_start_date DATE,
      charge_end_date DATE
    );

    Insert into table_reference (MI_ACC_IDENTIFIER, CHARGE_START_DATE, CHARGE_END_DATE) values (11, to_date('01-AUG-14 00.00.00','DD-MON-RR HH24.MI.SS'), to_date('31-AUG-14 00.00.00','DD-MON-RR HH24.MI.SS'));

    Insert into table_reference (MI_ACC_IDENTIFIER, CHARGE_START_DATE, CHARGE_END_DATE) values (11, to_date('01-SEP-14 00.00.00','DD-MON-RR HH24.MI.SS'), to_date('30-SEP-14 00.00.00','DD-MON-RR HH24.MI.SS'));

    Insert into table_reference (MI_ACC_IDENTIFIER, CHARGE_START_DATE, CHARGE_END_DATE) values (11, to_date('01-OCT-14 00.00.00','DD-MON-RR HH24.MI.SS'), to_date('31-OCT-14 00.00.00','DD-MON-RR HH24.MI.SS'));

    CREATE TABLE bucket
    (
      input_ref VARCHAR2(10 BYTE),
      prod_bucket_id VARCHAR2(10 BYTE),
      mi_acc_identifier NUMBER(10,0),
      act_start_date DATE,
      act_end_date DATE
    );

    Insert into bucket (INPUT_REF, PROD_BUCKET_ID, MI_ACC_IDENTIFIER, ACT_START_DATE, ACT_END_DATE) values ('999', '1478', 11, to_date('09-JUL-07 00.00.00','DD-MON-RR HH24.MI.SS'), to_date('04-JUN-08 00.00.00','DD-MON-RR HH24.MI.SS'));

    Insert into bucket (INPUT_REF, PROD_BUCKET_ID, MI_ACC_IDENTIFIER, ACT_START_DATE, ACT_END_DATE) values ('999', '1477', 11, to_date('01-SEP-14 00.00.00','DD-MON-RR HH24.MI.SS'), to_date('31-OCT-14 00.00.00','DD-MON-RR HH24.MI.SS'));

    Your requirement isn't completely clear - you want to find the rows of main_table where the date falls in a date range of bucket, then find the corresponding row of table_reference, but report the start and end dates of the previous period? Something like that, maybe?

    WITH ref_plus AS
    (
        SELECT r.mi_acc_identifier,
               r.charge_start_date,
               r.charge_end_date,
               LAG (r.charge_start_date) OVER (PARTITION BY r.mi_acc_identifier ORDER BY r.charge_start_date) prev_start_date,
               LAG (r.charge_end_date)   OVER (PARTITION BY r.mi_acc_identifier ORDER BY r.charge_end_date)   prev_end_date
        FROM table_reference r
    )
    SELECT m.mi_acc_identifier,
           m.ent_pst_dte,
           r.prev_start_date charge_start_date,
           r.prev_end_date   charge_end_date
    FROM main_table m
    INNER JOIN bucket b
       ON  b.mi_acc_identifier = m.mi_acc_identifier
       AND m.ent_pst_dte BETWEEN b.act_start_date AND b.act_end_date
    INNER JOIN ref_plus r
       ON  r.mi_acc_identifier = m.mi_acc_identifier
       AND m.ent_pst_dte BETWEEN r.charge_start_date AND r.charge_end_date

  • Displaying dimension data without a fact record

    To explain my requirement, let's say I have 3 tables:
    City (dimension with all the city information)
    Day (date dimension with all dates in recent years (past and future))
    Sales (fact table with all sales measures, multiple records per day and city)

    An example query to retrieve the data I need:

    select city.name, day.name, sum(sales.amount) amount
    from day, city, sales
    where city.code = sales.city_code
    and day.date_value = sales.transaction_date
    group by city.name, day.name

    Now if there are no sales transactions for a particular day and city, I still want to show the day and the city with amount 0.

    The solution that I can think of is a Cartesian join of the city and day tables, used as an inline view, and then an outer join with the sales table.

    Is there a better way to do this, using simple query logic or any advanced functionality?

    Any input on the above would be a great help.

    Thank you
    Murielle

    Published by: user12033529 on Sep 12, 2010 06:57

    Hi, Murielle,

    Welcome to the forum!

    Whenever you have a problem, please post your version of Oracle (e.g. 10.2.0.3.0) and some sample data in a form people can use to re-create the problem (for example

    CREATE TABLE     city
    (     city_id          NUMBER (6)     PRIMARY KEY
    ,     city_name     VARCHAR2 (20)     NOT NULL
    );
    
    INSERT INTO CITY (city_id, city_name) VALUES (1, 'Asheville');
    INSERT INTO CITY (city_id, city_name) VALUES (2, 'Bryson City');
    INSERT INTO CITY (city_id, city_name) VALUES (3, 'Cherokee');
    
    CREATE TABLE     day
    (     dt     DATE     PRIMARY KEY
    );
    
    INSERT INTO day (dt)
    SELECT      DATE '2010-09-10' + LEVEL
    FROM     dual
    CONNECT BY     LEVEL     <= 3;
    
    CREATE TABLE     sales
    (     sales_id     NUMBER (6)     PRIMARY KEY
    ,     city_id          NUMBER (6)
    ,     sales_dt     DATE
    ,     amount          NUMBER (9)
    );
    
    INSERT INTO sales (sales_id, city_id, sales_dt, amount) VALUES (11, 1, DATE '2010-09-01',  10);
    INSERT INTO sales (sales_id, city_id, sales_dt, amount) VALUES (12, 1, DATE '2010-09-11',  20);
    INSERT INTO sales (sales_id, city_id, sales_dt, amount) VALUES (13, 1, DATE '2010-09-12',  40);
    INSERT INTO sales (sales_id, city_id, sales_dt, amount) VALUES (14, 1, DATE '2010-09-12',  80);
    INSERT INTO sales (sales_id, city_id, sales_dt, amount) VALUES (15, 2, DATE '2010-09-11', 160);
    INSERT INTO sales (sales_id, city_id, sales_dt, amount) VALUES (16, 2, DATE '2010-09-11', 320);
    COMMIT;
    

    ) and the expected results of these data (for example

    CITY_NAME            DT          TOTAL_AMOUNT
    -------------------- ----------- ------------
    Asheville            11-Sep-2010           20
    Asheville            12-Sep-2010          120
    Asheville            13-Sep-2010            0
    Bryson City          11-Sep-2010          480
    Bryson City          12-Sep-2010            0
    Bryson City          13-Sep-2010            0
    Cherokee             11-Sep-2010            0
    Cherokee             12-Sep-2010            0
    Cherokee             13-Sep-2010            0
    

    ).

    user12033529 wrote:
    ... The solution that I can think of is a Cartesian join of the city and day tables, used as an inline view, and then an outer join with the sales table.

    Is there a better way to do this, using simple query logic or any advanced functionality?

    For this particular problem, I think that is the best way.
    There is another way: the SELECT ... PARTITION BY feature (also known as a partitioned outer join).
    For example:

    WITH     all_cities     AS
    (
         SELECT     c.city_name
         ,     s.sales_dt
         ,     s.amount
         FROM               sales     s     PARTITION BY (city_id)
         RIGHT OUTER JOIN     city     c     ON     s.city_id     = c.city_id
    )
    SELECT     a.city_name
    ,     d.dt
    ,     NVL ( SUM (a.amount)
             , 0
             )          AS total_amount
    FROM               all_cities     a     PARTITION BY (city_name, sales_dt)
    RIGHT OUTER JOIN     day          d     ON     a.sales_dt     = d.dt
    GROUP BY     a.city_name
    ,          d.dt
    ORDER BY     a.city_name
    ,          d.dt
    ;
    

    It seems that you should be able to do this without a subquery, but I don't know how.

    Query partitioning is really useful when you want to include only the dimension values that are actually present in the fact table. For example, if you want to show only the cities that actually had sales (in other words, in this example, if you do not want to include Cherokee), but you do want all the dates in the day table for each such city.

    Query partitioning was new in Oracle 10. If you have an older version, I think a cross join, as you initially suggested, is the only reasonable way.

    Published by: Frank Kulash, Sep 12, 2010 13:14

  • Trying to consolidate several date metrics into a simple summary view

    I work with leads and campaigns and am trying to provide an analysis of the contribution, treatment, and outcome of the leads associated with a campaign. (There is some out-of-the-box analysis, but not what I'm looking for.) The intention is to measure the key steps and their associated dates, and bucket these events into the fiscal month in which they occur, for example: Lead.Create Date, Lead.Accepted Date, Lead.Converted Date, and Opportunity.Close Date.

    (I use case statements to identify the fiscal month in which the date occurred when it is not available for use.)

    The database view looks something like this:

    Lead ID | Create Month | Accepted Month | Converted Month | Oppty Close Month
    12345   | 01 | 02 | 03 | 03
    12346   | 01 | 01 | 01 | 02
    12347   | 02 | 02 |    |

    The goal is to create a view along these lines:

    Lead Action  | M01 | M02 | M03 | ...
    Created      | 2   | 1   | 0   |
    Accepted     | 1   | 2   | 0   |
    Converted    | 1   | 0   | 1   |
    Oppty Closed | 0   | 0   | 1   |

    I can create the database view, but I am struggling to present it in a single clean table format as shown above.

    Any suggestion on how to do this or maybe another approach?

    Thank you.

    Try this:
    Create a pivot table, put the month in the columns, and put Created in the measures. Put the rest in Excluded.
    Set the aggregation rule to SUM.
    Likewise, create 3 more pivot tables with the same design as above, but this time with the months in columns and: Accepted in measures (the rest in Excluded) for the second pivot, Converted in measures for the third, and finally Oppty Closed in measures for the fourth.
    When you run the report, you should get the display as desired, but there will be empty lines between these 4 views.

  • Query on an Index Organized Table (IOT) sorts data unnecessarily

    I created the table hist3 as follows:

    create table hist3(
      reference date,
      palette varchar2(6),
      artikel varchar2(10),
      menge number(10),
      status varchar2(4),
      text varchar2(20),
      data varchar2(40),
      constraint hist3_pk primary key (reference, palette, artikel)
    )
    organization index;

    Since the table is an IOT, I expect that retrieving rows in the same order as the primary key should be very fast.

    This is true for the following query:

    SQL> select * from hist3 order by reference;

    -----------------------------------------------------------------------------
    | Id | Operation        | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------
    |  0 | SELECT STATEMENT |          | 1000K |   82M |  3432  (1) | 00:00:42 |
    |  1 |  INDEX FULL SCAN | HIST3_PK | 1000K |   82M |  3432  (1) | 00:00:42 |
    -----------------------------------------------------------------------------

    But if I add the next column of the primary key as an order criterion, the query becomes very slow:
    SQL> select * from hist3 order by reference, palette;

    --------------------------------------------------------------------------------------------
    | Id | Operation              | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT       |          | 1000K |   82M |       | 22523  (1) | 00:04:31 |
    |  1 |  SORT ORDER BY         |          | 1000K |   82M |  200M | 22523  (1) | 00:04:31 |
    |  2 |   INDEX FAST FULL SCAN | HIST3_PK | 1000K |   82M |       |  2524  (2) | 00:00:31 |
    --------------------------------------------------------------------------------------------

    Looking at the execution plan, I don't understand why a SORT step should be needed, as the IOT already stores the data in the requested order.

    Any thoughts?
    Thomas

    There are various ways Oracle can sort VARCHARs.
    When you create an index on a VARCHAR column, the sort order is binary.
    Try alter session set nls_sort = 'BINARY' and re-run your query.
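    For example, as a sketch (assuming the session previously used a linguistic nls_sort):

    alter session set nls_sort = 'BINARY';
    -- with a binary sort the rows can be returned in index order,
    -- so the SORT ORDER BY step should disappear from the plan
    select * from hist3 order by reference, palette;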

  • Duplicate result sets returned from a stored procedure call

    Hello

    I have a stored procedure created in Java and called using Spring JDBC's StoredProcedure class; the stored procedure returns duplicate rows. Is this a known problem?

    When I run the same stored procedure in DBVisualizer, it does not display correctly either.

    The class below is used to execute the stored procedure:

    public class CustomerSearchProcedureRunner extends StoredProcedure {
     public CustomerSearchProcedureRunner(JdbcTemplate jdbcTemplate) {
      super();
      this.setJdbcTemplate(jdbcTemplate);
      this.declareParameter(new SqlReturnResultSet(RETURN_RESULTS, new CustomerRowMapper()));
      this.declareParameter(new SqlParameter(CUST_SP_IN_PARAM, Types.VARCHAR));
      this.setSql("{CALL INSURANCE.SEARCHCUSTOMER (?) ON ALL}");
      this.setSqlReadyForUse(true);
      this.compile();
     }
    }
    


    and the Java stored procedure that runs in SQLFire is given below:

    public class CustomerSearchProcedure {
     
     private static final String DOLLAR = "\\$";
     private static final String COLON = ":";
     private static final String CUST_NAME = "CUST_NAME";
     private static final String CUST_NO = "CUST_NO";
     private static final String GENDER = "GENDER";
     
     
     public static void searchCustomer (String customers, ResultSet[] outResults,
       ProcedureExecutionContext context) throws SQLException {
      StringBuilder sql = new StringBuilder();
      StringBuilder whereCondt = new StringBuilder();
      String[] tokens = new String[]{};
      
      if (customers != null && customers.trim().length() > 0) {
       tokens = customers.split(DOLLAR);
      }
      
      sql.append("<global>SELECT * FROM INSURANCE.CUSTOMERS ");
      whereCondt.append("WHERE CUST_PRIMARY IN ('Y', 'N') ");
      // Apply dynamic where condt
      for (int i=0; i < tokens.length; i++ ) {
       String token = tokens[i];
       if (token.startsWith(CUST_NO)) {
        if (whereCondt.length() > 0) {
         whereCondt.append(" AND ");
        }
        whereCondt.append("CUST_NO = " + token.substring(token.indexOf(COLON)+1));
       }
       if (token.startsWith(CUST_NAME))  {
        if (whereCondt.length() > 0) {
         whereCondt.append(" AND ");
        }
        whereCondt.append("CUST_NAME LIKE '"+ token.substring(token.indexOf(COLON)+1).trim() + "%'");
       }
       if (token.startsWith(GENDER)) {
        if (whereCondt.length() > 0) {
         whereCondt.append(" AND ");
        }
        whereCondt.append("GENDER ='"+ token.substring(token.indexOf(COLON)+1).trim() + "'");
       }
      } //End of for
      
      if (whereCondt.length() > 0) {
       sql.append(whereCondt.toString());
      }
      
      Connection cxn = context.getConnection();
      Statement stmt = cxn.createStatement();
      ResultSet rs = stmt.executeQuery(sql.toString());
      outResults[0] = rs;
     } //END OF METHOD
    }
    

    A correction to the preceding: "for the ON TABLE case the information about the dataset to be targeted on each node is also sent, so the tagged queries will target only this dataset (and avoid duplicates)"

    should read "in the ON TABLE case, tagged queries will only target the local primary data on the node for partitioned tables, while for replicated tables the query is sent to only one of the nodes (and so avoids duplicates in both cases)". The WHERE clause in ON TABLE is not used for pruning data, only for pruning the set of nodes to target.

    The tag still prunes the query to all of the local primary buckets in the other cases (i.e. ON ALL and ON SERVER GROUPS), so the comment about them being equivalent was incorrect. However, those will still fetch duplicate data for replicated tables, and ON TABLE is the only way to avoid that for now.
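    A hedged sketch of the resulting data-aware call; the procedure and table names follow the code above, and the parameter value is made up:

    -- route the procedure via ON TABLE so each node scans only its local
    -- primary data; this WHERE clause only prunes the set of target
    -- nodes, it does not filter rows
    CALL INSURANCE.SEARCHCUSTOMER('CUST_NO:101')
      ON TABLE INSURANCE.CUSTOMERS WHERE CUST_PRIMARY IN ('Y', 'N');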

  • Vista SP2: create an additional partition / merge unallocated space

    I looked around the forums but I can't seem to find an appropriate response.

    I recently reinstalled the operating system on my computer (hard drive swap).  This time, I decided to stay with the original Vista rather than downgrade to XP as I did the first time round.

    => My goal is to create a partition just for my data.

    I have never partitioned a drive myself before.  So far, I have not restored any data.

    I found a strange configuration on the hard disk, which is 500 GB:

    C: "Vista" (of course the Windows and other program files, the page file, etc.). About half of the total disk space, with plenty of free space.

    E: "Data" ("primary partition"). About half of the total disk space, but containing only recovery tools, with 4 GB occupied.

    A third volume with *no* label, 1.46 GB capacity with 100% free space.  It is marked 'Healthy (EISA Configuration)'.

    My plan had been to shrink C and E substantially, in particular E (down to about 6 GB, which gives 33% free space, since it only holds recovery tools - on my Win 8 machine I have a recovery partition, but it is very small, as expected).

    I shrank both, but Vista only allowed a maximum shrink of 50%.  I ended up with two unallocated spaces.  I formatted one (now called D) and it is now marked 'Healthy (Logical Drive)'.  It is also NTFS.

    I have not yet formatted the other unallocated space.

    So, my questions:

    1. What is this 1.46 GB partition?  Why is it there?  (I have never seen something like that.)

    2. Why can I not shrink the existing partitions more?  C still contains 69% free space.

    3. How do I merge the two new partitions?

    4. Why is the new partition D not a primary partition, and does it matter?  (I read somewhere that in Vista you can have 3 primary partitions.)  Does the unlabelled 1.46 GB volume count as a primary partition?

    => Ideally I would like a single data disk of about 65 to 75% of my 500 GB hard drive.  For now the newly formatted D drive (68.1 GB) and the unallocated space (115.9 GB) add up to only 184 GB, significantly less than half.

    Thanks for any help.

    I'm no expert on disk partitioning, as each version of Windows has introduced new types of disk and partition structures, each with different rules depending on the type and version of Windows.

    I have three partitions: one for Windows and data, one for beta versions of Windows, and one for installation files (so it survives reformatting). I copy my data to another partition before reinstalling (but it has been 6 years and 7 months since I last reinstalled Windows - what ordinary hassles are you going through for a once-a-decade event).

    Windows is designed around keeping the data on one disk. Many of its features are designed to make several drives appear as a single drive.

    In the worst case for your scenario, Windows still works but your data is lost - compared to your data being lost and nothing working at all. An inconsequential improvement.

  • Is it a problem having 5 standby logs?

    Hi. On the primary side I have 3 logfiles, and my standby also has 3 redo log files... for a few days the standby has not been in sync, and at the moment I am recovering it with an incremental backup... the command in question at the time:

    recover database noredo;

    Adjust the commands below according to your scenario.

    After that, create new redo log files:

    Step 10: On the standby... create the standby redo log files.

    For example (adjust the location, the group number and the size):

    SQL> alter database add standby logfile thread 1 group x 'redo log location' size xM;

    SQL> alter database add standby logfile thread 1 group 5 '/oracle/oradata01/test1/redo05.log' size 52428800;

    SQL> alter database add standby logfile thread 1 group 6 '/oracle/oradata01/test1/redo06.log' size 52428800;

    Step 11: On the standby... if there is any inconsistency in the datafile locations between the primary and the standby, you need to update the control file; follow these steps:

    SQL> alter system set standby_file_management = manual;

    SQL> alter database recover managed standby database cancel;

    SQL> shutdown immediate;

    SQL> startup mount

    SQL> alter database rename file 'primary datafile location' to 'standby datafile location';

    I followed these steps and I now have 5 redo logs... but my prod has only 3 logs. Is there a problem??

    Hello;

    Looks like you are missing at least one standby redo log. You might want to consider adding one.

    While standby redo logs are not mandatory, it is a good idea to have them. They must be the same size as the online redo logs, and

    you should have the same number of standby redo logs (or more) as regular redo logs.
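    As a quick sanity check, a sketch comparing the two (v$log and v$standby_log are the standard views):

    -- online vs standby redo log group counts and sizes
    select 'ONLINE' log_type, count(*) log_count, max(bytes) bytes from v$log
    union all
    select 'STANDBY', count(*), max(bytes) from v$standby_log;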

    Another query you can use:

    select group#, member from v$logfile where type = 'STANDBY';

    Also, if you need to drop or fix a standby log on the standby database, you must first cancel recovery:

    alter database recover managed standby database cancel;

    If you have a Data Guard broker configuration, do not use the above command; instead use:

    EDIT DATABASE <standby> SET STATE = 'APPLY-OFF';

    Best regards

    mseberg

  • Supplemental logging

    Hi all

    Supplemental logging has been enabled at the table level for the primary key with the statement below:

    ALTER TABLE <schema>.<table> ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;


    But under query returns nothing

    Select * from DBA_LOG_GROUP_COLUMNS where owner = < schema >;


    Instead of registering at the

    Select * from dba_log_groups where owner = < schema >;

    Can you please is because it has been activated as the primary key instead of individual columns?

    Thank you.


    Yes - since you did NOT specify individual columns, there is NOTHING to show in that view. As the docs say:

    ALL_LOG_GROUP_COLUMNS describes the columns that are accessible to the current user and that are specified in log groups.

    Since no such columns were specified, none can be shown.
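    Where the primary-key variant does show up is in DBA_LOG_GROUPS, as a log group type; a sketch (MYSCHEMA is a placeholder):

    -- PK supplemental logging appears as a system-generated log group,
    -- not as individual columns
    select log_group_name, table_name, log_group_type
    from   dba_log_groups
    where  owner = 'MYSCHEMA';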

  • OGG extract gets NULL primary key on update

    RDBMS version: 10.2.0.3 (RAC)

    OGG version: 11.2.1.0.1(for 10g)

    I have been confused by something like this for ten days:

    There is a 10.2.0.3 RAC cluster using OGG 11.2.1.0.1 for 10g to extract records; the parameter files are as follows:

    EXTRACT:

    EXTRACT extte

    SETENV (ORACLE_HOME = /u01/oracle)

    SETENV (NLS_LANG = AMERICAN_AMERICA.ZHS16GBK)

    USERID ogg@test, PASSWORD oggpassword

    TRANLOGOPTIONS ASMUSER sys@OGGASM, ASMPASSWORD asmpassword

    ENCRYPTTRAIL AES192, KEYNAME exttekey

    EXTTRAIL /ogg/dirdat/te

    FETCHOPTIONS FETCHPKUPDATECOLS

    TABLE TEST.*;

    I added the exttrail with the following commands:

    dblogin USERID ogg@test, PASSWORD oggpassword

    ADD EXTRACT extte, TRANLOG, BEGIN NOW, THREADS 2

    ADD EXTTRAIL /ogg/dirdat/te, EXTRACT extte

    and started the extract. Then I created a table and inserted records in the test schema:

    drop table t purge;

    create table t (name number primary key, age number);

    insert into t values (1, 100);

    commit;

    update t set age = 200 where name = 1;

    commit;

    And I saw the strangest thing using logdump:

    open /ogg/dirdat/te000000

    Hdr-Ind    : E (x45)      Partition  : . (x04)
    UndoFlag   : . (x00)      BeforeAfter: A (x41)
    RecLength  : 20 (x0014)   IO Time    : 23/12/2013 16:55:56.000.000
    IOType     : 5 (x05)      OrigNode   : 255 (xff)
    TransInd   : . (x03)      FormatType : R (x52)
    SyskeyLen  : 0 (x00)      Incomplete : . (x00)
    AuditRBA   : 17           AuditPos   : 4146564
    Continued  : N (x00)      RecCount   : 1 (x01)

    23/12/2013 16:55:56.000.000 Insert    Len 20 RBA 1054
    Name: TEST.T
    After Image:                          Partition 4   G  s
     0000 0005 0000 0001 3100 0100 0700 0000 0331 3030 | ........1........100
    Column 0 (x0000), Len 5 (x0005)
     0000 0001 31                                      | ....1
    Column 1 (x0001), Len 7 (x0007)
     0000 0003 3130 30                                 | ....100

    Logdump 393 > n

    ___________________________________________________________________

    Hdr-Ind    : E (x45)      Partition  : . (x04)
    UndoFlag   : . (x00)      BeforeAfter: A (x41)
    RecLength  : 19 (x0013)   IO Time    : 23/12/2013 16:55:57.000.000
    IOType     : 15 (x0f)     OrigNode   : 255 (xff)
    TransInd   : . (x03)      FormatType : R (x52)
    SyskeyLen  : 0 (x00)      Incomplete : . (x00)
    AuditRBA   : 17           AuditPos   : 4147728
    Continued  : N (x00)      RecCount   : 1 (x01)

    23/12/2013 16:55:57.000.000 FieldComp Len 19 RBA 1182
    Name: TEST.T
    After Image:                          Partition 4   G  s
     0000 0004 ffff 0000 0001 0007 0000 0003 3230 30   | ................200
    Column 0 (x0000), Len 4 (x0004)
     ffff 0000                                         | ....
    Column 1 (x0001), Len 7 (x0007)
     0000 0003 3230 30                                 | ....200

    the primary key has become ffff 0000, which is NULL, in the after image. So when I replicated to the target database, OGG complained that it found no record with a NULL primary key.

    And I have reproduced it in the production environment; it happens again with OGG 11.2.1.0.1 for 11g on a 10.2.0.3 (RAC) RDBMS.

    Earlier, I installed a 10.2.0.3 database on a single node (no RAC) and did exactly the same OGG extraction:

    Hdr-Ind    : E (x45)      Partition  : . (x04)
    UndoFlag   : . (x00)      BeforeAfter: A (x41)
    RecLength  : 20 (x0014)   IO Time    : 23/12/2013 17:21:02.000.000
    IOType     : 5 (x05)      OrigNode   : 255 (xff)
    TransInd   : . (x03)      FormatType : R (x52)
    SyskeyLen  : 0 (x00)      Incomplete : . (x00)
    AuditRBA   : 40333        AuditPos   : 42779524
    Continued  : N (x00)      RecCount   : 1 (x01)

    23/12/2013 17:21:02.000.000 Insert    Len 20 RBA 1600
    Name: TEST.T
    After Image:                          Partition 4   G  s
     0000 0005 0000 0001 3100 0100 0700 0000 0331 3030 | ........1........100
    Column 0 (x0000), Len 5 (x0005)
     0000 0001 31                                      | ....1
    Column 1 (x0001), Len 7 (x0007)
     0000 0003 3130 30                                 | ....100

    ___________________________________________________________________

    Hdr-Ind    : E (x45)      Partition  : . (x04)
    UndoFlag   : . (x00)      BeforeAfter: A (x41)
    RecLength  : 20 (x0014)   IO Time    : 23/12/2013 17:21:02.000.000
    IOType     : 15 (x0f)     OrigNode   : 255 (xff)
    TransInd   : . (x03)      FormatType : R (x52)
    SyskeyLen  : 0 (x00)      Incomplete : . (x00)
    AuditRBA   : 40333        AuditPos   : 42781160
    Continued  : N (x00)      RecCount   : 1 (x01)

    23/12/2013 17:21:02.000.000 FieldComp Len 20 RBA 1735
    Name: TEST.T
    After Image:                          Partition 4   G  s
     0000 0005 0000 0001 3100 0100 0700 0000 0332 3030 | ........1........200
    Column 0 (x0000), Len 5 (x0005)
     0000 0001 31                                      | ....1
    Column 1 (x0001), Len 7 (x0007)
     0000 0003 3230 30                                 | ....200

    everything seems fine.

    Am I wrong in the RAC/ASM configuration for extraction?

    Thank you very much for reading, and I appreciate any advice.

    Referring to

    https://community.Oracle.com/thread/2395043

    I read the answer by amardeep.sidhu, and I checked:

    SUPPLEME SUP SUP SUP SUP
    -------- --- --- --- ---
    YES      NO  NO  NO  NO

    Only minimal supplemental redo logging was enabled; trandata was not enabled.

    After additionally running:

    SQL> alter database add supplemental log data (primary key) columns;

    I get good data.
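    For the trandata part, the GGSCI equivalent would be a sketch like this (table name from the test above):

    GGSCI> dblogin USERID ogg@test, PASSWORD oggpassword
    GGSCI> ADD TRANDATA TEST.T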

    Thanks a lot to this thread.

    And now I have time to find a way around another problem: an ASM password which includes '%', which OGG cannot use to connect to the ASM instance :)

  • PartitionListener interface

    I am interested in implementing the PartitionListener interface for a partitioned region that I have defined in my distributed system. I am particularly interested in the afterPrimary(...) method that the interface defines. The documentation for the interface at http://www.vmware.com/support/developer/vfabric-gemfire/663-api/com/gemstone/gemfire/cache/partition/PartitionListener.html, however, recommends contacting support before using this API. Is this interface safe to use? Am I right to assume that the afterPrimary(...) method fires on the node where the primary bucket is created?

    Thank you

    Tom

    Tom,

    The PartitionListener is safe to use. We have several customers using it.

    The afterPrimary callback fires on the node where the primary is created. One interesting thing to note is that the afterRegionCreate callback will be called after the afterPrimary callbacks when a persistent region is recovered. Thus, in that case, you will not know which region the afterPrimary callback is for. I usually set the region name in the properties for the listener so that I don't have to rely on the afterRegionCreate callback. I've attached an example.

    Here's how to configure it in cache.xml:

    <partition-listener>
      <class-name>TestPartitionListener</class-name>
      <parameter name="regionName">
        <string>myRegion</string>
      </parameter>
    </partition-listener>
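    A minimal sketch of the listener class that configuration implies, assuming the GemFire PartitionListenerAdapter and Declarable APIs; the "regionName" parameter name matches the XML above:

    import java.util.Properties;
    import com.gemstone.gemfire.cache.Declarable;
    import com.gemstone.gemfire.cache.partition.PartitionListenerAdapter;

    public class TestPartitionListener extends PartitionListenerAdapter implements Declarable {
      private String regionName;

      // receives the <parameter> values from cache.xml, so the region
      // name is known without relying on afterRegionCreate
      public void init(Properties props) {
        this.regionName = props.getProperty("regionName");
      }

      @Override
      public void afterPrimary(int bucketId) {
        System.out.println("Now primary for bucket " + bucketId + " of region " + this.regionName);
      }
    }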

    Barry
