Stitched queries in OBIEE

Question:

Is OBIEE capable of handling stitched queries?

I have two fact tables which are joined through a date dimension, like this:

PRICE FACT >- DATE DIMENSION -< FX RATE FACT

Now, my PRICE fact table is also attached to another dimension called AREA. Here is an overview:

AREA -< PRICE FACT >- DATE DIMENSION -< FX RATE FACT

If I run a query like this:

DATE - PRICE - FX RATE, the request runs without any problem, but if I add a dimension that is NOT shared by the two fact tables, then I get an error like this:

[nQSError: 15018] Incorrectly defined logical table source (for fact table FX RATE) does not contain mapping for [AREA].

OBIEE throws the error because the FX RATE fact table does not share the AREA dimension that the PRICE fact table has.

How can I solve this problem? I want to create a query like this:

AREA - DATE - PRICE - FX RATE

After working with Cognos for a while, where this is called stitched queries, I wonder if OBIEE has the same functionality.

Help, please.

Thank you.

When you click on the LTS of a logical table, you see three tabs - General, Column Mapping and Content.
Select the Content tab here. Set "Aggregation content, group by" to "Logical Level". Then, for each dimension, select a level: for a conformed dimension select Detail (or lower), and for a non-conformed dimension select Total.
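With the content levels set this way, a request mixing both facts becomes answerable: the BI Server fires one physical query per fact table and stitches the result sets together on the conformed DATE keys, reporting the non-conformed measure at its Total level for each AREA row. A logical SQL sketch of such a request (the presentation names below are made up for illustration, not taken from an actual RPD):

```sql
-- Hypothetical presentation names; AREA is conformed only to PRICE,
-- so the FX Rate measure repeats at its Total level per AREA row.
SELECT
  "Prices"."Area"."Area Name",
  "Prices"."Date"."Calendar Date",
  "Prices"."Facts"."Price",
  "Prices"."Facts"."FX Rate"
FROM "Prices"
```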

Tags: Business Intelligence

Similar Questions

  • How to avoid inline queries

    Hello Experts,

    How can we avoid the inline queries generated by OBIEE? Right now OBIEE generates some 3-4 inline queries within a single query to return the results.

    Thank you
    Gouda.

    Try converting the snowflake to a star.

    Mark as correct if it is useful.

    Thank you.

  • Executing "xmlelement" in a BI Publisher SQL query

    I use Oracle BI Publisher 10.1.3.4.0d

    I was reading a blog and saw that it should be possible to use SQL/XML. However, when I put the following SQL in the data model I get this error: java.io.IOException: prepare the request failed [nQSError: 27002] near: syntax error [nQSError: 26012]

    The SQL is as follows:

    SELECT
      XMLELEMENT("Test",
        XMLATTRIBUTES(
          BOOK_FACT.BOOK_NUM AS BOOK_NUM,
          BOOK_FACT.BOOK_TYPE AS BOOK_TYPE,
          BOOK_FACT.BOOK_USER_CODE AS BOOK_USER_CODE,
          BOOK_FACT.CMPY_NUM AS CMPY_NUM
        )
      ).getClobVal() AS SampleBook
    FROM BOOK.BOOK_FACT BOOK_FACT


    I need to get the data out of BI Publisher as follows:

    <ROWSET>
      <ROW>
        <SAMPLE>
          <Test BOOK_NUM="1" BOOK_TYPE="223" CMPY_NUM="1925" BOOK_USER_CODE=".116"/>
        </SAMPLE>
      </ROW>
    </ROWSET>
    ....


    Hope you can help, because it's pretty frustrating.

    Regards

    E

    From the request, it looks like your data source is OBIEE.

    The query that you have written is not a valid OBIEE query. That's why it failed.
    Test this query in the ODBC client, and it throws the same exception.

    These SQL/XML constructs are supported in direct database (physical) SQL queries, not in OBIEE logical queries.

  • Connect OBIEE Admintool to hive: problems with ODBC. Any option?

    We are facing a blocking problem when trying to connect the Admin Tool to a Hive instance using an ODBC driver.

    For a proof of concept, we are trying to set up a Hive domain to analyze in OBIEE. Everything seems to work OK until we try to load the metadata. At that point, the schema is found but no tables appear under it. We can add the physical table by hand, and then we can successfully update the row count. We then try to view the data rows, and at that point we face a strange behavior. Let me summarize the steps we have taken.

    (1) We downloaded ODBC drivers to connect to Hive (we tried with Hortonworks, Cloudera, Microsoft and MapR, but the results are the same);

    (2) We set up the connection to Hive on port 10000 and successfully ran the ODBC test (BTW: from SQL Developer we can connect to the same server via the JDBC driver and query the tables there);

    (3) Then we started the Admin Tool. Once we defined the user name and the password, we selected the ODBC connection. In fact the user name and password aren't strictly necessary, but we provide them anyway;

    (4) The next step was to load the metadata; and here is the first issue, because we can see the schemas but not the tables.

    Connection_OBIEE_BigData.PNG

    (5) Anyway, we went on and inserted a schema (we read there could be problems with more than one schema, so we simply selected one); in a blog (by Rittman, who really is 'THE' OBIEE guru) it is said that the tables are not loaded directly, so we tried simply adding a physical table in the physical layer under the default schema, making sure the table exists;

    (6) Once we added the table 'physically', we proceeded with the row count; and we correctly get the number of rows when we mouse over the table name after that;

    (7) Then we tried to retrieve the rows, and here is the issue; we get the following error message:

    ErroreSQL.PNG

    We also dug a bit and concluded that the query received by Hive, instead of being

    SELECT * FROM trucks

    becomes

    SELECT FROM trucks

    as if the '*' is dropped from the SQL statement sent. If we run the same query in Hive directly, we get exactly the same ParseException, so it really seems that there is a problem in the construction of the SQL statement.

    Any idea about this problem? (If a fix even exists, because after so many attempts we are convinced this is a bug that can only be fixed by a new version of the Admin Tool.)

    BTW: we have tried almost all the possible options of the ODBC driver (such as 'Use Native Query', 'Use Unicode' and so on), and as regards the Admin Tool, we have not seen which option or preference (at the physical layer) could influence this anomaly.

    Any help is really, really appreciated. Thanks in advance.

    OK, I get what you're saying. I mean: focus on the bigger picture. Whether you can View Data or list tables when importing metadata is a bit unrelated to the final goal. Annoying, but irrelevant. What you are trying to do is get OBIEE to build dynamic SQL queries based on the logical requests that your users will submit through Presentation Services against the presentation layer.

    Continue to build your RPD (ignoring that View Data does not work) at a very simple level, with a single fact/dim in the business model and a single subject area - and then run a query in Answers against it. The point of all this is to see what SQL OBIEE then generates, and whether it's valid for Hive.

  • OBIEE 11g - NQ server file system is quickly filling up

    Hello

    We are working on an OBIEE 10g to 11g migration project. Reports that run very well in 10g are filling up the 11g NQ server file system and causing errors.

    Old configuration - OBIEE 10g - DB 11i.

    New configuration - OBIEE 11g - DB 12.

    The NQ server space is the same in both configurations, and the report is exactly the same in each environment.

    There is a report that works very well in 10g but is now generating a huge file on the NQ server in 11g. If several users run this report with different parameters, the NQ server file system fills up quickly and causes the error.

    We have checked the logical queries; they are exactly the same.

    The physical queries are slightly different, but the result is exactly the same.

    Any ideas or guidelines to debug this problem?

    Kind regards

    GHE.

    The solution was related to a database column length.

  • OBIEE 11g - purge the bi_server cache automatically

    Hello

    We have a requirement in our deployment for near real-time reporting. Now, I want OBIEE to clear the bi_server cache after each ETL load during the day.

    I have an idea to use a repository dynamic variable to read an update column in a table populated by the ODI ETL load.

    My question is how and where do I then use/call the dynamic variable to tell OBIEE to clear/empty the cache.

    Thank you.

    If I understand you correctly, I need to activate the cache in EM for OBIEE and then include the cache purge in the ETL after a full load.

    Yes

    If yes, where can I specify the cache purge in my ETL? Please explain, or point me to any doc that shows the steps.

    Maybe JeromeFr can help with the details. Basically, you send a command (for example SAPurgeAllCache, but there are others that target specific tables or queries) to the BI Server, which can be connected to via its JDBC driver, and which I suppose can therefore be called as part of an ODI routine.

    Edit: I wrote a blog post detailing how to purge the OBIEE cache from ODI: Rittman Mead Consulting - managing the OBIEE BI Server cache from ODI 12c
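    As a sketch of what the ETL step would send to the BI Server (the database/schema/table names below are placeholders, not from this thread):

    ```sql
    -- Run against the BI Server connection (e.g. via nqcmd or its JDBC driver)
    CALL SAPurgeAllCache();

    -- Or purge only the cache entries built on one physical table:
    CALL SAPurgeCacheByTable('MyDatabase', '', 'MySchema', 'W_SALES_F');
    ```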

  • Problem with ATG OOTB reports using OBIEE 11 g

    Hello
    R. j. Nunes

    For the OBIEE-based ATG OOTB reports, I am getting an error on all analyses that have a view built on the union queries below, such as Top Product Sales, Top Product Returns, Key Traffic Indicators etc.

    Error generating the view. Error getting cursor in GenerateHead

    Error details

    Error codes: OAMP2OPY:E22KEPYE

    DXE compiler error. Nested aggregate expression is not allowed in the aggregate query. Source name: c465ddfed81c35fcc_X_leaf_1. XML: <sawxd:expr xmlns:sawxd="com.siebel.analytics.web/expressiondxe/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="sawxd:aggregate1" op="first"> <sawxd:expr xmlns:sawq="com.siebel.analytics.web/querydxe/v1.1" xsi:type="sawq:field" pos="12"/> </sawxd:expr>

    I tried the workaround mentioned in https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=450299094214827&id=1467110.1

    but I still get the error.

    Anyone else facing a similar issue?

    ATG Version: 10.2

    OBIEE Version: 11.1.1.6.0

    Thanks, Saud

    Hi Saud

    You need to install

    11.1.1.6.2 BP1

    That should solve the problems you encounter.

    It is in fact the supported version for ATG 10.

    Thank you

    Gareth

    Please mark any reply as "Correct answer" or "Helpful answer" if it helps and answers your question, so that others can identify the correct/helpful reply among the many replies.

  • Adding an XML data source in OBIEE 11g on Linux

    Hi guys, I need your help.

    I created a new database in the physical layer with type XML.

    In the connection pool (named xml_source), I put the path to the XML file in the data source name, with this format: /u01/app/xmltest/.

    My test XML file is xml_test.xml, so in the table information I put the name of the XML like this: xml_test (without the .xml extension).

    The issue is that when I issue the SQL, the displayed error message says:
    Query Failed: [nQSError: 64023] cannot access the /u01/app/xmltest\xml_test.xml: no such file or directory

    As you can read in the error message, there is a '\' just before the xml file name. Linux expects a '/', not a '\', so why does OBIEE build a bad path? How can I tell OBIEE that this is a Linux environment, not Windows?

    Thank you in advance.

    Sorry, my mistake: I found an XML field in the physical Table Properties tab. I put the full URL to the file there, and now I am able to issue SQL queries against it. Thank you!

  • OBIEE web services executeSQLQuery()

    Hello

    The documentation and examples for the use of the web services are extremely poor - does anyone have an example of logical SQL that works in the executeSQLQuery() method of the XmlViewService? It does not accept any query I give it, which leads me to the question of the scope of its visibility. What exactly can these queries target? Does anyone have an example query hitting a table from the HCM subject area?

    Thank you

    Tor

    Hey Tor,

    Your system analyst didn't really help you

    Your LSQL should look like this example:

    SELECT
      "A - Sample Sales"."Products"."P4  Brand",
      "A - Sample Sales"."Time"."T02 Per Name Month",
      "A - Sample Sales"."Base Facts"."1- Revenue"
    FROM "A - Sample Sales"
    WHERE
    "Time"."T05 Per Name Year" = '2010'
    

    FROM is not a table but the subject area name; your 'table' goes into the column reference (the first part is the subject area once again, then comes the table name, and finally the column name).

    If you do not have access to the OBIEE front-end where you could create an analysis with the columns you want returned and just copy and paste the LSQL, if you do not have access to the RPD to see the exact names, and if you do not have a very good doc with all the subject areas and their content... I would say that you have no way of guessing the correct names.

    Try to start with your LSQL as simple as possible: remove the WHERE and ask for just 1-2 columns.

    Try asking your system analyst to check the details he gave you.

    This method queries the BI Server (and not the physical server used in the RPD), so you query the OBIEE presentation layer: subject areas, presentation tables, presentation columns.
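    Following the advice above, a minimal starting LSQL might look like this (reusing the SampleApp names from the earlier example; substitute your own subject area and column):

    ```sql
    SELECT "A - Sample Sales"."Time"."T02 Per Name Month" FROM "A - Sample Sales"
    ```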

  • Partitioning strategy for the OBIEE query performance

    I am using partitioning for the first time, and I am having trouble determining whether I can partition my fact table in a way that will allow partition pruning to work with the queries OBIEE generates.  I've set up a simple example using a query I wrote to illustrate my problem.  In this example, I have a star schema with a fact table, and I join in two dimensions.  My fact table is LIST/RANGE partitioned on JOB_ID and TIME_ID, and those are the keys that link to the two dimensions that I use in this query.


    SELECT SUM(boxbase)
    FROM TEST_RESPONSE_COE_JOB_QTR A
    JOIN DIM_STUDY C ON A.job_id = C.job_id
    JOIN DIM_TIME B ON A.response_time_id = B.time_id
    WHERE C.job_name = 'FY14 CSAT'
    AND B.fiscal_quarter_name = 'Quarter 1';


    From what I can tell, because the query actually filters on columns in the dimensions instead of the partition key columns in the fact table, partition pruning isn't actually happening.  I actually see slightly better performance from a non-partitioned table, even though I wrote this query specifically for the partitioning strategy that is now in place.


    If I run the next statement, it runs a lot faster, and the explain plan is very simple; it seems to me that it prunes down to a single subpartition, as I hoped.  But this isn't what any query generated by OBIEE will look like.


    SELECT SUM(boxbase)
    FROM TEST_RESPONSE_COE_JOB_QTR
    WHERE job_id = 101123480
    AND response_time_id < 20000000;


    Any suggestions?  I get some benefit from partition exchange loading with this configuration, but if I'm going to sacrifice report performance then maybe it isn't useful; at the very least, I would need to get rid of my subpartitions if they are not providing any benefit.


    Here are the explain plans I got for the two queries in my original post:

    Plan for the first (dimension-filtered) query:

    Operation | Object Name | Rows | Bytes | Cost | PStart | PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS | | 1 | | 20960 | |
    SORT AGGREGATE | | 1 | 13 | | |
    VIEW | SYS.VW_ST_5BC3A99F | 101K | 1M | 20960 | |
    NESTED LOOPS | | 101K | 3M | 20950 | |
    PARTITION LIST SUBQUERY | | 101K | 2M | 1281 | KEY(SUBQUERY) | KEY(SUBQUERY)
    PARTITION RANGE SUBQUERY | | 101K | 2M | 1281 | KEY(SUBQUERY) | KEY(SUBQUERY)
    BITMAP CONVERSION TO ROWIDS | | 101K | 2M | 1281 | |
    BITMAP AND | | | | | |
    BITMAP MERGE | | | | | |
    BITMAP KEY ITERATION | | | | | |
    BUFFER SORT | | | | | |
    INDEX SKIP SCAN | CISCO_SYSTEMS.DIM_STUDY_UK | 1 | 17 | 1 | |
    BITMAP INDEX RANGE SCAN | CISCO_SYSTEMS.FACT_RESPONSE_JOB_ID_BMID_12 | | | | KEY | KEY
    BITMAP MERGE | | | | | |
    BITMAP KEY ITERATION | | | | | |
    BUFFER SORT | | | | | |
    VIEW | CISCO_SYSTEMS.index$_join$_052 | 546 | 8K | 9 | |
    HASH JOIN | | | | | |
    INDEX RANGE SCAN | CISCO_SYSTEMS.DIM_TIME_QUARTER_IDX | 546 | 8K | 2 | |
    INDEX FULL SCAN | CISCO_SYSTEMS.TIME_ID_PK | 546 | 8K | 8 | |
    BITMAP INDEX RANGE SCAN | CISCO_SYSTEMS.FACT_RESPONSE_TIME_ID_BMIDX_11 | | | | KEY | KEY
    TABLE ACCESS BY USER ROWID | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR | 1 | 15 | 19679 | ROWID | ROW L

    Plan for the second (explicit partition key) query:

    Operation | Object Name | Rows | Bytes | Cost | PStart | PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS | | 1 | | 1641 | |
    SORT AGGREGATE | | 1 | 13 | | |
    PARTITION LIST SINGLE | | 198K | 2M | 1641 | KEY | KEY
    PARTITION RANGE SINGLE | | 198K | 2M | 1641 | 1 | 1
    TABLE ACCESS FULL | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR | 198K | 2M | 1641 | 36 | 36


    Does it seem unreasonable to think that relying on our indexes on a non-partitioned table (or one partitioned only to help the ETL) can actually work better than partitioning in a way where we could get some dynamic pruning, but never static pruning?

    Yes - standard tables with indexes can often outperform partitioned tables. It all depends on the types of queries and query predicates that are typically used, and on the number of rows generally returned.

    Partition pruning eliminates ENTIRE partitions - regardless of the number of rows in the partition or table. An index, on the other hand, is avoided if the query predicate needs a significant number of rows, since Oracle can determine that it is cheaper to simply use multiblock reads and do a full scan.

    A table with 1 million rows and a query predicate that wants 100K of them probably will not use an index at all. But the same table with two partitions could easily have one of the partitions pruned, making the "effective number of rows" only 500K or less.

    If you are partitioning for performance, you should test your critical queries to make sure partitioning/pruning is effective for them.

    SELECT SUM(boxbase)
    FROM TEST_RESPONSE_COE_JOB_QTR A
    JOIN DIM_STUDY C ON A.job_id = C.job_id
    JOIN DIM_TIME B ON A.response_time_id = B.time_id
    WHERE C.job_name = 'FY14 CSAT'
    AND B.fiscal_quarter_name = 'Quarter 1';

    So, what is a typical value for 'A.response_time_id'? What does a 'B.time_id' represent?

    Because one way of providing explicit partition keys may be to use a range of 'response_time_id' values from the FACT table rather than a 'fiscal_quarter_name' value from the DIMENSION table.

    As if 'Quarter 1' corresponded to a range of dates from '01/01/YYYY' to '03/31/YYYY'.
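    A sketch of that rewrite (the YYYYMMDD encoding of response_time_id and the boundary values are assumptions for illustration):

    ```sql
    SELECT SUM(A.boxbase)
    FROM TEST_RESPONSE_COE_JOB_QTR A
    JOIN DIM_STUDY C ON A.job_id = C.job_id
    WHERE C.job_name = 'FY14 CSAT'
      -- explicit range on the partition key, so static pruning can happen
      AND A.response_time_id BETWEEN 20140101 AND 20140331;
    ```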

    Also, you said that you are partitioning on JOB_ID and TIME_ID.

    But if your queries relate mainly to DATES / TIMES, you might be better off using TIME_ID for the PARTITIONS and JOB_ID, if necessary, for the subpartitioning.

    Date range partitioning is one of the most common schemes around, and it serves both performance and ease of maintenance (deleting/archiving old data).
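    For illustration, a minimal DDL sketch of that layout (the column list, boundary values and subpartition scheme are assumptions, not the poster's actual table):

    ```sql
    CREATE TABLE TEST_RESPONSE_COE_JOB_QTR (
      job_id           NUMBER,
      response_time_id NUMBER,
      boxbase          NUMBER
    )
    PARTITION BY RANGE (response_time_id)   -- dates drive pruning
    SUBPARTITION BY LIST (job_id)           -- jobs support exchange loading
    SUBPARTITION TEMPLATE (
      SUBPARTITION sp_other VALUES (DEFAULT)
    )
    (
      PARTITION p_2014_q1 VALUES LESS THAN (20140401),
      PARTITION p_2014_q2 VALUES LESS THAN (20140701),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );
    ```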

  • OBIEE 11.1.1.7.1 - 'Searching' issue

    Hello

    We upgraded to OBIEE 11.1.1.7.1. When you click Apply in the prompt bar to generate one of our dashboards, the reports show 'Searching...' forever.

    When I check the report status in 'Administration' - 'Manage Sessions', I can see that the queries are completed in 5 seconds.

    When refreshing the page, the reports display correctly. This issue affects only a single subject area / dashboard, and only some reports.

    Does anyone know this specific problem/bug? Apparently we don't have the bug in Firefox, only in Internet Explorer.

    Thank you

    The issue was found. There is a bug in version 11.1.1.7 with Internet Explorer.

    The '11.1.1.7.140114' patch fixes the problem.

  • OBIEE 11.1.1.6.2 row-wise init for the ROLES variable

    Gurus,

    Why does the NQ_SESSION.ROLES variable (initialized row-wise) behave differently from the other row-wise initialized session variables?

    I use EBS authentication and authorization for OBIEE, so my request for authorisation is

    --
    SELECT DISTINCT 'ROLES', RESPONSIBILITY_KEY
    FROM FND_USER, FND_USER_RESP_GROUPS, FND_RESPONSIBILITY_VL
    WHERE FND_USER.user_id = FND_USER_RESP_GROUPS.user_id
    AND FND_USER_RESP_GROUPS.RESPONSIBILITY_ID = FND_RESPONSIBILITY_VL.RESPONSIBILITY_ID
    AND FND_USER_RESP_GROUPS.RESPONSIBILITY_APPLICATION_ID = FND_RESPONSIBILITY_VL.APPLICATION_ID
    AND FND_USER_RESP_GROUPS.start_date < SYSDATE
    AND (CASE WHEN FND_USER_RESP_GROUPS.end_date IS NULL THEN SYSDATE ELSE TO_DATE(FND_USER_RESP_GROUPS.end_date) END) >= SYSDATE
    AND FND_USER.user_name = 'VALUEOF(NQ_SESSION.USER)';
    --

    Now, I intend to use these roles (EBS responsibility names), which I have mapped in a DB table against a profit center; here is how the data looks in the DB:

    ID | PROFIT_CENTER | RESPONSIBILITY
    -----------------------------------
    0  | 0             | 0
    1  | 100           | BI_Fin_Role
    2  | 200           | BI_P2P_Role
    3  | 300           | BI_Inv_Role
    ......

    Then my Profit Center initialization block is:

    SELECT DISTINCT 'PROFIT_CENTER', PROFIT_CENTER FROM WC_OBIEE_PC_SECURITY WHERE RESPONSIBILITY IN (VALUELISTOF(NQ_SESSION.ROLES))

    Therefore, User1 has the BI_Fin_Role and PC_Security roles while User2 has BI_Inv_Role and PC_Security; so when User1 connects they should see only data for Profit Center 100, and User2 should see only 300.

    I created a data filter on this application role (PC_Security) limiting with "Dim.Profit Center"."Profit Center" = VALUEOF(NQ_SESSION.PROFIT_CENTER)

    But the first problem I encounter is that no value gets set for PROFIT_CENTER, which means VALUELISTOF(NQ_SESSION.ROLES) is not expanded or recognized by the BI Server when it sends this query to the DB.

    This is confirmed by my queries log that says:

    [2013-04-29T12:49:06.000+00:00] [OracleBIServerComponent] [TRACE:5] [USER-39] [ecid: 11d1def534ea1be0:48033065:13e4213bbd0:-8000-0000000000008dc8] [tid: 47796940] [requestid: fffe0313] [sessionid: fffe0000] [username: ] - An initialization block named 'PC_Security', for a session variable, issued the following SQL query:

    SELECT DISTINCT 'PROFIT_CENTER', PROFIT_CENTER FROM WC_OBIEE_PC_SECURITY WHERE RESPONSIBILITY IN (VALUELISTOF(NQ_SESSION.ROLES))

    Returned 0 rows. Query status: Success

    --

    So I tried issuing the SQL directly to the BI Server through Issue SQL:

    SELECT "Profit Center"."Profit Center" FROM "SLA Details" WHERE "Profit Center"."Profit Center" = VALUEOF(NQ_SESSION.ROLES)


    and the query log gives the physical SQL below, which blew my mind, as the value is delimited by ';':

    select distinct T1260626.ACCOUNT_SEG3_CODE as c1
    from
    W_GL_ACCOUNT_D T1260626 /* Dim_W_GL_ACCOUNT_D */
    where (T1260626.ACCOUNT_SEG3_CODE = 'BIAuthor;BIConsumer;PC_Security;BI_Fin_Role;AuthenticatedUser')

    I have other row-wise init blocks, e.g. for HR_ORG, which when fired and used in reports produce IN lists ('1000', '2000', ...), which is what I expected to see in the filter and query here.

    Am I doing something wrong here? Can someone please point me in the right direction?

    Any help is very appreciated.

    Thank you
    VidyaS

    Published by: VidyaS on 29 April 2013 14:47

    This is because the ROLES variable in OBIEE 11g is designed to retrieve groups from LDAP, a DB, etc. as a single semicolon-delimited string; this isn't the case with the other row-wise init blocks.

    Refer to: OBI 11g - LDAP and string delimited by semicolons for groups [ID 1274964.1]

    HTH,
    SVS
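    One workaround worth sketching (my suggestion, not from the support note above): populate a separate row-wise variable with the same EBS authorization query, since ordinary row-wise variables do expand into IN lists. The variable name MY_ROLES is a placeholder:

    ```sql
    -- Row-wise init block for a custom variable, one row per responsibility
    SELECT DISTINCT 'MY_ROLES', RESPONSIBILITY_KEY
    FROM FND_USER, FND_USER_RESP_GROUPS
    WHERE FND_USER.user_id = FND_USER_RESP_GROUPS.user_id
    AND FND_USER.user_name = 'VALUEOF(NQ_SESSION.USER)';

    -- The profit center block can then use it as a normal IN list:
    -- ... WHERE RESPONSIBILITY IN (VALUELISTOF(NQ_SESSION.MY_ROLES))
    ```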

  • OBIEE concurrency

    I'm working with OBIEE 11g (11.1.1.5).
    I have a dashboard with several sections of several reports/queries each.
    Each query takes a few seconds, and so the dashboard takes some time until it is fully loaded.
    The log implies that the queries are run sequentially, not in parallel.

    Is it possible to configure OBIEE to run several queries at the same time?

    I checked the connection pool configuration in the Administration Tool - maximum connections is set to 10.

    Thank you

    Report queries are meant to run in parallel if they are independent.
    Can you share the part of the log where it says that the queries are executed sequentially?

    Thank you

  • BI Office and IE support for OBIEE 11.1.1.6 on Windows 7

    Hello Experts

    Could you please answer the following support queries for OBIEE 11.1.1.6 BP1:

    1. Is IE8 supported, or will I have to switch to IE9?

    2. BI Office plugins: are they supported on Windows 7 32-bit?

    3. Admin Tool: is it supported on Windows 7 32-bit?

    I have seen the certification matrix, but it seems confusing.

    Urgent assistance/pointers would be appreciated.

    No points ;)

  • Can we do a union between 2 fields in an OBIEE column formula?

    Hello gurus,

    Can we do a union between 2 fields and compute that into a single column in the criteria of an OBIEE report?

    Vieira says:
    Hey David,

    I'm using the same method to evaluate the year and the month name:

    Year: YEAR(CURRENT_DATE)

    Month name: MONTHNAME(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE))

    but I want both values in a single column...
    Because I have to create a report that runs for the year and for the last month, both at the same time.
    So once I can compute it, say by doing a union between the YEAR and MONTHNAME columns, I'll use a PivotTable to create this report...
    So basically I want a column that has CURRENT YEAR and LAST MONTHNAME as its list of values, and that's the reason why I want the union in the column formula...

    Let me know if you need more information...

    OK, got it.

    So follow these steps:

    (1) In your first report, put your attribute columns, your measure column and a dummy column.

    (2) Place a filter on the column ("is equal to / is in"):

    MONTH(table.date) = MONTH(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE))

    (3) In the fx of the dummy column, delete the content, enter 'Previous Month' and name the column 'Date Range' (or whatever you want).

    This will generate a report where the measure is for the previous month.

    (4) In your second report, put the same attribute columns, your measure column and a dummy column.

    (5) Place a filter on the column ("is equal to / is in"):

    YEAR(table.date) = YEAR(CURRENT_DATE)

    (6) In the fx of the dummy column, delete the content, enter 'Current Year-to-Date' and name the column 'Date Range' (or whatever you want).

    This will generate a report where the measure is for the current year.

    (7) Combine the two queries with a UNION.

    The combined report will have the values 'Previous Month' and 'Current Year-to-Date' in the same column, as you want.

    (8) In your PivotTable, put the attribute columns in the Rows section, the measure in the Measures section and the 'Date Range' column in the Columns section.

    That should do it.
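    The steps above boil down to a union report like this in logical SQL (table and column names are placeholders):

    ```sql
    SELECT 'Previous Month' AS "Date Range", "Sales"."Region", "Sales"."Revenue"
    FROM "My Subject Area"
    WHERE MONTH("Sales"."Order Date") = MONTH(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE))
    UNION ALL
    SELECT 'Current Year-to-Date', "Sales"."Region", "Sales"."Revenue"
    FROM "My Subject Area"
    WHERE YEAR("Sales"."Order Date") = YEAR(CURRENT_DATE)
    ```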
