Aggregation of rate calculations in Essbase ASO (PBCS) cubes

Below is the question (an ASO limitation) for which we tried to find a solution or workaround, on PBCS (Cloud).

Details of the application:


Application: an ASO-type application in Oracle Planning 11.1.2.3 Cloud (PBCS)


Dimensions: 8 dimensions in total. Account is a dynamic hierarchy; the remaining 7 dimensions are stored hierarchies. Only 2 dimensions have about 5,000 members; the others are relatively small, flat hierarchies.

Description of the requirement: We have many calculations in the outline that use amount = units * rate logic. The requirement is that this calculation logic should apply only at level-0 intersections, and the resulting data must then roll up to the respective parents across all dimensions. But when we apply this member-formula logic in ASO, the calculation (i.e., amount = units * rate) is applied at all levels of the remaining dimensions, not only at the leaf level. Note that the rates themselves are numbers derived using MDX formulas.

Some of the options explored so far:
Option 1: This is expected behavior in ASO, as all stored hierarchies are aggregated first and the dynamic hierarchies are calculated afterwards. So we tried changing the formula of each calculated member to explicitly summarize the data at parent levels, using the algorithm shown below.

IF (leaf-level combination)
    amount = units * rate
ELSE /* for all parent levels */
    use the Sum function to add up the amounts across the children of the current member of dimension1, dimension2, and so on
END
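
A minimal MDX sketch of that pattern, assuming just two stored dimensions with placeholder names [Dim1] and [Dim2] (the real outline has seven stored dimensions, which compounds the problem):

/* hypothetical formula on the calculated member [Amount] */
CASE
    WHEN IsLeaf([Dim1].CurrentMember) AND IsLeaf([Dim2].CurrentMember)
    THEN [Units] * [Rate]
    ELSE /* both non-leaf; a real formula also needs a branch for each
            mixed leaf/non-leaf combination, and this recursive fan-out
            across several dimensions is what makes retrievals freeze */
        Sum(CrossJoin([Dim1].CurrentMember.Children,
                      [Dim2].CurrentMember.Children),
            [Amount])
END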

Result: Retrieval works up through the parents of one dimension. When upper-level members are selected in 2 or more dimensions, the retrieval freezes.

Option 2: Change the hierarchy type of all the dimensions to "Dynamic" so that they are calculated after Account (i.e., after amount = units * rate runs at level-0 intersections).

Result: Same as Option 1. Although the aggregation works through one or two dimensions, it freezes when the upper-level members come from many dimensions.

Option 3: Use an ASO custom calc.
We created a custom calc fixing the POV on the level-0 members of every dimension, with the formula amount = units * rate.

Result: The calc never finishes, because the rate used is a dynamic calc member with an MDX formula (which is needed to roll rates forward from a specified period into all subsequent fiscal years).

If we could get any help on this, it would be greatly appreciated.


Thank you and best regards,

Alex Keny



Your best bet is to use an ASO allocation; it makes a ton of difference.

There are a few blog posts out there that can help you achieve this (including mine). The trick is to create a calculated member with NONEMPTYMEMBER in the formula.

It will then be a member with an MDX formula like this:

NONEMPTYMEMBER Units, Rates

Units * Rates

Now do a data copy (allocation) from this calculated member to a stored member; the stored hierarchies will then aggregate the copied level-0 values.

http://www.orahyplabs.com/2015/02/block-creation-in-ASO.html

http://camerons-blog-for-Essbase-hackers.blogspot.com/2014/08/calculation-Manager-BSO-planning-and.html

Regards

Celvin Kattookaran

P.S. I found that NONEMPTYTUPLE does not work, so I used NONEMPTYMEMBER instead.


Similar Questions

  • Calculating the image acquisition frame rate when using Format 7

    So I finally got my Basler FireWire (IEEE 1394b) camera to capture images at its max rate (120 fps).

    But to do this, I need to use "Format 7"... which is a bit confusing to me.  When I use Format 7, I can't specify the frame rate (frames per second).  Apparently, you only specify the image parameters (length, width, color/mono), the packet size, and a few other things.

    So how can I calculate the frame rate?  I need to know exactly how much time elapses between each image, and it must be constant.  I can't have a varying frame rate.

    NI told me I can calculate the frame rate using this equation:

    That comes from this article.

    But I am skeptical because the article also says:

    Please note that the time to transfer an image is slightly faster than the time it takes to acquire an image.

    Can people out there clarify this for me?  Is the frame rate constant?  And can I actually calculate it when using Format 7?

    The rate will be constant.  It won't change from image to image.  The difficult part is finding out what it is.

    The Basler camera manual will tell you how to calculate the frame rate.  There are three different calculations, and the one that gives you the slowest rate is the one you use.  These calculations are accurate enough, I think.

    Another possibility is to measure the frame rate, but that requires several seconds (or minutes) for a precise measurement.  The simplest method is to start a continuous acquisition, save a frame number and the time at the beginning, wait several seconds (or minutes), and save the time of another frame.  Subtract the frame numbers, subtract the times, and divide to get the frame rate.  With a wait of several minutes, this is extremely accurate.  To get the acquisition time, read an image with "Next Image", then save the frame number and read the msec timer immediately after the read.  Using a flat sequence structure is probably best to ensure that everything happens in the desired order.  If you do this in a loop, you will see the measured rate slowly converge on the true rate.  Stop it when it is accurate enough.
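
    A worked example with made-up numbers: if you record frame number 120 at a timer reading of 1,000 ms and frame number 7,320 at a reading of 61,000 ms, the measured rate is (7320 - 120) frames / (61,000 - 1,000) ms = 7,200 frames / 60 s = 120 fps.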

    In a program I wrote, I controlled the frame rate by setting the shutter exposure time.  I used the frame-rate formula derived from the Basler manual and inverted it to calculate the shutter time.  This only works if you have another way to adjust the brightness (aperture, lighting, etc.).

    Bruce

  • Automating calculations against Essbase ASO through Calculation Manager

    Hi all

    Does anyone know if I can automate a calculation stored in Calculation Manager against an Essbase ASO cube?

    I need the calculation automated directly against an Essbase cube, as it is not a Planning application.

    I tried to use CalcMgrCmdLineLauncher.cmd, but it seems unable to find the application. I'm assuming it tries to locate a Planning application instead of an Essbase one.

    Currently I have a number of calculations in Calculation Manager, but I would like to automate the process, since the cube is rebuilt and recalculated on a daily basis.

    Any ideas whether CalcMgrCmdLineLauncher.cmd should work, or whether there is another method, other than moving my calculations to MaxL?

    Thank you

    Jimmy

    What version? If it is 11.1.2.3, then for Essbase you use a separate launcher in %EPM_ORACLE_HOME%\common\calcmgr\11.1.2.0\lib (the calcmgrCmdLine.jar file); if it is an earlier version, then you will probably need to use MaxL.

    Cheers

    John

    http://John-Goodwin.blogspot.com/

  • Essbase ASO member formula (solve order)

    Hello everyone,

    I was hoping to get some opinions on how you would handle a calc question I have.  I built an ASO cube (my first) for housing loan data. A few fields I'm loading are "Eff Int Rate" and "Eff Trans Rate" (below), which are then used to calculate "Interest Inc or Exp" and "Transfer Inc or Exp".

    Interest Inc or Exp =

    CASE
        WHEN IS([Account].CurrentMember, [Loan Account]) THEN 0
        ELSE (([Avg Bal Mth] * ([Eff Int Rate] / 100)) / [Days Per Year]) * [Days]
    END

    Transfer Inc or Exp =

    CASE
        WHEN IS([Account].CurrentMember, [Loan Account]) THEN 0
        ELSE (([Avg Bal Mth] * ([Eff Trans Rate] / 100)) / [Days Per Year]) * [Days]
    END


    The challenge I have, and don't know how to manage in Essbase, is with running totals.  I have a dimension labeled "Loan Account" with about 15,000 individual account members. For all of the account members it works fine; however, when it gets to the rollup of "Loan Account" it calculates incorrectly because of the aggregation.  To test, I tried isolating different ways to make this work; my current solution (as seen in the calc above) is just to zero out the rollup for now.  However, the ultimate goal would be to have the "Loan Account" dimension always aggregate the information accurately.  Here's an example to explain in more detail:


    For example, if I were to use the "Loan Officer" attribute dimension and then drill to the lowest level of "Loan Account", it would retrieve 4 accounts and then subtotal them as described above.

    I hope someone is able to give me some ideas or an outline of an approach.  If not, I can just try to accomplish this in SQL before loading my data.

    Thank you in advance,

    Bret

    Bret, I think what you are seeing is a fundamental limitation of ASO: stored hierarchies are ALWAYS aggregated before member formulas are evaluated.  The only real workaround in Essbase is to use an ASO procedural calc; that would actually save the results to the cube as input data, which can then be rolled up by the stored dimensions.

  • Rates get aggregated on dynamic parents

    I have a list of items in a parent-child hierarchy, in which the parents are Dynamic Calc. I calculate the values of the items based on the rate of the day and their quantities. When I go to the parent items, the quantities and values are aggregated, which is logically correct, but the rates are aggregated too, which they should not be. In fact they should be averaged at the dynamic parents of the items. Is there a good solution for this?

    Hello

    Yes, there is a solution. In the time dimension, you can do this with the Time Balance Average tag (TB Average).

    In other dimensions it is more difficult. I usually put the rate in technical accounts where end users do not navigate. It is used for the calculation of the values at level 0. It gets the caret (^) operator so it does not aggregate (I am always grateful to Oracle development for giving us this functionality). Then I create another member which is visible to users. This is a formula member that calculates the average from the aggregated values.
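
    A minimal sketch of that setup, with hypothetical member names: a stored member "Rate_Input" holds the loaded rate and carries the ^ (never consolidate) operator, and a visible formula member recomputes the effective rate at every level from the values that did aggregate:

    /* level-0 value calculation using the technical rate member */
    "Value" = "Quantity" * "Rate_Input";

    /* formula on the visible "Rate" member: the weighted-average rate,
       rederived from the aggregated Value and Quantity at every level */
    "Rate" = "Value" / "Quantity";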

    Kind regards

    Philip Hulsebosch

    www.trexco.nl

  • Data acquisition timing - how to calculate an achievable analog output rate

    I want to calculate an acceptable analog output rate, one that is supported by the hardware (PCIe-6353), without the rate being coerced by the DAQmx Timing (Sample Clock) VI. The end goal is to have an analog output rate that is an integer multiple of the analog input rate, for precise frequency control, since the AO sine wave drives amplifiers that ring when AO updates occur.

    According to KnowledgeBase 27R8Q3YF, "How is the actual scan rate determined when I specify the scan rate for my d...", the rate is coerced as needed by calculating the board clock rate divided by the requested rate, rounding the result both down and up, dividing the board clock by each, and using the result that is closest to the requested rate.
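
    A worked example of that rule with a hypothetical request: asking for 2.4 MS/s from a 100 MHz timebase gives 100,000,000 / 2,400,000 = 41.67. Rounding down to a divisor of 41 yields 2.43902 MS/s, and rounding up to 42 yields 2.38095 MS/s; the latter is closer to the request, so the driver would use the divisor 42, which matches the behavior described below.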

    If "onboard clock" is selected, what is the resulting board clock?  The DAQmx Timing node property SampClk.Timebase.Rate says 100 MS/s. However, for a resulting update rate of 2.38095 MS/s, the timebase divisor property of the Timing node (SampClk.TimebaseDiv) gives a value of 42, and 42 x 2.38095 M = 99,999,990, when it should be 100 MS/s.

    How do I calculate an achievable analog output rate that is supported by the hardware? I have other boards as well, so a general method would be appreciated.

    I haven't worked out all the details yet, but I noticed a few things that may be relevant.

    The requested AI rate isn't an integer divisor of 1E8, and it is used to determine the AO rate.

    There is no check to ensure that the AO rate is an integer division of the timebase.

    It seems that you have the right idea, but the implementation is not yet there.

    Lynn

  • Fast calculation of exponential decay constants

    Hi all

    I am trying to develop a routine that quickly calculates the exponential decay constant of a given waveform.  I am using two different techniques, one based on DFT calculations and the other using successive integration (LRS).  Both usually give the correct time constant for the input waveform, even with a significant amount of noise.  The LRS solution is significantly less sensitive to noise (desirable), but much slower (the DFT computations run on the order of tens of microseconds for a 1000-point waveform, while the LRS, as coded in LabVIEW, runs at about 1.5 ms).  The LRS technique was developed by researchers at George Fox University in Oregon, and they claim they could achieve computation times on the order of 200 us for both techniques.  I have been unable to reach that time with the LRS technique (obviously), and I attempted to use a Call Library Function Node to call a DLL compiled from this code in C.  However, at best I get a factor-of-2 speedup.  In addition, computation times using the DLL seem to be additive, i.e., for four similar calculations running in the same structure with no dependence on each other, the total computation time is about 4 times that of one.  In my case this is not enough, because I am trying to calculate 8 of them at 1 kHz.

    Looking through the discussion boards, I have been unable to determine whether I should expect a performance gain from well-written C over well-written LabVIEW (most posts seem to ask why you would want to call something external at all).  In any case, I am attaching the code, so you can be the judge of whether it is well written and whether there is any performance improvement to be had.  The main function is Test Analysis Methods.vi, which generates an exponential waveform with scale, offset, and noise; the decay constant tau is then calculated using Tau.vi.  I am also attaching the C code as well as the DLL for solving the LRS equations.  Everything was coded in LabVIEW 8.6, and the C was compiled using the latest version of Microsoft Visual C++ Express Edition.  The main VI uses the FPGA module's "Tick Count" VI to measure the computation time in microseconds, so if you do not have this module you should remove that code.

    Any thoughts are appreciated.  Thanks, Matt

    Hi Matt,

    After changing the summation loop in your calculation, the routine runs as fast as (or faster than) the DFT variants... anyway, check the results to be sure it is still correct.

  • Unknown calculation type [0] in dynamic calculation. Only default agg/formula/time balance operations are handled.

    Hi all

    I came across this error last Monday. I tried all the recommendations and configurations and nothing seems to work to solve the problem.

    Here is the error message-

    [Thu Sep 24 12:04:27 2015] Local/ARPLAN/ARPLAN/Ess.Tee@MSAD_2010/9240/Error(1012703)
    Unknown calculation type [0] in dynamic calculation. Only default agg/formula/time balance operations are handled.

    [Thu Sep 24 12:04:33 2015] Local/ARPLAN/ARPLAN/Ess.Tee@MSAD_2010/9240/Warning(1080014)
    Transaction [0x2e007c(0x56042d17.0xeadd0)] aborted due to status [1012703].

    [Thu Sep 24 12:04:33 2015] Local/ARPLAN/ARPLAN/Ess.Tee@MSAD_2010/8576/Warning(1080014)
    Transaction [0x40007d(0x56042d18.0x781e0)] aborted due to status [1012703].

    [Thu Sep 24 12:04:34 2015] Local/ARPLAN/ARPLAN/Ess.Tee@MSAD_2010/736/Info(1012579)
    Total Calc Elapsed Time for [Forecast.csc] : [621.338] seconds

    The script I'm running-

    SET CACHE HIGH;
    SET MSG SUMMARY;
    SET NOTICE LOW;
    SET UPDATECALC OFF;
    SET AGGMISSG ON;
    SET CALCPARALLEL 2;
    SET CREATEBLOCKONEQ ON;
    SET LOCKBLOCK HIGH;

    FIX ("FY16", "Final", "Forecast", "11+1 Forecast", "10+2 Forecast", "9+3 Forecast", "8+4 Forecast", "7+5 Forecast", "6+6 Forecast", "5+7 Forecast", "4+8 Forecast", "3+9 Forecast", "2+10 Forecast", "1+11 Forecast")

    FIX (@IDESCENDANTS("Entity"))
    CALC DIM ("Account");
    ENDFIX

    CALC DIM ("Entity", "Currency");

    ENDFIX

    In the essbase.cfg I have already included-

    NETDELAY 24000
    NETRETRYCOUNT 4500

    ; Calculator cache settings
    CALCCACHEHIGH 50000000
    CALCCACHEDEFAULT 10000000
    CALCCACHELOW 200000

    ; Lockblock settings
    CALCLOCKBLOCKHIGH 150000
    CALCLOCKBLOCKDEFAULT 20000
    CALCLOCKBLOCKLOW 10000

    Please suggest if there is a way to fix this error. I get a similar error for other calculations as well.

    Kind regards

    EssTee

    And are you positive that nobody added a new level-0 member tagged as Dynamic Calc?

    What version are you using?

  • YTD & QTD calculation using a calc script

    Hello

    I have a dimension named "Periodicity" in my Essbase database, which has 3 members named "MTD", "QTD" & "YTD".
    I need to calculate "YTD" & "QTD", and I wrote the following script for the "YTD" calculation:

    FIX (
    @GENMBRS("VFS Planning Entity Dimension", 6),
    @GENMBRS("VFS Planning Entity Dimension", 7),
    &CurYear,
    @LEVMBRS("P&L", 0),
    "Local",
    "YTD",
    "HSP_InputValue",
    @CHILDREN("Scenario Dimension"),
    @CHILDREN("Version Dimension")
    )

    "Jan" = "MTD"->"Jan";
    "Feb" = "MTD"->@PTD("Jan":"Feb");
    "Mar" = "MTD"->@PTD("Jan":"Mar");
    "Apr" = "MTD"->@PTD("Jan":"Apr");
    "May" = "MTD"->@PTD("Jan":"May");
    "Jun" = "MTD"->@PTD("Jan":"Jun");
    "Jul" = "MTD"->@PTD("Jan":"Jul");
    "Aug" = "MTD"->@PTD("Jan":"Aug");
    "Sep" = "MTD"->@PTD("Jan":"Sep");
    "Oct" = "MTD"->@PTD("Jan":"Oct");
    "Nov" = "MTD"->@PTD("Jan":"Nov");
    "Dec" = "MTD"->@PTD("Jan":"Dec");

    ENDFIX

    However, the above script gives me the following error:

    Error 1200354: error compiling formula for [Feb] (line 22): wrong type [MEMBER] ([@PTD]) in function

    Please help me with this "YTD" & "QTD" calculation.

    Thank you and best regards,

    AK

    Yep, I missed that...
    but you can get around it by using the @SUMRANGE function.

    FIX (
    @GENMBRS("VFS Planning Entity Dimension", 6),
    @GENMBRS("VFS Planning Entity Dimension", 7),
    &CurYear,
    @LEVMBRS("P&L", 0),
    "Local",
    "HSP_InputValue",
    @CHILDREN("Scenario Dimension"),
    @CHILDREN("Version Dimension")
    )

    DATACOPY "MTD" TO "YTD";

    FIX ("YTD")
    "Feb" = @SUMRANGE("MTD", "Jan":"Feb");
    "Mar" = @SUMRANGE("MTD", "Jan":"Mar");
    "Apr" = @SUMRANGE("MTD", "Jan":"Apr");
    "May" = @SUMRANGE("MTD", "Jan":"May");
    "Jun" = @SUMRANGE("MTD", "Jan":"Jun");
    "Jul" = @SUMRANGE("MTD", "Jan":"Jul");
    "Aug" = @SUMRANGE("MTD", "Jan":"Aug");
    "Sep" = @SUMRANGE("MTD", "Jan":"Sep");
    "Oct" = @SUMRANGE("MTD", "Jan":"Oct");
    "Nov" = @SUMRANGE("MTD", "Jan":"Nov");
    "Dec" = @SUMRANGE("MTD", "Jan":"Dec");
    ENDFIX
    ENDFIX

    Alternatively, you can write it as follows:

    FIX (
    @GENMBRS("VFS Planning Entity Dimension", 6),
    @GENMBRS("VFS Planning Entity Dimension", 7),
    &CurYear,
    @LEVMBRS("P&L", 0),
    "Local",
    "HSP_InputValue",
    @CHILDREN("Scenario Dimension"),
    @CHILDREN("Version Dimension")
    )

    DATACOPY "MTD" TO "YTD";

    FIX ("YTD")
    "Feb" = "Jan" + "Feb"->"MTD";
    "Mar" = "Feb" + "Mar"->"MTD";
    "Apr" = "Mar" + "Apr"->"MTD";
    "May" = "Apr" + "May"->"MTD";
    "Jun" = "May" + "Jun"->"MTD";
    "Jul" = "Jun" + "Jul"->"MTD";
    "Aug" = "Jul" + "Aug"->"MTD";
    "Sep" = "Aug" + "Sep"->"MTD";
    "Oct" = "Sep" + "Oct"->"MTD";
    "Nov" = "Oct" + "Nov"->"MTD";
    "Dec" = "Nov" + "Dec"->"MTD";
    ENDFIX
    ENDFIX

    -Krish


  • Decimal places missing in pivot table using the "Average" aggregation rule

    Hi, I have a pivot table with three measures. The first two are summed and displayed as values, and the last uses the Average aggregation rule to create an average of the first two. So, for example, I have a monthly count of closed cases, then a count of the number of days it took to close those cases, and then I want the average number of days to close. The problem is that the average days to close always appears rounded down, even though I'm showing two decimal places.

    Closed Cases = 103
    Days to Close = 1599
    Average Days to Close = 15.52 (calculated), but the pivot displays 15.0

    Any ideas on how to solve this problem would be appreciated.

    Thank you
    Larry

    OBIEE is actually handling the calculation correctly. Mathematically, if you divide two integers you cannot get a fraction. To work around this, you must first convert to doubles: CAST("Days to Close" AS DOUBLE) / CAST("Closed Cases" AS DOUBLE)
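
    With the numbers from the question: 1599 / 103 evaluates to 15 in integer arithmetic (the fractional part is truncated), whereas after the casts 1599.0 / 103.0 = 15.52, which the pivot can then display to two decimal places.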

    Regards

    John
    http://www.obiee101.blogspot.com/

  • Automatic calculation of budget cost

    Hello

    In Oracle Projects approved cost budgets, can the cost be calculated from the hours and the hourly resource rates, based on the employee's job, rather than the preparer working it out manually when preparing the budget?

    Is this possible in Oracle Projects? If so, can anyone help me with the process?

    Many thanks in advance...

    Hello

    You may want to review the Project Management User Guide; there is a chapter on budgeting and forecasting.
    That text says:
    You can set up Oracle Projects budgets and forecasts to calculate the raw cost, burdened cost, and revenue for each line in your budget or forecast. Oracle Projects calculates the amounts based on the quantities and amounts that you enter and on the rates you specify for costing and revenue in your budget, or on the planning rates from the plan's planning options...
    Note that the extended functionality described there applies if you enable financial plans and use the HTML pages.
    If you use the budgeting forms, you will enter the quantity, but you need the Budget Calculation client extension to calculate the raw and burdened cost automatically. You have to develop the code for these calculations in the extension.

    Dina

  • Help with a model and aggregation

    Hi Experts,

    I am using a model to calculate the values of individual members (at the leaf level) of a dimension. The expression is like this:
    A = B - C, where A is at the leaf level and B and C are at the second level from the top. The values are precalculated for this dimension, so B and C are available after the load itself. After running the model, the value comes out correctly for A. The cube is precalculated for all dimensions. All dimensions are set to use the default hierarchies except time.

    Now I want to roll this value up to all the ancestors of A, and for that I am running the following command afterwards:

    -> AGGREGATE PRTTOPVAR USING <default aggmap>

    When I do this, the data calculated for A also disappears and shows 0.

    My questions are:

    1. What should I do to aggregate the calculated data to all levels?
    2. Why am I losing the calculated data after executing the default aggmap?
    3. As A, B and C belong to the same dimension, I am not using the other dimensions in the DIMENSION clause of the model. Is this right? If yes, then how will the values be calculated for the different combinations of the other dimensions?

    Any help would be much appreciated.

    DB: 10.2.0.4
    MN: 10.2.0.3A
    OLAP applied patch.

    Thank you
    Brijesh


    Yes, you are right: without NASKIP2 set to "yes", any NA value in your calc will make the result NA, even if the other cells in the aggregation contain data.

  • Can I call from Sri Lanka to the US, and what are the rates?

    I am taking my computer with me, so can I call the USA from Sri Lanka, and what are the rates?

    Beachbum wrote:

    I am taking my computer with me, so can I call the USA from Sri Lanka, and what are the rates?

    With Skype, your location is unimportant. Calls to (or within) the United States, wherever you are, cost 2.3 cents/minute or part thereof.

    You can easily find the information yourself with a little research here:

    http://www.Skype.com/intl/en-us/prices/PAYG-rates/


  • Default 15 Hz execution rate of the Communication Send Loop

    Hi NIVS experts!

    From what I understand of the VS engine, the default execution rate of the Communication Send Loop is set to 15 Hz, and I have not found a way to change this default in VS.  The Communication Send Loop transmits values to the VeriStand Gateway layer, from which the Workspace gets the channel tables.  The data flow I have in mind looks like this:

    Communication Send Loop (15 Hz) --channel data--> VeriStand Gateway --channel data--> Workspace (data logging control (100 Hz))

    My question is: if the execution rate of the Communication Send Loop is only 15 Hz, does it make sense for the data logging control on the Workspace to record data at a much higher rate (say, a 100 Hz target rate)?   Or how can I change the loop's default execution rate?

    Thank you

    Pen

    I have used the VeriStand "recording" and "streaming" APIs, and I can confirm that their rate is equal to that of the real-time data loop. The 15 Hz data source is only used to feed the slower "read channel" API.

    I assume that the data logging control uses the "recording" path, so it is not subject to the 15 Hz caching.

    ++

  • How to slow down a model's execution rate

    Hello

    I am trying to slow down my Simulink (DLL) model running in VeriStand. The model was compiled at a sample rate of 100 Hz, and I have been able to slow the model's execution in VeriStand by reducing the rate of the Primary Control Loop, although it won't let me go any lower than 10 Hz. My goal is to run the model 100x slower than real time. Is this possible? If so, how could I do that?

    Thank you

    Hello claw,

    A basic rule for simulation models in VeriStand 2009 and 2010 is to run the PCL (Primary Control Loop) as slowly as possible.  Considering only the models, that is the lowest rate divisible by all of the model rates.  (Other parts of the system may require the PCL to run faster.)

    For example, say model A was compiled at 25 Hz, model B at 75 Hz, and model C at 100 Hz.  The lowest rate that is divisible by all the model rates is 300 Hz, so that is what you run the PCL at.  The next step, and maybe the answer to your question, is to set the integer "Decimation" for each model in System Explorer.  In System Explorer, decimation is always relative to the PCL: a decimation of 1 means "at the PCL rate", and a decimation of 2 means "every other iteration of the PCL", i.e. half the PCL rate.  So set the decimation to 12 for model A (300/12 = 25 Hz), 4 for model B (300/4 = 75 Hz), and 3 for model C (300/3 = 100 Hz).

    The reason for the basic rule is that in VeriStand 2009 and 2010, decimated models are required to run one time step within one iteration of the PCL in order to be "on time".  In the case above, models A, B, and C each have 3.33 ms to run one time step (in parallel mode).

    Steve K
