Using WLST to test an existing data source

Hi Forum,

I wonder if it's possible to test an existing data source connection (WL 10.3) using WLST. Ideally, I would like to write something to check connectivity end-to-end. I would prefer this to entering the DS details into a test external to WebLogic, and it would be even better if the test exercised the same functionality the application itself uses. Thanks for any pointers.

Kind regards
Baños.

Hello..

There is a testPool() method on the JDBCDataSourceRuntimeMBean, which lives in the serverRuntime tree...

For example:

serverRuntime()

cd('JDBCServiceRuntime')

cd to your target server

cd('JDBCDataSourceRuntimeMBeans')

then cd into your data source; the testPool() method is available there.
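
Putting that together, a minimal end-to-end check could look like the sketch below. The URL, credentials, server name ('ManagedServer1') and data source name ('MyDS') are placeholders for your environment; testPool() returns None (null) on success and an error message string on failure.

    # WLST sketch -- all names below are placeholders, adjust to your domain.
    connect('weblogic', 'password', 't3://localhost:7001')  # the server hosting the data source

    serverRuntime()
    # path: JDBCServiceRuntime/<server name>/JDBCDataSourceRuntimeMBeans/<data source name>
    cd('JDBCServiceRuntime/ManagedServer1/JDBCDataSourceRuntimeMBeans/MyDS')

    result = cmo.testPool()  # reserves a pooled connection and runs the test query
    if result is None:
        print 'MyDS: connection test passed'
    else:
        print 'MyDS: connection test FAILED: ' + result

    disconnect()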

Tags: Fusion Middleware

Similar Questions

  • Stupid questions on the use of the mds-GOSA/bip_datasource data sources

    Dear gurus,
    I'm confused about the purpose of some of these data sources.
    And what is their relationship with the RPD data source / connection pool settings?

    As far as I can see in the performance tuning guide, I can't find who the caller/user of these data sources is, and the guide never mentions tuning the RPD data sources, which I think is more important...

    So I think I must have misunderstood something; I need your help!

    Michael

    Hi Michael,

    The data sources mds-GOSA/bip_datasource etc. are for WebLogic Server's access to the Fusion Middleware metadata.

    If you remember, we used the RCU to create a repository of configuration settings, Fusion Middleware host/product information/versions etc. in a separate database; these data sources are created in WebLogic so that the different products we install, in this case OBIEE/RTD/BI Publisher etc., can interact with each other transparently through WebLogic.

    Since it is the common platform (application server) on which all the products are deployed, we need to tune it as well for optimal performance from the infrastructure point of view.

    The connection pool of the physical layer in our .rpd, on the other hand, is all about access to the data, and the tuning guide says nothing in this regard because it is completely dependent on the type of data source, the network interfaces involved, the number of concurrent users, total dashboards, reports etc., all of which are demand-dependent.

    The tuning guide is only about tuning the Middleware Infrastructure plus optimal settings for the middleware itself.

    Hope that was clear.

    Thank you
    Diakité

  • New table from existing data source

    Hi all

    I'm using Essbase Studio to generate the cube (Hyperion 11.1.2)... I imported a few tables into a data source. Is it possible to import a new table into the data source later?


    Kind regards
    Lolita

    Yes, it has been possible in all versions to add a table/view to a data source: just right-click on the data source and select incremental update. In 11.1.2 you can also remove sources, or update existing data sources with new and changed columns.

  • Using a nested table as a data source in a cursor query?

    Hi all

    Can you please let me know whether we can use a locally created nested table as a data source in a cursor query?

    I'm trying to do this using the following logic, but compilation fails with the error: "PL/SQL: ORA-00942: table or view does not exist".

    =================================================================================================

    DECLARE

        TYPE n_table_outage_dates IS TABLE OF DATE;
        nt_out n_table_outage_dates := n_table_outage_dates();

        l_outage_date DATE;

        CURSOR l_temp_cursor IS
            SELECT DISTINCT aps.vendor_id
              FROM ap_suppliers aps
             WHERE
             (
                 TRUNC(aps.bus_class_last_certified_date) + TO_NUMBER(fnd_profile.value('POS_BUS_CLASS_RECERT_PERIOD'))
                     = TRUNC(l_outage_date) + TO_NUMBER(fnd_profile.value('POS_BUS_CLASS_RECERT_REMIND_DAYS'))
                 AND l_outage_date IN (SELECT * FROM nt_out)
             );

    BEGIN

        l_outage_date := SYSDATE;

        FOR i IN 1..7 LOOP

            nt_out.EXTEND;
            nt_out(i) := l_outage_date;
            l_outage_date := l_outage_date + 1;

        END LOOP;

        OPEN l_temp_cursor;

        -- other logic

        CLOSE l_temp_cursor;

    END;

    =================================================================================================

    The alternative would be passing the value of l_outage_date to the cursor in a loop and then storing the multiple outputs of the query in a separate table/temporary table, but that would lead to duplicates being filled in, which would need extra work to remove.

    Please let me know how I can use the user-created nested table in the cursor query in the above code. Is the nested table declaration going wrong anywhere?

    Thank you
    Sylvain

    You currently have

    TYPE n_table_outage_dates IS TABLE OF DATE;
    nt_out n_table_outage_dates := n_table_outage_dates();
    

    That declares nt_out as a PL/SQL type... the SQL engine (the place that runs queries) cannot understand or interpret it, because a type declared in PL/SQL is not visible in any way to SQL.

    So you must either create the type in SQL as I showed earlier, OR (now that we know your version) get grants on the object posted by Solomon.

    AND THEN, you need to change

    (SELECT * FROM nt_out );
    

    TO

    (SELECT * FROM TABLE(CAST(nt_out AS DEPENDS_ON_WHICH_OBJECT_YOU_CHOOSE_TO_USE)) );
    

    As I mentioned earlier in my example.
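
    For reference, here is a minimal self-contained sketch of that SQL-level-type approach (the type name outage_date_tab and the output loop are illustrative, not from this thread):

        -- Create the collection type at SQL level so the SQL engine can see it
        CREATE OR REPLACE TYPE outage_date_tab AS TABLE OF DATE;
        /

        DECLARE
            nt_out        outage_date_tab := outage_date_tab();
            l_outage_date DATE := TRUNC(SYSDATE);
        BEGIN
            -- fill the collection with 7 consecutive dates
            FOR i IN 1..7 LOOP
                nt_out.EXTEND;
                nt_out(i) := l_outage_date + (i - 1);
            END LOOP;

            -- TABLE(CAST(...)) lets the cursor query read the collection
            FOR r IN (SELECT COLUMN_VALUE AS outage_date
                        FROM TABLE(CAST(nt_out AS outage_date_tab))) LOOP
                DBMS_OUTPUT.PUT_LINE(TO_CHAR(r.outage_date, 'DD-MON-YYYY'));
            END LOOP;
        END;
        /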

    Edited by: Tubby on December 23, 2009 11:23

  • Modify an existing report's data source connection in Web Analysis

    Hello

    I developed some reports in Web Analysis 9.3.1, and now I need to point these reports to a new cube with the same layout. Can someone let me know how to change the data source? There are very many reports, so it would take a long time to re-create them.

    Please suggest any good approach

    xat-

    Another way is to export the files (using Web Analysis Studio) and then import them into a different folder - you get the ability to map existing data source connections or create new ones.

  • Data source configuration error

    Hello

    I couldn't configure the Planning data source. Let me explain the whole scenario.

    Previously, I created a data source (SID: TEST, port: 1571) and created an application, and everything worked well. Then the client changed the SID (TEST1) and port number (1551) for the schema. I exported the schema and imported it under the new SID. When I edit the connection for the existing data source, changing the SID and port number, it gives the error "Data source configuration failed." Here are the configtool log entries.

    (November 26, 2009 11:42:05), com.hyperion.planning.HspDBConfigurator, DEBUG, Save failed: null.
    (November 26, 2009 11:42:05), com.hyperion.cis.config.wizard.RunAllTasksWizardAction, ERROR, Error:
    com.hyperion.cis.config.ProcessingException
    at com.hyperion.planning.HspDBConfigurator.configure(HspDBConfigurator.java:222)
    at com.hyperion.cis.config.wizard.RunAllTasksWizardAction.executeDBConfigTask(RunAllTasksWizardAction.java:282)
    at com.hyperion.cis.config.wizard.RunAllTasksWizardAction.execute(RunAllTasksWizardAction.java:151)
    at com.installshield.wizard.RunnableWizardBeanContext.run(Unknown Source)

    I get the same error for the other products too, i.e. EAS, Planning and Reporting. But Shared Services configured successfully and works well.

    Help, please...

    Thank you
    Naveen Suram

    Edited by: Naveen Suram on November 26, 2009 15:21

    When configuring, I chose the option "Reuse existing database" instead of "Drop and create new database", because I had already imported the tables into the new schema.

    If everything is pointing to the correct location, then you don't need to configure it again; have you tried starting the Planning service and making sure it connects to the correct repository?

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • How to add a LUN with an existing data store

    I have a LUN that I replicated from a SAN in one data center to a SAN in another data center. I want to map this LUN to an ESX 4 cluster and use the existing data store on the LUN to recover the virtual machines it contains. What is the best way to do this? Will the hosts see the existing data store when I rescan the HBAs, or is there a trick to adding an existing data store to a cluster?

    It's a separate LUN on a different SAN mapped to another set of hosts. Once I rescan for the new LUNs and data stores, will the data store on this LUN show up in the list of data stores, so that I can then browse it and register the virtual machines on it? Or do I first have to use Add Storage and select the existing data store?

    Given that the hosts did not see the original LUN before, the data store should appear right after the rescan.

    André

  • Cannot connect to a data source in OBIEE 11g

    Hi friends,

    I am not able to import metadata into a new repository in OBIEE 11g...

    It does not connect to the data source to import the metadata.

    It throws an error saying the connection failed...

    I restarted the services and also checked ODBC connectivity, which shows the connection is
    successful...

    Also, I entered the details in tnsnames.ora...

    Here are the details:

    TEST =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = 172.16.1.110)
          (PORT = 1522)
        )
        (CONNECT_DATA =
          (SID = TEST)
        )
      )

    The data source name I was given is ORCLBI, for reference...

    So why does it not connect to the data source? Please help me, friends, with importing the tables into my repository once it connects to the data source...

    Thank you
    Harry...

    Hello,
    Go through Vincent's response in this thread, it may help you: Re: unable to connect to the database of the OBIEE 11g improved SPR

    Kind regards
    Srikanth

  • BI Publisher & dynamic data source

    Hello

    We are trying to have one OBIEE configuration serve several pre-production application environments with mostly identical database schemas.

    The BI Server has a dynamic connection pool whose data source is set from a session variable ([see here | http://hekatonkheires.blogspot.com/2009/10/obiee-101341-dynamic-data-source.html]). It works very well.

    Now a BI dashboard has a BI Publisher report on it against this dynamic data source. This amounts to Presentation Services calling out to BI Publisher, and Publisher calling back into Presentation Services. The data source session variable is lost in the process.

    Is there a way of sharing sessions between Presentation Services and Publisher, or otherwise passing session variables back and forth, for example as URL parameters?

    Thanks in advance,

    Dirk.

    Hi,

    I have never tried it, but here is what I think.

    The integration of parameters between BI Publisher and OBIEE is done via the dashboard prompt.

    If you use an Answers request as the data source in BI Publisher, you should normally have no problem.

    If you use a SQL query as the data source, you must:
    * add a dashboard prompt with your session variable as its default (for the dynamic connection)
    * use it in your SQL as a bind variable for a schema or a database link.

    Success
    Nico

  • "Database Config" vs "data source Configuration.

    In the Configuration utility for 9.3.1 What are the differences between "Database Config" and "data source Configuration.

    After running 'Config database' itl has created about 10 HSPsys_ * tables in the database of PLANSYS. It seems to me that this is where the planning and data source information instance are stored

    "Data source configuration. created about 30 HSP_ * tables in PLAN2.
    It seems that is where the application tables are stored, am I right?
    When you run "Configure data source", I shouldn't use PLANSYS database to store my tables for the application I'm good? Otherwise I would have problems with future migration, am I right?

    Hello

    "Database Config" sets up tables for the Planning system to use; this database schema should be kept separate from the Planning applications.
    "Data Source Configuration" just configures the details of your connection to the Planning application, e.g. connection details for the repository and Essbase. It stores this information in your Planning system database.

    Once you create a Planning application, it uses the information from the data source to create the Planning application and all its tables in the database.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Bursting with multiple data sources

    Hi all

    I work in a retail environment and we build reports for stores. There are several reports, and the stores all receive the same reports. We use BI Publisher Enterprise with the bursting functionality and it works fine.

    The problem is that there are too many reports, and we would like to consolidate all of the reports for the same store into the same PDF. We used the multiple data source feature and managed to concatenate all the reports.

    Using this model and the new report, we cannot make the bursting functionality work correctly so that each burst report contains all the required report components.

    For example, I have 2 queries, A and B, each producing reports for 3 stores: 2, 4, and 5. Without bursting, the result is like this:
    Query A - store 2
    Query A - store 4
    Query A - store 5
    Query B - store 2
    Query B - store 4
    Query B - store 5

    With bursting into one file per store number, I get the following:
    File 1 - store 2 - query A
    File 2 - store 4 - query A
    File 3 - store 5 - query A
    + store 2 - query B
    + store 4 - query B
    + store 5 - query B

    I would like to have the following result:
    File 1 - store 2 - query A
    + store 2 - query B
    File 2 - store 4 - query A
    + store 4 - query B
    File 3 - store 5 - query A
    + store 5 - query B

    The main question is: is this possible using BI Publisher Enterprise? We are using version 10.1.3.3.2.

    If it is, can you provide me with help on how to configure the queries and/or the model to accomplish this?

    I created an SR, and Oracle Support did not have an answer and suggested someone on the forum could help.

    Thanks in advance,
    Minh

    I would like to have the following result:
    File 1 - store 2 - query A + store 2 - query B
    File 2 - store 4 - query A + store 4 - query B
    File 3 - store 5 - query A + store 5 - query B

    For the bursting to give you
    File 1 - store 2 - query A + store 2 - query B

    the tag in the XML file that you burst on must be common to those rows.

    Since the data comes from different queries, it won't sit under a single tag.
    You can't burst using the concatenated data source.

    But you can do it using a data template: link the queries so that you get the data for each file from a single query structure.

    select distinct store_name from stores               -- master query

    select * from query1 where store_name = :store_name  -- 1st detail query

    select * from query2 where store_name = :store_name  -- 2nd detail query

    Define the data structure as you wish; the XML will then contain one group per store, something like:

    - group for store 2
    - group for store 3
    - group for store 4
    - group for store 5

    Now you can burst it at the store level.

  • Table 'dbc_user' in item-descriptor: 'user' does not exist in a table space accessible by the data source

    In my development environment, when I run the production server, I get the following error in server.log. I have configured the data source for production. I am using ATG 10.1. Please help me solve this problem. Here are the logs:

    22:46:45,765 INFO [ProfileAdapterRepository] SQL repository startup complete
    22:46:45,988 INFO [AdminSqlRepository] SQL repository startup complete
    22:46:45,990 INFO [AdminAccountInitializer] Initializing account database /atg/dynamo/security/AdminAccountManager from /atg/dynamo/security/SimpleXmlUserAuthority

    22:46:46,450 ERROR [ProfileAdapterRepository] Table 'dbc_user' in item-descriptor: 'user' does not exist in a table space accessible by the data source. DatabaseMetaData.getColumns returns no columns. Catalog=null Schema=null
    22:46:46,466 WARN [ProfileAdapterRepository] atg.adapter.gsa.GSARepository->loadColumnInfo: SQLException in Table.loadColumnInfo. Try again.
    com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'atgcoredb.dbc_user' doesn't exist
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:407)
    at com.mysql.jdbc.Util.getInstance(Util.java:382)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3603)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3535)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1989)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2150)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2620)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2570)
    at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1476)
    at com.mysql.jdbc.DatabaseMetaData$7.forEach(DatabaseMetaData.java:3966)
    at com.mysql.jdbc.IterateBlock.doForAll(IterateBlock.java:51)
    at com.mysql.jdbc.DatabaseMetaData.getPrimaryKeys(DatabaseMetaData.java:3950)
    at atg.adapter.gsa.Table.loadColumnInfo(Table.java:1926)
    at atg.adapter.gsa.GSARepository.loadColumnInfos(GSARepository.java:7534)
    at atg.adapter.gsa.GSARepository$1.run(GSARepository.java:5431)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
    22:46:46,467 WARN [ProfileAdapterRepository] unknown JDBC type for property: businessAddress, item type: user. Make sure the column names in the database and the template match. The business_addr column is not found in the set of columns returned by the database: {} for this table.

    22:46:46,470 ERROR [ProfileAdapterRepository] Table 'dbc_buyer_billing' in item-descriptor: 'user' does not exist in a table space accessible by the data source. DatabaseMetaData.getColumns returns no columns. Catalog=null Schema=null
    22:46:46,470 WARN [ProfileAdapterRepository] atg.adapter.gsa.GSARepository->loadColumnInfo: SQLException in Table.loadColumnInfo. Try again.
    com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'atgcoredb.dbc_buyer_billing' doesn't exist
    (same stack trace as above)
    22:46:46,471 WARN [ProfileAdapterRepository] unknown JDBC type for property: myBillingAddrs, item type: user. Make sure the column names in the database and the template match. The addr_id column is not found in the set of columns returned by the database: {} for this table.

    22:46:46,611 ERROR [ProfileAdapterRepository] Table 'dbc_org_billing' in item-descriptor: 'organization' does not exist in a table space accessible by the data source. DatabaseMetaData.getColumns returns no columns. Catalog=null Schema=null
    22:46:46,611 WARN [ProfileAdapterRepository] atg.adapter.gsa.GSARepository->loadColumnInfo: SQLException in Table.loadColumnInfo. Try again.
    com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'atgcoredb.dbc_org_billing' doesn't exist
    (same stack trace as above)
    22:46:46,612 WARN [ProfileAdapterRepository] unknown JDBC type for property: myBillingAddrs, item type: organization. Make sure the column names in the database and the template match. The addr_id column is not found in the set of columns returned by the database: {} for this table.

    Are you running the B2BCommerce module? If so, then you must run the $dynamo_home/.../B2BCommerce/sql/db_components/mysql/b2b_user_ddl.sql DDL, because you are missing the B2BCommerce tables. If not, you will have to rerun without the B2BCommerce module.

    Thank you

    Joe

  • Using the Lr trial, I managed to copy photos into the catalogue and develop them. Now I want to use the developed/edited photo as the master and delete the original, i.e. use the "modified" version as the new source. What is the process for doing this?

    Using the Lr trial, I managed to copy photos into the catalogue and develop them. Now I want to use the developed/edited photo as the master and delete the original, i.e. use the "modified" version as the new source. What is the process for doing this?

    That would be to manually remove the master files. But are you sure you want to do this, especially if these files are raw files? If you do, you throw away a lot of precious image data. Remember that the master files are completely intact and in their original state at all times; the catalog stores all the changes that you make. But if you really want to delete your master files, you just need to delete them individually or as a group.

  • How to use a JNDI datasource in WebLogic instead of adding a DB data source

    Hi all

    version: 11.1.1.4

    I'm trying to understand how, in my ADF applications, I can use a JNDI datasource that already exists on our WebLogic servers instead of having to bury the DB source in my ADF applications. As in SOA, I would refer to the DB directly at design time so that I can pull in entities and build view objects, but when I deploy, I want it to reference the JNDI datasource on the WebLogic server.

    Is this possible? If so, I don't know how to configure it the way I would a DB adapter in SOA.

    As always, appreciate the info.

    Thank you

    S

    If you use ADF in the model layer (application modules) you can configure them to use JNDI data sources. Just right-click on the application module and select "Configurations...". In the next dialog box you see all currently available configurations (at least one named xxxxxlocal and one named xxxxxshared). Select the local one and press Edit. This opens the DB connection dialog, where you can change the connection type from JDBC URL to JDBC data source (JNDI). Save your work, and from now on the application uses the JNDI name.

    Timo

  • After 5 months of use... "You must allow overwriting of existing data to start using this device"

    I currently have a very disturbing system status:

    "You must allow overwriting of the existing data to begin using this device."

    "Are you sure you want to overwrite existing data?"

    The unit is an 8TB ix4-300d set up in RAID (I forgot the number). I am still able to access it and read/write over the network, and as far as I know no other messages indicate that any of the drives have failed.

    How can I resolve this without risking the 2 TB of data? What happens if I click OK?

    Hi whitby,

    The first thing I recommend you do is back up your data while it is still available and you can access and read/write to the device. Your unit is probably in RAID 5, because that is the default for the device. With RAID 5 you can lose a single drive and still be able to access your data, but if another hard drive were to fail, your data would be lost and you would need data recovery.

    Once your data is backed up in a second location, you should be fine to overwrite the existing data and rebuild the RAID. The real problem is that a drive has most likely failed and will need to be replaced soon. If you are covered by the warranty, I would recommend contacting technical support to send in a log dump so they can get more information from the device and see whether the hard drive needs to be replaced. The warranty covers replacement of the hard disk. Otherwise, you will need to get a replacement hard disk of the same manufacturer, speed, and size as the original disks.
