Best practices for an environment metadata table in a DW?

Hi gurus,


In our data warehouse we have two schemas: 1. STAGING and 2. DWH (the reporting data warehouse schema). In staging we have about 300 source tables. In the DWH schema, we create only the tables required for reporting. Some tables in the staging schema have also been created in the DWH schema, but with different table and column names. The naming convention for these tables and columns in the DWH schema is based on business names.

In order to keep track of these tables, we created a metadata table in the DWH schema, for example:

Staging              DWH_schema
Table_1              Table_A
Table_2              Table_B
Table_3              Table_C
Table_4              Table_D
My question is how to handle the column names for each of these tables. Columns stage_1, stage_2, and stage_3 have been renamed in the DWH schema, where they are part of Table_A, Table_B, and Table_C.

As mentioned previously, we have approximately 300 tables in staging and maybe about 200 tables in the DWH schema. Many column names have been renamed between the staging tables and the DWH schema. Some tables have 200 columns.

So my concern is: how do we handle the column names in the metadata table? Should we keep only the table names in the metadata table, and not the column names?


Any ideas will be greatly appreciated.

Thank you!

Dear OP,

I guess I could do that, but prefixing or suffixing is not useful from a business point of view. Our business users do not necessarily know the names of the source tables or fields behind the data they care about.

Allow me to clarify: you do not need to prefix the tables and columns in the DW schema, only the staging ones. That way you can very easily correlate staging tables with DW tables.

Start with something like...

MDATA_TAB (mtab_id, table_name, tab_type, tab_desc)  -- tab_type = STG/DW
MDATA_TAB_COL (id, mtab_id, column_name, col_datatype, col_desc)
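A fleshed-out version of that sketch, with a self-referencing SRC_COL_ID column (my suggestion, not something from the thread) so the column metadata also records which staging column each DWH column was renamed from:

```sql
-- One row per table, staging or DWH.
CREATE TABLE mdata_tab (
    mtab_id     NUMBER        PRIMARY KEY,
    table_name  VARCHAR2(30)  NOT NULL,
    tab_type    VARCHAR2(3)   CHECK (tab_type IN ('STG', 'DW')),
    tab_desc    VARCHAR2(200)
);

-- One row per column; SRC_COL_ID points at the staging column
-- that a DWH column was renamed from (NULL for staging rows).
CREATE TABLE mdata_tab_col (
    id           NUMBER        PRIMARY KEY,
    mtab_id      NUMBER        NOT NULL REFERENCES mdata_tab,
    column_name  VARCHAR2(30)  NOT NULL,
    col_datatype VARCHAR2(30),
    col_desc     VARCHAR2(200),
    src_col_id   NUMBER        REFERENCES mdata_tab_col
);

-- Staging lineage of every column in TABLE_A.
SELECT d.column_name AS dwh_column,
       s.column_name AS stage_column
FROM   mdata_tab     t
JOIN   mdata_tab_col d ON d.mtab_id = t.mtab_id
LEFT JOIN mdata_tab_col s ON s.id = d.src_col_id
WHERE  t.table_name = 'TABLE_A';
```

With this in place, a 200-column table is just 200 rows in MDATA_TAB_COL, so storing the column names is not a volume problem.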

Published by: pgoel on March 11, 2011 21:11

Tags: Database

Similar Questions

  • Best practices for migrating data tables - please comment.

    I have 5 new tables populated with data that must be promoted through a change process to a production environment.
    Instead of the DBAs just using a data migration tool, they insist that I record and provide scripts, in the proper order, for each commit needed both to build the tables and to insert the data from scratch.

    I'm not at all used to such an environment, and it looks much riskier to me to try to reconstruct the objects from scratch when I already have a perfect model, tested and ready.

    They require a lot of documentation, where each step is recorded in a document and used for the deployment.
    I think their purpose is that they do not want to rely on backups, but would rather rely on a document that specifies each step needed to recreate everything.

    Please comment on your view of this practice. Thank you!

    I'm not a DBA. I can't even hold a candle to forum regulars like Srini/Justin/sb.
    Having said that, I'm going to give a slightly different opinion.

    It is great to have (and I paraphrase from various posts):
    deployment documents,
    sign-off from groups,
    recovery steps,
    source code control,
    repositories,
    "The production environment is sacred. All risks must be reduced to a minimum at any price. In my opinion a DBA should NEVER move anything from a development environment directly into a production environment. NEVER."
    etc. etc.

    But we cannot generalize that every production system must have these. Each customer is different; each has a different level of fault tolerance.

    You wouldn't expect a design change to a cabinetmaker's product to go through rigour "as at NASA". Why should it be different for software changes?

    How much rigour you apply is a bit subjective; it depends on company policies, the company's experience with migration disasters (if any), and the corporate culture.

    The OP may be coming from a customer with lax controls, and may question whether the rigour at the new customer is worth it.

    At one client (and I don't kid), the prod password is apps/apps, after 12 years of being live! I was appalled at first. But it's a very small business. They got saddled with EBS during the heyday of the .com boom, and they use just 2 modules in EBS. If I spoke of creating (and charging for) these documents/processes, I would lose the customer.
    My point is that not all places must, or want to, have these controls. By trial and error, you arrive at what is best for the company.

    OP:
    You said that you're not used to this type of environment. I recommend that you go with the flow at first. Spend time understanding the value/use/history of these processes. Then ask whether they still seem like too much to you. Keep in mind: this is subjective. And if it comes down to your opinion v/s the existing ones, you need either authority OR some means of persuasion (money / sox (intentional typo here) / severed horse heads... whatever works!)

    Sandeep Gandhi

    Edit: Fixed a typo

    Published by: Sandeep Gandhi, independent Consultant on 25 June 2012 23:37

  • Best practices for JTables.

    Hello



    I have been programming in Java for 5 months now. I am developing an application that uses tables to present information from a database. This is my first time manipulating tables in Java. I read Sun's Swing tutorial on JTable and more information on other sites, but they are limited to table syntax and not best practices.



    So I settled on what I think is a good way to manage data in a table, but I don't know whether it is the best way. Let me walk through the general steps I'm taking:



    (1) I query the employee data from Java DB (with EclipseLink JPA) and load it into an ArrayList.

    (2) I use this list to create the JTable, first transforming it into an Object[][] that feeds a custom TableModel.

    (3) Afterwards, if I need to get something from the table, I search the list and then, with the resulting index, get it from the table. This is possible because I keep the rows in the same order in the table and in the list.

    (4) If I need to put something into the table, I also do it on my list, and likewise if I need to remove or edit an item.



    Is the technique I use a best practice? I'm not sure that having to keep the table always synchronized with the list is the best way to handle this, but I don't know how I would deal with things that come with the table, such as efficiently finding an item or sorting, without going through a list first.



    Are there best practices for dealing with tables?



    Thank you!

    Francisco.

    You should never update the list directly, except when you first create it and add it to the TableModel. All subsequent updates must be performed directly on the TableModel.

    See the Row Table Model example [http://www.camick.com/java/blog.html?name=row-table-model] for this approach. Also, follow the BeanTableModel link there for a more complete example.

  • Best practices for the design of a printing environment?

    Greetings,

    If there is a better place for this, please let me know.

    Objective:

    Redesign and redeploy my printing environment with best practices in mind.


    Overview: VMware environment running 2008 R2, with about 200 printers. The majority are HP printers ranging from 10 years old to new: LaserJets, MFPs, OfficeJets, etc., in addition to Konica, Xerox and Savin copiers. Many of our printer models are not supported on 2008, still less on x64.

    Our future goals include ePrint services, as well as the desire to manage print quality and consumables levels with something like Web Jetadmin.

    Currently we have a 2003 x86 server running our very old printers and, as of 6 months ago, the rest on a single x64 2008 R2 server. We ended up not giving it the attention to detail it needed, and the drivers became very congested; a PCL6-only UPD update ended up corrupting several drivers across the UPD PCL5 and PCL6 spectrum. At that point, we brought up a second 2008 R2 server and began to migrate the affected queues. In some cases, we were forced to manually remove the drivers from the client's system32 -> spool -> drivers directory and reinstall.

    I've not had much luck finding good best-practice information and thought I'd ask. Some documents I came across suggested that I should isolate each universal driver on its own server, such as 3 servers for PCL5, PCL6, and PS; then there is still the need to deal with my various copiers.

    I appreciate your advice, thank you!

    This forum focuses on consumer-level products.  For your question you may have better results in the HP Enterprise forum here.

  • Best practices for UCS Manager for the smooth running of our environment

    Hi team

    We run a data center with Cisco UCS blades. I am looking for a UCS Manager best practices guide to check that everything is configured correctly, in accordance with Cisco's recommendations and standards, for the smooth running of the environment.
    Can someone provide suggestions? Thank you

    Hey Mohan,

    Take a look at the following links. They should provide an overview of the information you are looking for:

    http://www.Cisco.com/c/en/us/products/collateral/servers-unified-computi...

    http://www.Cisco.com/c/en/us/support/servers-unified-computing/UCS-manag...

    HTH,

    Wes

  • I need to disconnect vSphere 4 datastores in a vSphere 5 environment. I need to know the best practices

    I need to disconnect vSphere 4 datastores in a vSphere 5 environment. I need to know the best practices.

    http://KB.VMware.com/kb/2004605 has the correct procedure to use.

  • What is the best practice for a 'regular' VMware server and VDI environment?

    What is the best practice for a "regular" VMware Server and VDI environment?   A single environment (ESXi and SAN) can accommodate two if it is a whole new configuration?  Or even better to keep separate?

    Appreciate any input.

    Quick and dirty answer is that "it depends."

    Seriously, it really depends on two things: budget and IO.  If you have the money for two, then buy two and host your server environment on one and your VDI desktops on the other; their IO profiles are completely different.

    If that is not possible, try to keep each type of use on its own dedicated LUN.

  • Best practice for a temporary table

    Hello

    I have a temporary table, and this table is emptied (DELETE FROM table_name) several times daily by a procedure.

    I just wanted to know what the best practice is to manage this scenario, taking into account read time / tablespace usage / any other advantages/disadvantages.

    Truncate and load.

    Or, if possible, use an MVIEW instead of the table...
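    A sketch of the truncate-and-load shape (the table and procedure names here are made up for illustration). TRUNCATE is DDL, so inside PL/SQL it needs EXECUTE IMMEDIATE, and note that it implicitly commits:

```sql
CREATE OR REPLACE PROCEDURE reload_work_table IS
BEGIN
    -- TRUNCATE resets the high-water mark, so later full scans
    -- do not pay for previously deleted rows (DELETE leaves the
    -- high-water mark where it was).
    EXECUTE IMMEDIATE 'TRUNCATE TABLE work_table';

    INSERT INTO work_table (id, val)
    SELECT id, val FROM source_table;

    COMMIT;
END reload_work_table;
/
```

    If the data never needs to survive the session, a global temporary table (CREATE GLOBAL TEMPORARY TABLE ... ON COMMIT DELETE ROWS) avoids the explicit cleanup altogether.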

  • Best practices for retrieving a single value from an Oracle table

    I'm using Oracle Database 11 g Release 11.2.0.3.0.

    I would like to know the best practice for doing something like this in a PL/SQL block:

    DECLARE
        v_student_id    student.student_id%TYPE;
    BEGIN
        SELECT  student_id
        INTO    v_student_id
        FROM    student
        WHERE   last_name = 'Smith'
        AND     ROWNUM = 1;
    END;

    Of course, the problem here is that when there is no matching row, the NO_DATA_FOUND exception is thrown, which interrupts execution.  So, what if I want to continue despite the exception?

    Yes, I could create a nested block with an EXCEPTION section, etc., but that seems awkward for what looks like a very simple task.

    I've also seen this handled like this:

    DECLARE
        v_student_id    student.student_id%TYPE;
        CURSOR c_student_id IS
            SELECT  student_id
            FROM    student
            WHERE   last_name = 'Smith'
            AND     ROWNUM = 1;
    BEGIN
        OPEN c_student_id;
        FETCH c_student_id INTO v_student_id;
        IF c_student_id%NOTFOUND THEN
            DBMS_OUTPUT.PUT_LINE('not found');
        ELSE
            (do stuff)
        END IF;
        CLOSE c_student_id;   
    END;

    But it still seems like killing an ant with a hammer.

    What is the best way?

    Thanks for any help you can give.

    Wayne

    201cbc0d-57b2-483a-89f5-cd8043d0c04b wrote:

    What happens if I want to continue despite the exception?

    It depends on what you want to do.

    You expect only 0 or 1 rows. SELECT INTO expects exactly 1 row. In this case, SELECT INTO may not be the best solution.

    What exactly do you want to do if 0 rows are returned?

    If you want to set a variable to NULL and continue processing, Frank's response looks good; or else use Billy's modular approach.

    If you want to "do things" when you get a line and 'status quo' when you don't get a line, then you can consider a loop FOR:

    declare
      l_empno scott.emp.empno%type := 7789;
      l_ename scott.emp.ename%type;
    begin
      for rec in (
        select ename from scott.emp
        where empno = l_empno
        and rownum = 1
      ) loop
        l_ename := rec.ename;
        dbms_output.put_line('<' || l_ename || '>');
      end loop;
    end;
    /
    

    Note that when no row is found, there is no output at all.

    Post edited by: StewAshton - Oops! I forgot to put the result in l_ename...
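    Billy's modular approach is not quoted in this thread, but its usual shape is a small function that traps NO_DATA_FOUND itself and returns NULL, so the caller stays clean (the function name here is mine):

```sql
CREATE OR REPLACE FUNCTION get_student_id (
    p_last_name IN student.last_name%TYPE
) RETURN student.student_id%TYPE
IS
    v_student_id student.student_id%TYPE;
BEGIN
    SELECT student_id
    INTO   v_student_id
    FROM   student
    WHERE  last_name = p_last_name
    AND    ROWNUM = 1;

    RETURN v_student_id;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        RETURN NULL;  -- the caller decides what "not found" means
END get_student_id;
/
```

    Then v_student_id := get_student_id('Smith'); simply yields NULL when no row matches, and execution continues.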

  • Best practices for tags

    Hello

    In the bundled applications, tags are used in most apps. For example, in the Customer Tracker app, tags can be added to a customer, and these tags are stored in a VARCHAR2 column in the Customers table.
    In my case, I have pre-defined real-estate tags in a lookup table called TAGS, for example: full floor, furnished, equipped, duplex, attached... What is the best practice for tagging properties:
    1 - Store these tags in a varchar column in the PROPERTIES table, using a Shuttle control.
    OR
    2 - Store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), then use the LISTAGG function to show the tags on one line in the properties report.
    OR
    Or do you have a better option?

    Kind regards
    Fateh

    Fateh says:
    Hello

    In the bundled applications, tags are used in most apps. For example, in the Customer Tracker app, tags can be added to a customer, and these tags are stored in a VARCHAR2 column in the Customers table.
    In my case, I have pre-defined real-estate tags in a lookup table called TAGS, for example: full floor, furnished, equipped, duplex, attached...

    These seem to be two different use cases. In the bundled applications, tags allow end users to attach free-form metadata to the data for their own needs (they are sometimes called "folksonomies"). Users can use tags for different purposes, or different tags for the same purpose. For example, I could add 'Wednesday', 'Thursday' or 'Friday' tags to customers because those are the days they receive their deliveries. For the same purpose, you could tag the same customers '1', '8' and '15' after the numbers of the trucks making the deliveries. Or you could use 'Monday' to indicate that the client is closed on Mondays...

    In your application you are assigning predefined attributes to known properties. That is a standard 1:M attribute model. Presenting it through the metaphor of tags does not make it equivalent to user free-form tagging.

    What is the best practice for tagging properties:
    1 - Store these tags in a varchar column in the PROPERTIES table, using a Shuttle control.

    If you do this, how can you:

  • Search for furnished duplex properties efficiently?
  • Globally change 'equipped' to 'fitted'?
  • Report the number of properties, broken down by full floor, duplex, equipped...?

    OR
    2 - Store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), then use the LISTAGG function to show the tags on one line in the properties report.

    As "Why Use a Lookup Table?" shows, this is the correct way to proceed. It allows the data to be indexed for efficient retrieval, and questions such as those above can be handled simply using joins and grouping.

    You might want to examine the possibility of eliminating the ID PK and using an index-organized table for this.
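    To make option 2 concrete, one possible shape (the TAG_NAME column is assumed, and LISTAGG needs 11g Release 2 or later): the intersection table as an index-organized table with a composite key, plus the one-line-per-property report query:

```sql
-- Intersection table; no surrogate ID, organized as an IOT.
CREATE TABLE properties_tags (
    property_id  NUMBER NOT NULL REFERENCES properties,
    tag_id       NUMBER NOT NULL REFERENCES tags,
    CONSTRAINT properties_tags_pk PRIMARY KEY (property_id, tag_id)
) ORGANIZATION INDEX;

-- One line of tags per property for the report.
SELECT p.property_id,
       LISTAGG(t.tag_name, ', ')
         WITHIN GROUP (ORDER BY t.tag_name) AS tags
FROM   properties p
JOIN   properties_tags pt ON pt.property_id = p.property_id
JOIN   tags t             ON t.tag_id = pt.tag_id
GROUP  BY p.property_id;
```

    Because (property_id, tag_id) is the primary key of the IOT, the "furnished duplex" style searches above become simple indexed joins.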

    OR
    You have a better option?

    I'd also look carefully at your data model. Make sure that you're not flirting with the EAV anti-pattern: aren't some/all of these values simply attributes of the property?

  • /var/log is full. Best practices?

    One of the log partitions on our host is 100% full. I'm not really the administrator for this host, but I manage/deploy the virtual machines on it for others to use.

    I was wondering what the best practice is to deal with a full log partition. I found an article that mentioned editing /etc/logrotate.d/vmkernel so that files are compressed more often and retained for a shorter time, but there were no really clear instructions on what to change and how.

    Is the only way to investigate on the console itself, or in /var/log via PuTTY? Is there no way to see this in the VIC?

    Thank you

    Hello

    To solve the immediate problem, I would transfer any log in /var/log ending with a number (.1, .2, etc.) to a temporary storage location off the ESX host. You could run something similar to the following scp command to do that:

    scp /var/log/*.[0-9]* /var/log/*/*.[0-9]* host:TemporaryDir
    

    Or you can use WinSCP to transfer them off the ESX host to a Windows box. Once you have the existing log files off the system for later review, use the following to clear the space:

    cd /var/log; rm *.[0-9]* */*.[0-9]*
    

    I would then configure log rotation as directed by the hardening guide for VMware ESX.

    Best regards, Edward L. Haletky, VMware communities user moderator, VMware vExpert 2009
    Now available on Rough Cuts: 'VMware vSphere(TM) and Virtual Infrastructure Security: Securing ESX and the Virtual Environment' [http://www.astroarch.com/wiki/index.php/VMware_Virtual_Infrastructure_Security]
    Also available: 'VMWare ESX Server in the Enterprise' [http://www.astroarch.com/wiki/index.php/VMWare_ESX_Server_in_the_Enterprise]
    SearchVMware Pro [http://www.astroarch.com/wiki/index.php/Blog_Roll] | Blue Gears [http://www.astroarch.com/blog] | Top Virtualization Security Links [http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links] | Virtualization Security Round Table Podcast [http://www.astroarch.com/wiki/index.php/Virtualization_Security_Round_Table_Podcast]

  • Implementation of best practices

    We have a tight budget but would like to implement as many VMware best practices as possible. We realize that with our budget we will not be able to implement many of them, but I would like to get the most critical ones done. What do you consider the ABSOLUTE MUST-HAVE best practices for an environment that includes the following:

    6 ESX 3.5 hosts and 1 VC 2.5 with the License Server on the VirtualCenter server. Each host has 6 physical network interface cards. We have licenses for vMotion, DRS, and HA. Our current setup is:

    vSwitch0 hosts a virtual machine port group and the Service Console port, with 2 NICs

    vSwitch1 hosts a VM port group

    vSwitch3 hosts a VM port group

    vSwitch4 is a dedicated vMotion network

    Physically, our hosts are connected to a single physical Nortel switch stack, with ports assigned to VLANs

    We have our virtual machines on 3 different subnets

    This infrastructure is inside our firewall. We have a demilitarized zone, but it is not virtual.

    Looking for candid suggestions.

    The vSwitch is an object that exists in the memory of an ESX server, so the traffic on that vSwitch stays within that memory object.  Each vSwitch keeps its own stats and its own MAC address table.  Using a 2nd vSwitch creates a 2nd object that does not interact/interface with the first, which adds another layer of separation between the networks.  Such gaps are a good thing for this reason.

    -KjB

  • TDMS & DIAdem best practices: what if my signal has breaks/gaps?

    I created a LV2011 datalogging application that stores a lot of data to TDMS files.  The basic architecture is like this:

    Each channel has these properties:

    t0 = start time

    dt = sampling interval

    Channel values:

    1D array of DBL values

    After datalogging starts, I just keep appending values to the channels.  And if the TDMS file size goes beyond 1 GB, I create a new file and start again.  The application runs continuously for days/weeks, so I get a lot of TDMS files.

    It works very well.  But now I need to change my system to allow pausing/resuming the data acquisition.  In other words, there will be breaks in the signal (probably from 30 seconds to 10 minutes).  I had originally considered logging two values for each data point, as XY chart data (value & timestamp).  But I am opposed to this in principle because, in my view, it fills the hard drive unnecessarily (twice as much disk footprint for the same data?).

    Also, I've never used DIAdem, but I want to ensure that my data can be easily opened and analyzed using DIAdem.

    My question: are there best practices for storing signals that pause/break like that?  Ideally I would just start a new record with a new start time (t0), and DIAdem would somehow "link" these signals... i.e., it would know that each is a continuation of the same signal.

    Of course, I should just install DIAdem and play with it.  But I thought I would first ask the experts about best practices, as I have no knowledge of DIAdem.

    Hi josborne,

    Do you plan to create a new TDMS file whenever the acquisition stops and starts, or would you rather store multiple power-up sections in the same TDMS file?  The best way to manage the date/time shift is to store one waveform per channel per power-up section and use the "wf_start_time" channel property, which comes along automatically with waveform TDMS data if you are wiring an orange floating-point array or a brown waveform into the TDMS Write.vi.  DIAdem 2011 has the ability to easily access the time offset when it is stored in this channel property (assuming that it is stored as a date/time and not as a DBL or a string).  If you have only one power-up section per TDMS file, I would certainly also add a 'DateTime' property at the file level.  If you want to store several power-up sections in a single TDMS file, I would recommend using a separate group for each section.  Make sure that you store the following channel properties in the TDMS file if you want the information to flow naturally into DIAdem:

    'wf_xname'
    'wf_xunit_string'
    'wf_start_time'
    'wf_start_offset'
    'wf_increment'

    Brad Turpin

    DIAdem Product Support Engineer

    National Instruments

  • (Best practices) How to store curve-fit values?

    I have two sets of data, Xreal and Xobserved, abbreviated Xr and Xo. Xreal is a data set that contains sensor values from a reliable source (it's a pain to collect its data), and Xobserved is a data set containing values from a less reliable, but much lower-maintenance, sensor. I'll create a VI that receives input from these two data sources, stores it in a database (text file or CSV) and runs some estimators over this database. The output of the VI will be the best linear-fit approximation (via regression) for the input value of Xobserved.

    What are the best practices for storing Xreal and Xobserved? Also, I'm not very experienced with VI best practices; should the VI take CSV files as input? How would I best format them?


    Keep things simple.  Convert the array to a CSV string and write it to a text file.  See the attached example.

  • Dell MD3620i connect to vmware - best practices

    Hello community,

    I bought a Dell MD3620i with 2 x ports Ethernet 10Gbase-T on each controller (2 x controllers).
    My vmware environment consists of 2 ESXi hosts (each with 2 x 1GBase-T ports) and an HP LeftHand storage (also 1GBase-T). The switches I have are Cisco 3750s, which have only 1GBase-T Ethernet.
    I'll replace this HP storage with the Dell storage.
    As I have never worked with Dell storage, I need your help in answering my questions:

    1. What is the best practice for connecting the vmware hosts to the Dell MD3620i?
    2. What is the process to create a LUN?
    3. Can I create multiple LUNs on a single disk group, or is it best practice to create one LUN per disk group?
    4. How do I configure the 10GBase-T iSCSI ports to work on the 1 Gbit/s switch ports?
    5. Is it best practice to connect the Dell MD3620i directly to the vmware hosts, without a switch?
    6. The old iSCSI on the HP storage is on another network. Can I vMotion all the VMs from one iSCSI network to the other and then change the iSCSI IP addresses on the vmware hosts without interrupting the virtual machines?
    7. Can I combine the two iSCSI ports into one 2 Gbps interface to connect to the switch? I use two switches, so I want to connect each controller to each switch, limiting their interfaces to 2 Gbps. My question is: would the controller fail over to the other controller if the Ethernet link went down on the switch? (for example, if a single switch reboots)

    Thanks in advance!

    TCP/IP basics: a computer cannot connect to 2 different (isolated) networks (e.g. 2 directly attached cables between the server and the SAN's iSCSI ports) that share the same subnet.

    Data corruption is very likely if you share the same VLAN for iSCSI; performance and overall reliability would also be affected.

    With an MD3620i, here are some configuration scenarios using the factory-default subnets (for the DAS configuration I have added 4 additional subnets):

    Single switch (not recommended because the switch becomes your single point of failure):

    Controller 0:

    iSCSI port 0: 192.168.130.101

    iSCSI port 1: 192.168.131.101

    iSCSI port 2: 192.168.132.101

    iSCSI port 4: 192.168.133.101

    Controller 1:

    iSCSI port 0: 192.168.130.102

    iSCSI port 1: 192.168.131.102

    iSCSI port 2: 192.168.132.102

    iSCSI port 4: 192.168.133.102

    Server 1:

    iSCSI NIC 0: 192.168.130.110

    iSCSI NIC 1: 192.168.131.110

    iSCSI NIC 2: 192.168.132.110

    iSCSI NIC 3: 192.168.133.110

    Server 2:

    All ports plug into the one switch (obviously).

    If you only want to use 2 NICs for iSCSI, have server 1 use subnets 130 and 131, server 2 use 132 and 133, and server 3 use 130 and 131 again. This distributes the I/O load between the iSCSI ports on the SAN.

    Two switches (one VLAN for all iSCSI ports on each switch):

    NOTE: Do NOT link the switches together. That way, problems that occur on one switch do not affect the other switch.

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> switch 1

    iSCSI port 1: 192.168.131.101 -> switch 2

    iSCSI port 2: 192.168.132.101 -> switch 1

    iSCSI port 4: 192.168.133.101 -> switch 2

    Controller 1:

    iSCSI port 0: 192.168.130.102 -> switch 1

    iSCSI port 1: 192.168.131.102 -> switch 2

    iSCSI port 2: 192.168.132.102 -> switch 1

    iSCSI port 4: 192.168.133.102 -> switch 2

    Server 1:

    iSCSI NIC 0: 192.168.130.110 -> switch 1

    iSCSI NIC 1: 192.168.131.110 -> switch 2

    iSCSI NIC 2: 192.168.132.110 -> switch 1

    iSCSI NIC 3: 192.168.133.110 -> switch 2

    Server 2:

    Same note as above about using only 2 NICs per server for iSCSI. In this configuration each server always uses both switches, so that a switch failure will not take down your server's iSCSI connectivity.

    Four switches (or 2 VLANs on each of the 2 switches above):

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> switch 1

    iSCSI port 1: 192.168.131.101 -> switch 2

    iSCSI port 2: 192.168.132.101 -> switch 3

    iSCSI port 4: 192.168.133.101 -> switch 4

    Controller 1:

    iSCSI port 0: 192.168.130.102 -> switch 1

    iSCSI port 1: 192.168.131.102 -> switch 2

    iSCSI port 2: 192.168.132.102 -> switch 3

    iSCSI port 4: 192.168.133.102 -> switch 4

    Server 1:

    iSCSI NIC 0: 192.168.130.110 -> switch 1

    iSCSI NIC 1: 192.168.131.110 -> switch 2

    iSCSI NIC 2: 192.168.132.110 -> switch 3

    iSCSI NIC 3: 192.168.133.110 -> switch 4

    Server 2:

    In this case, using 2 NICs per server means the first server uses the first 2 switches and the second server uses the second pair of switches.

    Direct attach:

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> server iSCSI NIC 1 (e.g. at IP 192.168.130.110)

    iSCSI port 1: 192.168.131.101 -> server iSCSI NIC 2 (e.g. at IP 192.168.131.110)

    iSCSI port 2: 192.168.132.101 -> server iSCSI NIC 3 (e.g. at IP 192.168.132.110)

    iSCSI port 4: 192.168.133.101 -> server iSCSI NIC 4 (e.g. at IP 192.168.133.110)

    Controller 1:

    iSCSI port 0: 192.168.134.102 -> server iSCSI NIC 5 (e.g. at IP 192.168.134.110)

    iSCSI port 1: 192.168.135.102 -> server iSCSI NIC 6 (e.g. at IP 192.168.135.110)

    iSCSI port 2: 192.168.136.102 -> server iSCSI NIC 7 (e.g. at IP 192.168.136.110)

    iSCSI port 4: 192.168.137.102 -> server iSCSI NIC 8 (e.g. at IP 192.168.137.110)

    I simply left the 4 controller-1 subnets on the '102' IPs to make future changes easier.
