Best practices for migrating data tables - please comment.

I have 5 new tables, already populated with data, that must be promoted as a change to a production environment.
Instead of the DBAs simply using a data migration tool, they insist that I record and provide scripts for each change, in good order, that both build the tables and insert the data from scratch.

I'm not used to such an environment, and it seems far riskier to me to try to reconstruct the objects from scratch when I already have a perfect model, tested and ready.

They require a lot of documentation, where each step is recorded in a document and used for deployment.
I think their goal is to avoid relying on backups; they would rather rely on a document that specifies each step needed to recreate everything.
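
For illustration, here is a minimal sketch of the kind of rerunnable script they seem to be asking for (the table, columns and rows are hypothetical examples, not my actual objects):

-- deploy_customers.sql: builds the table and seeds the data from scratch
-- (all names and values below are hypothetical)
CREATE TABLE customers (
    customer_id   NUMBER        PRIMARY KEY,
    customer_name VARCHAR2(100) NOT NULL,
    created_on    DATE          DEFAULT SYSDATE
);

INSERT INTO customers (customer_id, customer_name) VALUES (1, 'Acme Ltd');
INSERT INTO customers (customer_id, customer_name) VALUES (2, 'Globex Inc');
COMMIT;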

Please comment on your view of this practice. Thank you!

I'm not a DBA. I can't even hold a candle to the forum regulars like Srini/Justin/sb.
Having said that, I'm going to offer a slightly different opinion.

It is great to have (and I paraphrase from various posts):
deployment documents,
sign-off from groups,
recovery steps,
source code control repositories,
"The production environment is sacred. All risks must be reduced to a minimum at any cost. In my opinion, a DBA should NEVER move anything from a development environment directly into a production environment. NEVER."
etc., etc.

But we cannot generalize that every production system must have these. Each customer is different; everyone has a different level of fault tolerance.

You wouldn't expect a product design change at a cabinetmaker's shop to go through something as rigorous as NASA's process. Why should it be any different for software changes?

How much rigor you apply is a bit subjective - it depends on company policies, the company's experience with migration disasters (if any), and the corporate culture.

The OP may be coming from a customer with lax controls, and may be questioning whether the level of rigor at the new customer is worth it.

At one client (and I don't kid), the prod password is apps/apps (after 12 years of being live!). I was appalled at first. But it's a very small business. They got saddled with EBS during the dot-com boom heydays. They use just 2 modules in EBS. If I proposed creating (and charging for) these documents/processes, I would lose the customer.
My point is that not all places need or want these controls. By trial and error, you find what is best for the company.

OP:
You said that you're not used to this type of environment. I recommend that you go with the flow first. Spend time understanding the value/use/history of these processes. Ask questions if they still seem excessive to you. Keep in mind: this is subjective. And if it comes down to your opinion v/s theirs, you need either authority OR some means of persuasion (money / sox (intentional typo here) / severed horse heads... whatever works!)

Sandeep Gandhi

Edit: Fixed a typo.

Posted by: Sandeep Gandhi, Independent Consultant, on 25 June 2012 23:37

Tags: Database

Similar Questions

  • HP Stream 11-d008tu: best practices for migrating to Windows 10?

    Hey there, I have an HP Stream 11-d008tu and received the notification that I can upgrade to Windows 10. I want to do a clean install with a downloaded ISO, but from what I understand it is better to do the upgrade first so that activation gets done.

    Can someone give me some ideas on best practices for the upgrade? The Stream has only a 30 GB HD so I have no recovery disk; I'll just make a recovery drive later on USB or SD card. When I do the upgrade, can I deselect or delete the recovery option?

    Also, any other suggestions are appreciated, especially anything that helps make the most of the small HD.

    Cheers

    Hi, I posted a procedure for a fresh Windows 10 installation on the HP Stream tablet here.  I also have a Stream 11 laptop but have not done that one yet. I think it must be the same, because they both have the same 32 GB Hynix memory drive. One change: you would probably want to use the 64-bit instead of the 32-bit Windows 10 ISO file, as the tablet has only 1 GB of RAM.

  • Best practices for .rpd migration

    Hello
    Can someone please share the industry best practices for migrating the .rpd file from a source environment (QA) to a destination environment (Prod)? Our Oracle BI server runs on a Unix machine, and currently what we do to migrate is save copies of the QA and Prod .rpds to our local machine and perform a merge. Once merged, we push the resulting .rpd to the destination (Prod) machine via FTP. This approach seems unreliable, and we are looking for a better approach for .rpd migration. Kindly point me to where I can find documentation... Any help in this regard would be highly appreciated.

    I don't think there is an industry best practice as such. What works for one client doesn't necessarily apply to another site.

    Have a good read of this from Mark Rittman; it should give you some good ideas:

    http://www.rittmanmead.com/2009/11/OBIEE-software-configuration-management-part-2-subsequent-deployments-from-dev-to-prod/

    Paul

  • Best practices for migrating workflows, actions, patterns between environments

    Hello

    Is there a best practices document for migrating workflows, actions and configurations between vCO environments (development -> production)? We want to lock down all the workflows/actions/configurations in prod. Migration can be done manually through a package. Is there anything more than that?

    Thank you

    Create a package using Orchestrator (change the drop-down list in the client to an access mode that allows creating packages).

    Add the desired workflows, actions, resources and configurations to the package. As they are added, dependent actions, workflows, resources and configuration elements will be detected and added to the package.

    Right-click the package and either export or synchronize it.

    Export the package to the file system for backup/archival, then import it on the target (test or production) server.

    Or synchronize it to the test or prod server when you are ready.

    In both cases, you will be presented with a comparison window that indicates which workflows/actions/configurations/resources will be updated on the target system.

    Doing an element-by-element synchronize would be tedious and could miss dependencies; packages are the preferred and recommended best practice.

    For additional info on the workflow development lifecycle, see Christophe's blog here: http://bit.ly/vroWdlc

  • Best practices for the Data Pump export/import process?

    We are trying to copy an existing schema to another, newly created schema. The Data Pump export of the schema succeeded.

    However, we ran into errors when importing the dump file into the new schema, with the schema and tablespaces remapped, etc.
    Most of the errors occur in PL/SQL... For example, we have views like the one below in the original schema:
    "
    CREATE the VIEW * oldschema.myview * AS
    SELECT col1, col2, col3
    OF * oldschema.mytable *.
    WHERE coll1 = 10
    .....
    "
    Quite a few functions, procedures, packages and triggers also reference "oldschema.mytable" in their DML (insert, select, update), for example.

    We get the following errors in the import log:
    ORA-39082: Object type ALTER_FUNCTION:"TEST"."MYFUNC" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TEST"."MYPROCEDURE" created with compilation warnings
    ORA-39082: Object type VIEW:"TEST"."MYVIEW" created with compilation warnings
    ORA-39082: Object type PACKAGE_BODY:"TEST"."MYPACKAGE" created with compilation warnings
    ORA-39082: Object type TRIGGER:"TEST"."MY_TRIGGER" created with compilation warnings

    Many of the resulting errors/invalid objects in the new schema are due to:
    ORA-00942: table or view does not exist

    My questions are:
    1. What can we do to correct these errors?
    2. Is there a better way to do the import under these conditions?
    3. Should we update the PL/SQL and recompile in the new schema? Or update it in the original schema first and then export?

    Your help will be greatly appreciated!

    Thank you!

    @?/rdbms/admin/utlrp.sql

    will compile the objects in the database across schemas. In your case, though, you are remapping from one schema to another, and utlrp will not be able to compile the objects.

    The SQLFILE option of impdp lets you generate the DDL from the export dump, change the schema name globally, and run the resulting script in SQL*Plus. This should resolve most of your errors. If you still see errors after that, then run utlrp.sql.
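
    For illustration, a minimal sketch of that approach (the directory, dump file name and schema names are hypothetical):

    -- OS command line: generate the DDL from the dump instead of importing it
    impdp system directory=DATA_PUMP_DIR dumpfile=expdat.dmp remap_schema=OLDSCHEMA:TEST sqlfile=ddl_test.sql

    -- Edit ddl_test.sql to replace any remaining OLDSCHEMA references with TEST,
    -- then run it in SQL*Plus and recompile whatever is still invalid:
    SQL> @ddl_test.sql
    SQL> @?/rdbms/admin/utlrp.sql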

    -André

  • Best practices for creating datastores

    I have 10 iSCSI LUNs (all on the same device), each 1.8 TB in size, that I want to present to ESXi to create datastores.  Are there any recommendations for how I should divide these LUNs up, or should I just make one giant datastore?  Maybe there are performance factors to consider here?

    If I make 10 separate 1.8 TB datastores, I can see a problem down the road where I need to expand a VMDK but can't because there is not enough free space on its datastore; that would be less of a problem if I had one giant datastore from the start.

    Thank you.

    First of all, it's one of those "how long is a piece of string" type questions.

    It depends, of course, on the number of VMDKs you're going to be running, the available storage, the type of storage, the storage IO, the types of virtual machines, etc. etc. etc.

    Things to consider are, for example, whether you have storage that does deduplication and whether storage cost is a major factor (and so on).
    Of course... almost always, a reduction in cost equates to a drop in performance.

    In any case, a very loose rule I have (in most cases) is that I size LUNs somewhere between 400 and 750 GB and rarely (if ever) have more than 30 VMDKs per LUN.

    I almost always redirect this question to the following resources:

    first of all, the configuration maximums:
    http://www.VMware.com/PDF/vSphere4/R40/vsp_40_config_max.PDF

    http://www.gabesvirtualworld.com/?p=68
    http://SearchVMware.TechTarget.com/Tip/0,289483,sid179_gci1350469,00.html
    http://communities.VMware.com/thread/104211
    http://communities.VMware.com/thread/238199
    http://www.yellow-bricks.com/2009/06/23/vmfslun-size/

    (although Andre's post above covers most of them)

  • What are the best practices for creating time-only data types, without the date?

    Hi gurus,

    We are using a 12c DB and we have a requirement to create a column with a time-only datatype. Could someone please describe the best practices for creating this?

    I would greatly appreciate ideas and suggestions.

    Kind regards
    Ranjan

    Hello

    How do you intend to use the time?

    If you are going to combine it with DATEs or TIMESTAMPs from another source, then an INTERVAL DAY TO SECOND or a NUMBER may be better.

    Will you need to perform arithmetic on the times, for example increase a time by 20%, or take an average?   If so, NUMBER would be preferable.

    Are you just going to display it?  In that case, INTERVAL DAY TO SECOND, DATE or VARCHAR2 would work.

    As Blushadow said, it depends.
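
    For illustration, a minimal sketch of two of those options (the table, columns and values are hypothetical):

    CREATE TABLE shop_hours (
        shop_id     NUMBER,
        -- option 1: time of day as an interval, with no date part
        opens_at    INTERVAL DAY(0) TO SECOND(0),
        -- option 2: seconds past midnight as a plain number,
        -- which makes arithmetic such as averages easy
        opens_secs  NUMBER
    );

    INSERT INTO shop_hours (shop_id, opens_at, opens_secs)
    VALUES (1, INTERVAL '09:30:00' HOUR TO SECOND, 9.5 * 3600);

    -- arithmetic is straightforward on the NUMBER column:
    SELECT AVG(opens_secs) / 3600 AS avg_opening_hour FROM shop_hours;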

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like the integration to be practically real-time, but my findings to date suggest there is no such option; any integration will involve some kind of schedule.

    Here are the options we came up with:

    1. "Exotic" CRM integration: using internal events to capture and queue in the queue changes internal (QIP) and allows access to the queue from outside Eloqua SOAP/REST API
    2. Data export: set up a Data Export that is "expected" to run on request and exteernally annex survey via the API SOAP/REST/in bulk
    3. API in bulk: changes in voting that has happened since the previous survey through the API in bulk from Eloqua outside (not sure how this is different from the previous option)

    Two other options which may not work at all and are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule, and configure a cloud connector (if possible at all) to notify an MDM endpoint, which would then query the contact/lead "record" from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and Eloqua's external calls to push data into our MDM

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event-based/callback-driven)? (something like outbound messaging in Salesforce)
    3. What limits should we consider with these options? (for example, daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get committed to it via internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identification fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes and send them to MDM, and pull the contacts waiting to be sent, in place of the cloud connector.

    There isn't really anything quite like outbound messaging, unfortunately. Form submits can immediately send data to a server (it would be a bit like integration rule collections running from form processing steps).

    Cheers,

    Ben

  • Just upgraded - tips on best practices for file sharing on a Server 2008 Std.

    The domain contains about 15 machines with two domain controllers; one holds the data - app files, print, etc...  I just upgraded from 2003 to 2008 and want advice on best practices for setting up file sharing for a group. Basically I want each user to have their own folder, but also a shared staff folder. Since I am usually accustomed to using Windows Explorer, I would like to know the best way to set up these shares. Also, I noticed 2008 has a Contacts feature - how can it be used? I would like to message or email users their file locations. Also, I want to set up a lower-level admin to handle the shares without letting them too far into the server; not sure how.

    I have read a fair amount, but I don't like testing directly anymore because it can cause problems. So basically: a short and clean way to manage shares using the MMC, as well as how I might email users the locations of their shares from the server. Maybe also what kind of access control or permissions are suitable for documents. And how can I have them use Office templates without changing the template's format?

    THX

    g

    Hello 996vtwin,

    Thank you for visiting the Microsoft Answers site. The question you have posted is related to Windows Server and would be better suited to the Windows Server TechNet community. Please visit the link below to find a community that can help with what you are asking:

    http://social.technet.Microsoft.com/forums/en-us/category/WindowsServer

    Hope this helps :)

    Adam
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think

  • [ADF, JDev 12.1.3] Best practices for handling form validation

    Hello,

    In my application I need to create a registration form containing fields that must be validated (for example, they should follow a format, like an e-mail address, phone number, or tax code...).

    If the data entered by the user is OK, a new record will be created in my custom DB table Users.

    I would like to know the best practices for handling this validation - that is, where the checks should be made, and how to show a message to the user filling out the form when something goes wrong.

    Is the VO, the EO, or a managed bean best? Or should some checks be put in the EO, others in the VO, and others in the managed bean?

    I would be happy if you could give me some examples.

    Thank you

    Federico

    Assuming you want the validation on the field's value to apply to any screen the data can be entered from (and possibly to web services that rely on the same ADF BC), then put the validation on the attribute definition in the EO.

    If you want to add a little more user-friendliness and eliminate some of the network round-trips to the server, you can also implement client-side validation in your page - for example by using the regular expression validator.

    https://blogs.Oracle.com/Shay/entry/regular_expression_validation

  • Best practices for tags

    Hello

    Tags are used in most of the bundled applications. For example, in the Customer Tracker app we can add tags to a customer, and these tags are stored in a VARCHAR2 column in the Customers table.
    In my case, I have pre-defined real estate tags in a lookup table called TAGS. For example: Full floor, Furnished, Equipped, Duplex, Attached... What is the best practice for tagging properties:
    1 - store these tags in a VARCHAR column in the PROPERTIES table, populated using a shuttle box.
    OR
    2 - store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), then use the LISTAGG function to show the tags on one line in the properties report.
    OR
    Do you have a better option?

    Kind regards
    Fateh

    Fateh says:
    Hello

    Tags are used in most of the bundled applications. For example, in the Customer Tracker app we can add tags to a customer, and these tags are stored in a VARCHAR2 column in the Customers table.
    In my case, I have pre-defined real estate tags in a lookup table called TAGS. For example: Full floor, Furnished, Equipped, Duplex, Attached...

    These seem to be two different use cases. In the bundled applications, tags allow end users to attach free-form metadata to the data for their own needs (these are sometimes called "folksonomies"). Users can use tags for different purposes, or different tags for the same purpose. For example, I could tag customers 'Wednesday', 'Thursday' or 'Friday' because those are the days they receive their deliveries. For the same purpose, you could tag the same customers '1', '8' and '15' after the numbers of the truck routes making the deliveries. And someone else might use 'Monday' to indicate that the client is closed on Mondays...

    In your application, you are assigning predefined attributes to known properties. That is a standard 1:M attribute model; presenting it using the tag metaphor does not make it equivalent to users' free-form tags.

    What is the best practice for tagging properties:
    1 - store these tags in a VARCHAR column in the PROPERTIES table, populated using a shuttle box.

    If you do this, how can you:

  • Search for furnished duplex properties efficiently?
  • Globally change "equipped" to "fitted"?
  • Report the number of properties broken down by full floor, duplex, equipped...?

    OR
    2 - store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), then use the LISTAGG function to show the tags on one line in the properties report.

    As "Why use a lookup table?" shows, this is the correct way to proceed. It allows the data to be indexed for efficient retrieval, and questions such as those above can be answered simply with joins and grouping.

    You might also want to consider eliminating the ID PK and using an index-organized table for this.
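
    For illustration, a minimal sketch of option 2 (assuming TAGS has TAG_ID and TAG_NAME columns and PROPERTIES has a PROPERTY_ID primary key - those names are illustrative):

    -- junction table without the surrogate ID, as an index-organized table
    CREATE TABLE properties_tags (
        property_id NUMBER NOT NULL REFERENCES properties (property_id),
        tag_id      NUMBER NOT NULL REFERENCES tags (tag_id),
        CONSTRAINT properties_tags_pk PRIMARY KEY (property_id, tag_id)
    ) ORGANIZATION INDEX;

    -- one line of tags per property in the report
    SELECT p.property_id,
           LISTAGG(t.tag_name, ', ') WITHIN GROUP (ORDER BY t.tag_name) AS tags
    FROM   properties p
    JOIN   properties_tags pt ON pt.property_id = p.property_id
    JOIN   tags t ON t.tag_id = pt.tag_id
    GROUP  BY p.property_id;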

    OR
    Do you have a better option?

    I'd also look carefully at your data model. Make sure you're not flirting with the EAV anti-pattern. Aren't some or all of these values simply attributes of the property?

  • Best practices for placing the master image

    I'm doing some performance/load-test analysis for View and I'm curious about best practices for placing the master image VM. The question is specifically about disk I/O and throughput.

    My understanding is that each linked clone still reads from the master image. If that is correct, then it seems you would want the master image to reside on a datastore located on the same array as the rest of the datastores that house the linked clones (and not on some lower-performing array). The reason I ask is that my performance testing is based on some future SSD products. Obviously, the amount of available space on the SSD is limited, but it provides immense amounts of I/O (100k+ and higher). I want to be sure that by putting the master image on a datastore that is not on the SSD, I am not invalidating the I/O performance I want from the high-end SSD.

    This leads to another question: if all the linked clones read from the master image, what is the general practice for the number of linked clones to deploy per master image before you start to have I/O contention problems against that single master image?

    Thank you!

    -


    Omar Torres, VCP

    This isn't really necessary. Linked clones are not directly tied to the parent image. When a desktop pool is created and uses one or more datastores, the parent is copied into each datastore as what is called a replica. From there, each linked clone is attached to the replica of the parent in its local datastore. It is an unmanaged replica and offers the best performance, because there is a copy in every datastore that holds linked clones.

    WP

  • Best practices for JTables.

    Hello



    I have been programming in Java for 5 months. I am now developing an application that uses tables to present information from a database. This is my first time handling tables in Java. I have read Sun's Swing tutorial on JTable and more information on other sites, but they are limited to table syntax and not best practices.



    So I settled on what I think is a good way to manage data in a table, but I don't know if it is the best way. Let me describe the general steps I'm going through:



    (1) I query the employee data from Java DB (with EclipseLink JPA) and load it into an ArrayList.

    (2) I use this list to create the JTable, first transforming it into an Object[][] and feeding that to a custom TableModel.

    (3) Afterwards, if I need to get something from the table, I search the list and then, with the resulting index, I get it from the table. This is possible because I keep the rows in the same order in the table and in the list.

    (4) If I need to put something into the table, I also do it to my list, and likewise if I need to remove or edit an item.



    Is the technique I am using a best practice? I'm not sure that having to keep the table synchronized with the list all the time is the best way to handle this, but I don't know how I would work with the table alone - for example, to efficiently find an item or to sort it - without relying on a list first.



    Are there best practices for dealing with tables?



    Thank you!

    Francisco.

    You should never update the list directly, except when you first create it and add it to the TableModel. All future updates should be performed directly on the TableModel.

    See the Row Table Model example (http://www.camick.com/java/blog.html?name=row-table-model) for this approach. Also, follow the BeanTableModel link there for a more complete example.

  • Best practices for the use of reserved words

    Hello
    What is the best practice for using reserved words as column names?
    For example, suppose I insisted on using the word COMMENT as a column name, as follows:

    CREATE TABLE ...
    "COMMENT" VARCHAR2(4000)
    ...

    What impact could I expect down the road, and what problems should I be aware of when doing something like that?

    Thank you
    Ben

    The best practice is NOT to use reserved words anywhere.
    Developers are human beings. Humans have their moments of forgetfulness.
    They will forget to use the double quotes, or you will have to force them to use the double quotes everywhere.
    Both methods are Oracle-certified ways to end up in hell.
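
    For illustration, a minimal sketch of the pain involved (the table is hypothetical):

    -- The quoted identifier works, but it is now case-sensitive and the
    -- quotes are required in every statement that touches the column.
    CREATE TABLE notes ("COMMENT" VARCHAR2(4000));

    INSERT INTO notes ("COMMENT") VALUES ('you must quote it here too');

    SELECT "COMMENT" FROM notes;   -- works
    SELECT comment FROM notes;     -- fails: COMMENT is a reserved word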

    ----------
    Sybrand Bakker
    Senior Oracle DBA

  • I would like to know the "best practices" for permanently disconnecting my computer from the internet and from updates.

    Thank you for taking the time to read this. I would like to know the "best practices" for permanently disconnecting my computer from the internet and from updates. I thought I would do a clean install of Windows XP, reinstall my Microsoft Works, and nothing else. I would like to effectively turn my computer into a word processor. It keeps getting slower and slower, and I am getting blue screen errors again. I received excellent Microsoft support when this happened before, but since my computer is around 13 years old, I don't think it is worth the headache of trying to fix it. I ran the Windows 7 Upgrade Advisor, and my computer would not be able to upgrade. Please, can someone tell me how to make it a word-processor-only machine, without updates or an internet connection? (I already have a new computer with Microsoft Windows 7 Home Premium; that's the computer I use. The old computer just sits there, and once a week or so I run updates on it.) I appreciate your time, thank you!

    Original title: old computer unstable

    http://Windows.Microsoft.com/en-us/Windows-XP/help/Setup/install-Windows-XP

    http://www.WindowsXPHome.WindowsReinstall.com/sp2installxpcdoldhdd/indexfullpage.htm

    http://aumha.NET/viewtopic.php?f=62&t=44636

    These are clean-install XP sites; you can choose which one to use to reinstall XP.

    Once it is installed, you do not have to connect to anything. However, some updates may be required for Works to run; test this by installing Works and seeing whether you get an error message. Other than that, you should be fine.
