Best practices: linking SFDC opportunity data to contacts

Hi - I was hoping someone could give me a best practice to manage and automate the following use cases.

I have tried a number of different things in Eloqua but have hit a roadblock.

USE CASE / PROBLEM:

I need to remove contacts from an education campaign when they are associated with an SFDC opportunity designated as won.
I have figured out my first step, which is to remove a contact from a campaign by referencing a custom object.

However, I need up-to-date opportunity data mapped to a contact for any of this to work.

So my real problem is getting updated data (every 30 minutes or so) into the opportunity custom object table.

What I've tried in order to map updated opportunity data to a contact:

(1) Auto-sync the Opportunity table to a custom object

I was able to bring in opportunities, but the email address/contact data is stored on the Contact Role table, so no email addresses were brought into Eloqua.

(2) Auto-sync the Opportunity Contact Role table to a custom object

This works if I do a full auto-sync, but the auto-sync does not work with the "updated since last successful upload date" filter.

Is it possible to change the filter so that it pulls the data when the Opportunity is changed, and not only when the Contact Role is changed?
And if so, can someone give me direction on how to implement that in Eloqua?

If you know of another way to handle this whole process, please let me know. I appreciate any assistance.

Blake Holden

Hi Kathleen,

Understood. Below is an auto-sync that successfully pulls SFDC Opportunity Contact Role data into Eloqua. I still run a full auto-sync once a week to make sure all the data comes over, but most of the time it is entirely automated.

Blake

Tags: Marketers

Similar Questions

  • Best practices for migrating data tables - please comment.

    I have 5 new tables, already loaded with data, that must be promoted through a change process to a production environment.
    Instead of the DBA just using a data migration tool, they insist that I record and provide scripts for each commit, in proper order, needed both to build the tables and to insert the data from scratch.

    I'm not at all used to such an environment, and it seems much riskier to me to try to reconstruct the objects from scratch when I already have a perfect model, tested and ready.

    They require a lot of documentation, where each step is recorded in a document and used for the deployment.
    I think their goal is that they don't want to rely on backups, but would rather rely on a document that specifies each step needed to recreate everything.

    Please comment on your view of this practice. Thank you!

    I'm not a DBA. I can't even hold a candle to the forum regulars like Srini/Justin/sb.
    Having said that, I'm going to give a slightly different opinion.

    It is great to have (and I paraphrase from various posts):
    deployment documents,
    sign-offs from the groups involved,
    recovery steps,
    source code control,
    repositories,
    "The production environment is sacred. All the risks that must be reduced to a minimum at any price. In my opinion a DBA should NEVER move anything from a development environment directly in a production environment. 'NEVER.'
    etc etc.

    But we cannot generalize that each production system must have these. Each customer is different; everyone has different levels of fault tolerance.

    You wouldn't expect a product design change at a cabinetmaker's shop to go through something "as rigorous as at NASA". Why should it be different for a software change?

    How much rigour you apply is a bit subjective - it depends on company policy, the company's experience with migration disasters (if any) and the corporate culture.

    The OP may be coming from a customer with lax controls, and may be questioning whether the level of rigour at the new customer is worth it.

    At one client (and I'm not kidding), the prod password is apps/apps (after 12 years of being live!). I was appalled at first. But it's a very small business. They got saddled with EBS during the heyday of the .com boom. They use just 2 modules in EBS. If I proposed creating (and charging for) these documents/processes, I would lose the customer.
    My point is that not all places need or want these controls. By trial and error, each company arrives at what works best for it.

    OP:
    You said that you're not used to this type of environment. I recommend that you go with the flow at first. Spend time understanding the value/use/history of these processes. Ask questions if they still seem excessive to you. Keep in mind: this is subjective. And if it comes down to your opinion v/s the existing one, you need either authority OR some means of persuasion (money / sox (intentional typo here) / severed horse heads... whatever works!)

    Sandeep Gandhi

    Edit: typo

    Published by: Sandeep Gandhi, independent Consultant on 25 June 2012 23:37

  • Date comparison best practice question

    Hello

    I'm far from an expert on Oracle and PL/SQL, so forgive me if I get this all wrong.  I'm working on some selection scripts, and the guy who wrote them loves to convert parts of dates to numbers in order to make comparisons.  It seems to me that a TRUNC(date, part) would be much more efficient.  Can someone confirm my thinking here?

    I'm seeing this a lot:

    AND TO_NUMBER(TO_CHAR(scheduleddate, 'YYYY')) = TO_NUMBER(TO_CHAR(SYSDATE - 1, 'YYYY'))

    It seems to me that this would be better:

    TRUNC(scheduleddate, 'YEAR') = TRUNC(SYSDATE, 'YEAR')

    Which is better?

    Thank you!

    Chad

    CSchrieber wrote:

    Hello

    I'm far from an expert on Oracle and PL/SQL, so forgive me if I get this all wrong.  I'm working on some selection scripts, and the guy who wrote them loves to convert parts of dates to numbers in order to make comparisons.  It seems to me that a TRUNC(date, part) would be much more efficient.  Can someone confirm my thinking here?

    I'm seeing this a lot:

    AND TO_NUMBER(TO_CHAR(scheduleddate, 'YYYY')) = TO_NUMBER(TO_CHAR(SYSDATE - 1, 'YYYY'))

    It seems to me that this would be better:

    TRUNC(scheduleddate, 'YEAR') = TRUNC(SYSDATE, 'YEAR')

    Which is better?

    Thank you!

    Chad

    I think you're smarter than the guy who wrote the original code.

    Go and teach others.
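
    To make the comparison concrete, here is a minimal sketch (the table name is illustrative only) showing the TRUNC form next to a plain range predicate, which returns the same rows but also lets an index on scheduleddate be used:

    -- TRUNC on both sides: clearer than TO_NUMBER(TO_CHAR(...)), but it still
    -- wraps the column in a function, so a plain index on scheduleddate is not used.
    SELECT *
    FROM   some_table
    WHERE  TRUNC(scheduleddate, 'YEAR') = TRUNC(SYSDATE - 1, 'YEAR');

    -- Range predicate: same rows (the calendar year of SYSDATE - 1),
    -- but sargable, so an index on scheduleddate can be used.
    SELECT *
    FROM   some_table
    WHERE  scheduleddate >= TRUNC(SYSDATE - 1, 'YEAR')
    AND    scheduleddate <  ADD_MONTHS(TRUNC(SYSDATE - 1, 'YEAR'), 12);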

  • Best practices for creating datastores

    I have 10 iSCSI LUNs (all on the same device), each 1.8 TB in size, which I want to present to ESXi to create datastores.  Are there any recommendations for how I should divide these LUNs up, or should I just make one giant datastore?  Maybe there are performance factors to consider here?

    If I make 10 separate 1.8 TB datastores, I can see a problem down the road when I need to expand a VMDK but can't because there is not enough free space on that datastore; it would be less of a problem if I had one giant datastore to begin with.

    Thank you.

    First of all, it's one of those "how long is a piece of string" type questions.

    It depends, of course, on the number of VMDKs you're going to be running, the available storage, the type of storage, the IO, the type of virtual machines, etc. etc. etc.

    Things to consider are, for example, whether you have storage that deduplicates and whether storage cost is a major factor (and so on).
    Of course, almost always, a reduction in cost is equivalent to a drop in performance.

    In any case, a very loose rule I follow (in most cases) is that I size LUNs somewhere between 400 and 750 GB and rarely (if ever) have more than 30 VMDKs per LUN.

    I almost always redirect this question to the following resources:

    First of all, the configuration maximums:
    http://www.VMware.com/PDF/vSphere4/R40/vsp_40_config_max.PDF

    http://www.gabesvirtualworld.com/?p=68
    http://SearchVMware.TechTarget.com/Tip/0,289483,sid179_gci1350469,00.html
    http://communities.VMware.com/thread/104211
    http://communities.VMware.com/thread/238199
    http://www.yellow-bricks.com/2009/06/23/vmfslun-size/

    (although André's post above covers most of them)

  • Best practices for the Data Pump import process?

    We are trying to copy an existing schema to another, newly created schema. The Data Pump schema export succeeded.

    However, we ran into errors when importing the dump file into the new schema. We remapped the schema and tablespaces, etc.
    Most of the errors occur in PL/SQL... For example, we have views like the one below in the original schema:
    "
    CREATE the VIEW * oldschema.myview * AS
    SELECT col1, col2, col3
    OF * oldschema.mytable *.
    WHERE coll1 = 10
    .....
    "
    Quite a few functions, procedures, packages and triggers also contain "oldschema.mytable" in their DML (insert, select, update), for example.

    We get the following errors in the import log:
    ORA-39082: Object type ALTER_FUNCTION: "TEST"."MYFUNC" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE: "TEST"."MYPROCEDURE" created with compilation warnings
    ORA-39082: Object type VIEW: "TEST"."MYVIEW" created with compilation warnings
    ORA-39082: Object type PACKAGE_BODY: "TEST"."MYPACKAGE" created with compilation warnings
    ORA-39082: Object type TRIGGER: "TEST"."MY_TRIGGER" created with compilation warnings

    Many of the actual errors/invalid objects in the new schema are due to:
    ORA-00942: table or view does not exist

    My questions are:
    1. What can we do to correct these errors?
    2. Is there a better way to do the import in this situation?
    3. Should we update the PL/SQL and recompile in the new schema? Or update it in the original schema first and then export?

    Your help will be greatly appreciated!

    Thank you!

    @?/rdbms/admin/utlrp.sql

    will compile the invalid objects in the database across all schemas. In your case you are remapping from one schema to another, and utlrp will not be able to compile objects that still reference the old schema.

    The impdp SQLFILE option lets you generate the DDL from the export dump, change the schema name globally, and run the resulting script in SQL*Plus. This should resolve most of your errors. If you still see errors after that, then proceed to utlrp.sql.
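
    For example (a sketch only, assuming the remapped target schema is TEST, as in the error messages above), after running the edited SQLFILE script you can recompile the schema and see what is still broken:

    -- Recompile invalid objects in the remapped schema.
    EXEC DBMS_UTILITY.compile_schema(schema => 'TEST', compile_all => FALSE);

    -- List anything that is still invalid (these usually still reference
    -- the old schema name somewhere in their source).
    SELECT object_type, object_name
    FROM   dba_objects
    WHERE  owner  = 'TEST'
    AND    status = 'INVALID'
    ORDER  BY object_type, object_name;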

    -André

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally we would like that integration to be practically real-time, but my findings to date suggest that there is no such option; any integration will involve some kind of schedule.

    Here are the options we have come up with:

    1. "Exotic" CRM integration: using internal events to capture and queue in the queue changes internal (QIP) and allows access to the queue from outside Eloqua SOAP/REST API
    2. Data export: set up a Data Export that is "expected" to run on request and exteernally annex survey via the API SOAP/REST/in bulk
    3. API in bulk: changes in voting that has happened since the previous survey through the API in bulk from Eloqua outside (not sure how this is different from the previous option)

    Two other options which may not work at all and which are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule and configure a cloud connector (if that is possible at all) to notify an MDM endpoint, which would then query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls to have Eloqua push data into our MDM

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event-based callbacks)? (something like outbound messaging in Salesforce)
    3. What limits should we consider with these options? (for example daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the internal queue (QIP) and control which activities get put into it via internal events, as you would with a native integration.

    You would also use the Cloud Connector API to allow you to set up a CRM (or MDM) integration program.

    You would have ID fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would then run at a set interval, extract the queued changes and send them to MDM, and pull the contacts waiting to be sent in place of the cloud connector.

    Unfortunately there isn't really anything like outbound messaging. You can have form submit data sent immediately to a server (it would be a bit like running integration rule collections from form processing steps).

    Cheers,

    Ben

  • Best practice? Storage of large data sets.

    I'm programming a client to access customer address information. The data is delivered from an MSSQL Server by a web service.

    What is the best practice for holding and binding this data - ListFields? String arrays? Or an XML file that gets parsed?

    Any ideas?

    Thank you, hhessel

    These debates come up from time to time. The big question is usually how the data gets onto the phone in the first place, after someone asks why BB does not support databases. There is no magic here - it depends on what you do with the data. For the general considerations, see j2me on sun.com, or JVM issues more generally. We should all get a BB hardware reference too LOL...

    If you really have a lot of data, there are zip libraries, and I often use my own "compression" patterns.

    I personally go with simple types in the persistent store and have built my own b-tree indexing system, which is also persistable and even testable under J2SE. For strings, I store repeated prefixes only once, even though I eventually gave up storing them as a single String. So if I have hundreds of strings that start with "http://www.pinkcat-REC", I don't store that part each time. Before you worry about the overhead of concatenating these back together, that cost gets picked up by the indexes you use to find the string anyway (so yes, you have to spend time concatenating the pieces back together, but the extra space the index needs is low).

  • Best practices for an NFS datastore

    I need to create a datastore on a NAS and connect it to some ESXi 5.0 servers as an NFS datastore.

    It will be used to host less frequently used virtual machines.

    What are the best practices for creating and connecting an NFS datastore, from both a networking and a storage point of view, in order to get the best possible performance without degrading the overall performance of the network?

    Regards

    Marius

    Create a new layer 2 subnet for your NFS datastores and set it up on its own vSwitch with two uplinks in an active/standby teaming configuration. The uplinks should be patched into two distinct physical switches, and the subnet should be non-routed so that NFS traffic cannot reach other parts of your network. The NFS export can be restricted to the IP address of the storage host (the address of the VMkernel port you created for NFS in the first step), or to any address on that subnet. This configuration isolates NFS traffic for performance, and provides security and redundancy. You should also consult your storage vendor's whitepapers for any vendor-specific recommendations.

    The datastores can then be made available to whichever hosts you wish, and you can use Iometer to measure IOPS and throughput to see if it meets your expectations and requirements.

  • Best practices on post-installation steps after an 11.2.0.2.4 ORACLE RAC installation

    I finished an 11.2.0.2 RAC installation and patched it to 11.2.0.2.4. The database has also been created.

    The nodes are Red Hat Linux and the storage is ASM.

    Are there any good articles or links regarding post-installation best practices?

    Thanks in advance.

    Hello

    I also want to know what kind of monitoring scripts I can set up as cron jobs to monitor for or detect any failures or problems.

    To monitor the Cluster (OS level):
    I suggest you use "CHM", a powerful tool that already ships with the Grid Infrastructure product.

    How do you set it up? You don't... just use it.

    Cluster Health Monitor (CHM) FAQ [ID 1328466.1]

    See this example:
    http://levipereira.WordPress.com/2011/07/19/monitoring-the-cluster-in-real-time-with-chm-cluster-health-monitor/

    To monitor the database:
    USING THE PERFORMANCE TUNING ADVISORS AND MANAGEABILITY FEATURES: AWR, ASH, ADDM and SQL Tuning Advisor [ID 276103.1]

    The purpose of this article is to illustrate how to use the new 10g manageability features to diagnose
    and resolve performance issues in the Oracle database.
    Oracle 10g provides powerful tools to help the DBA identify and resolve performance issues
    without the hassle of complex statistical analysis and extensive reports.
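
    As a trivial starting point for a cron'd SQL*Plus check (a minimal sketch only; what you alert on is up to you), the standard GV$ views already tell you whether all instances and disk groups are healthy:

    -- Are all RAC instances up and open?
    SELECT inst_id, instance_name, host_name, status, database_status
    FROM   gv$instance;

    -- Are the ASM disk groups mounted, and how much space is left?
    SELECT inst_id, name, state, total_mb, free_mb
    FROM   gv$asm_diskgroup;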

    Hope this helps,
    Levi Pereira

    Published by: Levi Pereira on November 3, 2011 23:40

  • Data warehousing question/best practice

    I have been given the task of copying a few tables from our production database to a data warehousing database on a nightly (once per day) basis. The number of tables will grow over time; currently it is 10. I am interested not only in getting the task done, but also in best practices. Here's what I came up with:

    (1) Drop the table in the destination database.
    (2) Re-create the destination table from the script provided by SQL Developer when you click the "SQL" tab while viewing the table.
    (3) INSERT INTO the destination table from the source table using a database link (see the sketch after this list). Note: I'm not aware of any columns in the tables themselves that could be used to filter only the added/deleted/modified rows.
    (4) After importing the data, create the indexes and primary keys.
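
    For step (3), the insert over the database link is essentially just the following (a sketch; the table name is taken from the question and the link name prod_link is hypothetical):

    -- Copy all rows from the source table (over the database link)
    -- into the already re-created destination table.
    INSERT INTO patient
    SELECT * FROM patient@prod_link;
    COMMIT;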

    Questions:
    (1) SQL Developer included the following lines when generating the table creation script:

    <table creation DDL>
    followed by
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"

    It generated a snippet like this for the table, the primary key and each index.
    Is it necessary to include these in my script if they are all default values? For example, one of the indexes gets scripted as follows:

    CREATING INDEXES "XYZ". "' PATIENT_INDEX ' ON 'XYZ '. "' PATIENT ' (the ' Patient')
    -the following four lines do I need?
    PCTFREE, INITRANS 10 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE (INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645)
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 DEFAULT USER_TABLES)
    TABLESPACE "TBLSPC_IGROW".

    (2) If anyone has advice on best practices for warehousing data like this, I'm very eager to learn from your experience.

    Thanks in advance,

    Carl

    I strongly suggest not dropping and re-creating the tables every day.

    The simplest option would be to create a materialized view on the destination database that queries the source database, and do a complete refresh of the materialized view every evening. You could then go further and create a materialized view log on the source table and do an incremental (fast) refresh of the materialized view instead.

    You can schedule the refresh of the materialized view either in the definition of the materialized view itself, as a separate job, or by creating a refresh group and adding one or more materialized views to it.

    Justin
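
    A minimal sketch of that approach (the database link name prod_link is hypothetical; the PATIENT table is borrowed from the question above):

    -- On the source (production) database: a materialized view log enables
    -- fast (incremental) refresh; fast refresh needs a primary key on the table.
    CREATE MATERIALIZED VIEW LOG ON xyz.patient;

    -- On the destination (warehouse) database: refresh every night at 02:00
    -- over the database link.
    CREATE MATERIALIZED VIEW patient_mv
      REFRESH FAST
      START WITH SYSDATE
      NEXT  TRUNC(SYSDATE) + 1 + 2/24
      AS SELECT * FROM xyz.patient@prod_link;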

  • What are the best practices for creating a time-only data type, not a Date

    Hi gurus,

    We use a 12c DB and we have a requirement to create a column with a time-only datatype. Could someone please describe the best practices for creating this?

    I would greatly appreciate any ideas and suggestions.

    Kind regards
    Ranjan

    Hello

    How do you intend to use the time?

    If you are going to combine it with DATEs or TIMESTAMPs from another source, then an INTERVAL DAY TO SECOND or a NUMBER may be better.

    Will you need to perform arithmetic operations on the time, for example increase the time by 20%, or take an average?   If so, NUMBER would be preferable.

    Are you just going to display it?  In that case, INTERVAL DAY TO SECOND, DATE or VARCHAR2 would all work.

    As Blushadow said, it depends.
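
    For illustration, a minimal sketch of the INTERVAL DAY TO SECOND option (the table and column names are made up), which combines naturally with DATE values:

    -- A time of day stored as an interval rather than as part of a DATE.
    CREATE TABLE shift_schedule (
      shift_name  VARCHAR2(30),
      start_time  INTERVAL DAY TO SECOND
    );

    INSERT INTO shift_schedule VALUES ('Day shift', INTERVAL '08:30:00' HOUR TO SECOND);

    -- Combine with a DATE from another source to get a full point in time.
    SELECT DATE '2015-01-15' + start_time AS shift_start
    FROM   shift_schedule;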

  • Best practices for obtaining all activities for a Contact

    Hello

    I'm new to the Eloqua API and wonder what the best practice is for downloading a list of all the activities for a given contact.

    Thank you

    Hi Mike,

    For activities in general, a Bulk API 2.0 activity export will be the best way to go. Docs are here: http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCBB/index.html

    But it can be a complex process to wrap your head around if you are new to the Eloqua API. So if you're in a pinch, don't care about associating these activities with campaigns, and just need to pull a few contacts' activities, you can resort to using REST API calls.

    The activity calls are visible (from the Firebug or Chrome console) if you open any contact record and go to the "Activity Log" tab. If you leave it set to all activities, it will trigger a dozen or more calls, or you can choose an individual activity type from the drop-down to inspect that call in more detail.

    Best regards

    Bojan

  • vCenter/vSphere best practices documents or links

    Hi all

    I'll be doing some work at a client site next month. It seems to me that there are many things we can do in vCenter for vSphere.

    Are there any best practices docs or links for vSphere?

    Thank you

    Here is a blog with lots of links to VMware best practices: VMware vSphere best practices - VMwaremine - Artur Krzywdzinski | Nutanix

    If you want something more specific, let me know.

  • I need to disconnect vSphere 4 datastores in a vSphere 5 environment. I need to know the best practices

    I need to disconnect vSphere 4 datastores in a vSphere 5 environment. I need to know the best practices.

    http://KB.VMware.com/kb/2004605 has the correct procedure to use.

  • Best practices for moving 1 of 2 VMDKs to a different datastore

    I have several virtual machines that write a good amount of data on a daily basis.  These virtual machines have two VMDKs; one where the operating system lives and one where the data is written.  The virtual machines are currently configured to store both on the same datastore.  There is a growing need to increase the size of the VMDK where the data is stored, and so I would like to put these on a separate datastore.  What is the best practice for taking an existing virtual machine and moving just one VMDK to another datastore?

    If you want to split the VMDKs (hard disks) onto separate datastores, just use Storage vMotion and the "Advanced" option.
