Practice schedules!

Hello

Can someone help me with the question below?

Q: for a given emp, view ename, sal and sal_status

low: 0 to 2000

medium: 2001 to 3000

top > 3000

I don't know how to print the sal_status. I wrote the subprogram below, but I don't think it's correct, and it gives lots of error messages too.

DECLARE

   veno    NUMBER := &empno;
   vname   emp.ename%TYPE;
   vsal    emp.sal%TYPE;
   vstatus VARCHAR2(6);

BEGIN

   SELECT ename, sal
     INTO vname, vsal
     FROM emp
    WHERE empno = veno;

   IF vsal <= 2000 THEN
      vstatus := 'Low';
   ELSIF vsal BETWEEN 2001 AND 3000 THEN
      vstatus := 'Medium';
   ELSE
      vstatus := 'High';
   END IF;

   dbms_output.put_line(vname || ' ' || vsal || ' ' || vstatus);

END;
/


Help, please.

Hello

working in PL/SQL when a simple SQL statement is enough is a very bad idea. You can simply use a SELECT statement:

SELECT ename,
       sal,
       CASE
          WHEN sal <= 2000 THEN 'LOW'
          WHEN sal <= 3000 THEN 'MEDIUM'
          ELSE 'HIGH'
       END AS sal_status
  FROM emp
 WHERE empno = &given_empno;

(Note: assuming that sal is not null)
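For reference, a run against the classic SCOTT demo data (an assumption on my part; your EMP contents may differ) might look like:

```sql
-- Substituting &given_empno = 7839 (KING, sal 5000, in the SCOTT demo schema)
SELECT ename,
       sal,
       CASE
          WHEN sal <= 2000 THEN 'LOW'
          WHEN sal <= 3000 THEN 'MEDIUM'
          ELSE 'HIGH'
       END AS sal_status
  FROM emp
 WHERE empno = 7839;

-- ENAME      SAL   SAL_STATUS
-- KING       5000  HIGH
```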

Best regards

Bruno Vroman.

Tags: Database

Similar Questions

  • Need best-practice process for scheduling RMAN backups

    Hi all

    I would like suggestions on the following for RMAN backups:

    Details:

    Database: 11gR2, 3 TB in size, on ASM - a DW database.

    Suggestions on:

    (1) What kind of backup plan - incremental backups, and block change tracking as well?

    (2) How much space to allocate for backups - on ASM or plain disk?

    (3) Anything else - please suggest.

    Thank you.

    Thanks Hemant, this is very useful for me - the explanation is clear and understandable.

    Thank you!
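    As a generic illustration only (an assumption of mine, not taken from the reply), a weekly level 0 plus daily level 1 incremental strategy with block change tracking is a common starting point for a large DW:

```sql
-- SQL*Plus: enable block change tracking so that level 1 backups
-- scan only changed blocks (the +DATA file location is an assumption)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATA/bct.f';

-- RMAN: weekly full (level 0) incremental backup
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;

-- RMAN: daily differential (level 1) incremental backup
BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
```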

  • Windows XP Task Scheduler service does not start

    I have a problem with the Task Scheduler service in XP SP3 (Home Edition). The service is configured to start automatically, but it hangs during startup and the status is permanently 'Starting'. In this state, I can't start or stop the service manually.  If I disable the service completely, restart, set it to Manual and click Start, I get the following error:

    Could not start the Task Scheduler service on Local Computer.
    Error 1053: The service did not respond to the start or control request in a timely fashion.

    I am running under an administrator account with a password, so that shouldn't be the problem.
    The services control panel and the Event Log service work correctly (Autostart value).

    I need to get this service running in order to enable prefetch on this PC. Currently the prefetch folder is totally empty; the .ini file is missing, as are all the .pf files. I tried to rebuild the .ini with the help of the "rundll32.exe advapi32.dll,ProcessIdleTasks" command at the command prompt, but this doesn't do anything, probably because of the lack of the Scheduler service. In the registry, EnablePrefetcher is set to 3, monitoring both applications and boot.

    Task Scheduler and prefetch used to work very well - I think a registry setting is damaged or missing. Any suggestions on how to fix this? Thanks in advance.

    It's curious.  The Task Scheduler log file should be in the Windows folder.

    I uploaded a copy of the file you need to my SkyDrive (everyone has a SkyDrive for file sharing).

    Here is the link to my SkyDrive and you can get the file you need here:

    http://CID-6a7e789cab1d6f39.SkyDrive.live.com/redir.aspx?RESID=6A7E789CAB1D6F39! 311
    When you see the files available for download, you will not see the file extensions (.exe, .dll, .cpl, .sys, etc.), but when you download them they will have the right extension.

    You have to put the downloaded files in the correct folders on your system.

    I don't know where to put the mstask.ini file - windows\system32 maybe?  There is no .ini file in the inf folder, so I don't know, and you may need to try different places and keep reading...

    I don't think I've ever reinstalled TS itself (though I have other things), so here are general instructions on how to do it: if the installation stops and asks you for a file, leave the installation waiting and search your HARD drive for the file it wants (be sure to include hidden and system files in the search). If/when you find it, return to the waiting installation and fill in the blank, or browse to the folder where the file is (point the installation at the folder so it can find the file it wants), and continue to the next prompt.  You may need to do this for multiple files until you get all the way through, and you will find that some are simply not on your HARD drive and need to be downloaded.  Don't cancel the installation and start over each time - just work through it one file at a time, pointing the installation at the folder containing each file it needs.

    Maybe later I will reinstall my TS service for practice, so I can see which files it needs and give some better advice about this.

    Comodo may also interfere (I do not use it, but it's possible).  Have you checked whether your TS will start with Comodo disabled?

    You should also consider the malware, then you might want to start with this:

    Download, install, update and do a full scan with these free malware detection programs:
    Malwarebytes (MBAM): http://malwarebytes.org/
    SUPERAntiSpyware: (SAS): http://www.superantispyware.com/
    They can be uninstalled later if you wish.
  • Unable to schedule Disk Defragmenter in Vista - always set to 'never'

    When I call up the Disk Defragmenter screen, it still reads that my next scheduled time is 'never'.  Even if I reset the schedule to a specific time, when I close it and call it back up, the next scheduled time is back to 'never'. What is wrong?

    There are several reasons why the built-in function may not work correctly:
     
    0. There is malware on the system. Solution: run an anti-virus scan and also a spyware check.
     
    1. The disk is too full (you need at least 15% free space, sometimes 20%). Solution: delete unnecessary files and programs until you have more than 20% free space.
     
    2. The disk is damaged and must be repaired. Solution:
     
    a. Open "My Computer" and right-click the drive that you want to defrag.
    b. Select 'Properties' and click 'Tools'.
    c. Select "Check now" to check the drive for errors.
    d. Select both options and click 'Start'.
     
    (This can take time and can restart the PC so that it can do the check at boot time. Be patient and let it complete).
     
    3. Disk Defragmenter may be corrupted, needing a System Restore to fix it. Solution:
     
    a. Start - All Programs - Accessories - System Tools - System Restore (click to open);
    b. Select a good restore point from before you started having problems with Disk Defragmenter.
    c. Start the restore process and let it finish. (Give the restore a descriptive name such as "Defrag Repair".)
     
    4. Other running programs are interrupting the built-in Defragmenter. Solution:

    a. close all running programs.

    b. If you think there may be some programs running in the background:

    c. press Ctrl + Alt + Delete and

    d. Select "start Task Manager".

    e. under the 'Applications' tab, you will find a list of all running applications - you can close these by selecting "end task."
     
    5. If still no luck, try disabling the screen saver when you run Disk Defrag (you should leave the system alone while the built-in defragmentation utility is running).
     
    6. If still no go, try to run the defragmenter in Safe Mode. If it runs there, something is interfering with it, and tracking down the interference may not be easy. Note: some versions of Win 7 disable the ability to run the built-in Defragmenter in Safe Mode. If disabled, it will say so when you try to start it in Safe Mode.
     
    7. Disk Defragmenter may no longer be on the system, or may be damaged such that it needs a re-install. Solution:
     
    a. Open the 'Start' menu.
    b. Type '%windir%\inf' in the 'Search' box and press 'Enter'.
    c. In the window that opens, find the file named "dfrg.inf".
    d. Right-click "dfrg.inf" and select "Install". (Note: apparently the specified file is not present on some versions of Win 7 - go to step 8.)

    8. Most problems will be solved by #6 above, but if not, and if your Windows installation is otherwise working well, you should consider downloading a free trial of a commercial defrag tool rather than taking drastic measures to restore the built-in one. Third-party programs are more robust, and many work in the background, so you can use your PC during defragmentation.
     
    Most third-party programs offer a fully functional free trial (the better ones for 30 days). Installing one of them disables the built-in defragmenter, and if you later decide to uninstall it, removal restores the built-in function, often repaired in the process: it's worth a try...
     
    Here's a recent Top Ten Reviews side-by-side comparison of the best available defragmentation programs:
     
    http://Disk-Defragmenter-software-review.TopTenReviews.com/ 
     
    The gold medal winner is the only defrag program that prevents fragmentation (see the review).
     
    Why is it good to prevent fragmentation?

    Basically, it significantly increases the life expectancy of your hard drive (especially since you don't have to leave the system on all night - a much 'greener' practice), not to mention it also increases system performance.

    Find out what the review has to say on this.

    Good luck to you!
     
    Bill R TechSpec

    PS: If still no go, repair may involve editing the registry, a repair installation of Windows, or possibly re-installing Windows itself. These aren't simple solutions. If it comes to that, post back and instructions can be provided.

  • Best practices for Master Data Management (MDM) integration

    I'm working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like the integration to be practically real-time, but my findings to date suggest there is no such option. Any integration will involve some kind of schedule.

    Here are the options that we had:

    1. "Exotic" CRM integration: use internal events to capture changes and queue them in the internal queue (QIP), and access the queue from outside Eloqua via the SOAP/REST API
    2. Data export: set up a Data Export that is scheduled to run on request, and externally poll for the results via the SOAP/REST/Bulk API
    3. Bulk API: poll for changes that have happened since the previous poll through the Bulk API from outside Eloqua (not sure how this is different from the previous option)

    Two other options which may not work at all and which are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule, and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls to have Eloqua push data into our MDM

    Related questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event-based callbacks)? (something like outbound messaging in Salesforce)
    3. What limits should we consider for these options? (for example, daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get validated into it by internal events, as you would with a native integration.

    You would also use the cloud connector API to let you set up a CRM (or MDM) integration program.

    You would add identification fields to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a set interval, extract the QIP changes to send to MDM, and pull the contacts waiting to be sent, in place of the cloud connector.

    There isn't really anything like outbound messaging, unfortunately.  You can have form submits send data to a server immediately (it would be a bit like integration rule collections running from form processing steps).

    Cheers,

    Ben

  • Maintenance schedule policies

    Hello

    We have a vROps 6.1 environment in place that we are building out as an enterprise-wide monitoring system.

    We have a rolling Windows patching schedule that we have defined maintenance windows for.

    I'm not finding good documentation of the custom policies that define the scheduling.  Also, I don't see a way to confirm that objects are assigned to a maintenance schedule.

    Following best practices (?), as far as I can gather from the documentation:

    vSphere Tag -> Custom Group -> Custom Policy -> Maintenance Schedule

    I can confirm that the custom policy I created is listed as primary for the desired VMs.

    The policy itself is a child policy that overrides the Virtual Machine time-interval definition.

    But when I export the policy XML, the schedule is not listed, and the reference is <TimeSettings allHoursAndDays="true" dataRange="30" />

    (Screenshot attached: Capture.PNG)

    If you have experience successfully deploying a maintenance schedule through a policy, I'd appreciate hearing about it.

    It started to work yesterday, with no change to the policy or the environment.

    It may have been kicked into action by restarting our vROps cluster.

  • ESXi4 installation best practices: RAID, stripe size, VDs, partitions?

    Hi all

    I have a Dell PowerEdge PE2970 server with PERC6/i and PERC6/E RAID controllers,

    and a Dell PowerVault MD1000 storage array.

    The PERC6/i drives 6 x 150 GB, 10,000 rpm SATA drives (560 GB RAID 6), and

    the PERC6/E drives 15 x 1 TB, 5,400 rpm SATA drives (12 TB RAID 6).

    This combination is used to provide iSCSI and NFS services for a film and music production environment.

    I plan to create 3 x 100 GB, 1 x 200 GB, and 1 x 60 GB virtual disks from the 560 GB RAID 6 array.

    60 GB to install VMware ESXI4 and StorMagic SvSAN.

    100 GB for virtual machines (Linux, Windows, NFS, AD, backup, servers etc.)

    100 GB for Audio iSCSI (work Pro Tools disk)

    100 GB for video iSCSI (work Pro Tools disk)

    200 GB for iSCSI Virtual Instruments (used by Pro tools)

    and 6 x 2 TB volumes for storage, backups, etc.

    How should I create these virtual disks when I create the RAID arrays?

    What stripe size should I use?

    How about the 60 GB 'system' VD for ESXi4 and SvSAN, and the 100 GB 'virtual machine' VD for the other servers?

    Should I do it like this, or should I create one 160 GB VD for all the servers and the ESXi installation?

    Or should I create a separate VD for each?

    I mean, for example: a 1 GB VD for ESXi4, a 25 GB VD (two partitions of 5 GB and 20 GB) for SvSAN, an 80 GB VD (two partitions of 40 GB each) for Windows Server,

    a 5 GB VD for an Audio Linux NFS server, a 100 GB VD for iSCSI, etc. With this solution, I could choose a different stripe size for each VD.

    I know this isn't the best solution, and in the future I could replace all the 10,000 rpm drives with fast 32 GB SSDs (128 GB RAID 6)

    for the system and servers, and have a second MD1000 array with dedicated 10,000 rpm disks for iSCSI. But for now, this is what I have to work with.

    All suggestions and advice are welcome.

    Regards

    Petzu

    We create a 5120 MB VD for the ESXi installation.  5121 actually, as the PERC BIOS rounds up.

    Then we can reinstall ESXi without touching anything else.

    The virtual machines are limited by their maximum vmdk size. Otherwise, you could create just the minimum number of datastores.

    Keep it simple and straightforward unless you have a specific reason to diverge.

    Let me paraphrase what a VMware dev mentioned (it was about changing the VMFS default block size, and I like to think it also applies to the VMware scheduler, a great piece of programming): "We optimize it, so you can just go with the default value and know it's going to do the right thing."

    The default stripe size is a good compromise, optimized to work under most workloads; different stripe sizes can have radically different performance depending on workload characteristics.  The default works well, and Windows 2008 and later VMs are properly aligned on 64 K.

    Dell has a ton of technical documents comparing the performance of RAID levels.  We've been going back and forth comparing RAID performance for a few months now, and I still waver.

    If it needs to be super fast I pick RAID 10; for lots of space, RAID 5; for more reliability at the cost of more space, RAID 6; and sometimes I move from 10 to 6.

    The Dell tech said that most people use RAID 5, because disks are so reliable.

    We use RAID 6 for reliability on volumes over 12 TB, because of the unrecoverable error rate during a 12 TB RAID rebuild.

    http://m.ZDNet.com/blog/storage/why-RAID-6-stops-working-in-2019/805 (which assumes a URE rate of 10^14, and I think enterprise drives are rated at 10^15 URE).

    The RAID controller's battery-backed cache alleviates some of the supposed RAID 6 over RAID 5 performance drop.

    In your case I would use RAID 5 for the operational datastores, for performance, and RAID 6 for the backup datastore.

    In addition, a synthetic benchmark doesn't always tell you the results you will get with a real application in an operating system.

    When we first virtualized MySQL, based on our iometer benchmarks, we thought performance would be an order of magnitude worse. In practice, it was good enough that we went hog wild and virtualized many others.  You should always be aware of the performance characteristics of your application.

    For example, we have two distinct MySQL replication pairs, and each of them gets its own 5-disk volume on the same MD1000.

    Mixing heterogeneous workloads on the same volume - specifically, file servers with lots of random file I/O alongside VMs with, for example, sequential backup access - will hurt database performance.  The ESXi 4.1 Storage I/O Control feature is designed to mitigate this.

    The funny thing about spindles and RAID controllers is that sometimes a lot of slower spindles will outperform fewer higher-speed spindles.

    If you think in aggregate terms, reads across the MD1000 outperform the faster small volume.

    The battery learn cycle runs every 90 days or so and turns off writeback caching, which hurts performance. It must run because the cache battery degrades over time, and the controller needs to know when the battery can no longer last 24 hours.  It determines this by measuring the time the battery takes to recharge.

    We have never noticed this or needed to adjust the OpenManage defaults across our server farm; I just thought I would mention it since we were on the subject.

    Install OpenManage for ESXi.  Disable the cache on the individual disks, as that cache is not battery-backed.

    Don't forget to document your config, because you won't remember what you did when the time comes to do a recovery.

  • Best practices for automating ghettoVCBg2 from cron

    Hello world!

    I set up a vMA instance for scheduling backups with ghettoVCBg2 to a SIN store. Everything works like a charm from the command line; I use vi-fastpass for authentication, and backups complete just fine.

    However, I would like to invoke the script from cron, and I got stuck. Since vifp is designed to be run only from the command line and, as I read, is not supposed to work from a script, it seems the only possibility would be to create a dedicated backup user with administrator privileges and store the username and password in the shell script. I'm not happy to do that. I searched through the forums but couldn't find any simple solution.

    Any ideas for best practices?

    Thank you

    eliott100

    Actually, that's incorrect. The script relies on the fact that the ESX or ESXi hosts are managed by vi-fastpass... but when you run the script, it does not use the vifpinit command to connect. It accesses credentials via the vi-fastpass library modules rather than vifpinit, which, as you have noticed, you cannot run non-interactively. Therefore it can be scheduled via cron: you do not have to run the script interactively, just set it up in your crontab. Please take a look at the documentation for more information.

    =========================================================================

    William Lam

    VMware vExpert 2009

    Scripts for VMware ESX/ESXi and resources at: http://engineering.ucsb.edu/~duonglt/vmware/

    Twitter: @lamw

    repository scripts vGhetto

    Introduction to the vMA (tips/tricks)

    Getting started with vSphere SDK for Perl

    VMware Code Central - Scripts/code samples for developers and administrators

    VMware developer community

    If you find this information useful, please award points for "correct" or "helpful".

  • Best practices for updating folios

    Hello

    I have a couple of folios that must be updated every month.  Could someone please help me with best practices for scheduling the release of the new folio and ensuring that it replaces the old one in the viewer?

    To begin with, in the Folio Builder panel, should I create a new folio every month, or can I remove the items from last month's folio, change its name, and keep reusing it?

    I'm also a little confused about how to schedule when a new folio becomes available in Folio Producer.

    I would really appreciate hearing how the whole process should be managed, from the Folio Builder panel to the digitalpublishing.acrobat.com Folio Producer.

    Thank you!

    You create a completely new folio for January and publish it in Folio Producer when you want it to go live for your readers. It doesn't replace the previous edition; it lives next to it. Your readers will be able to see both issues in your viewer and can choose which they want to download and read.

    You can leave the new folio unpublished in Folio Producer until the day you want it to go live, and then click Publish. You can also use our new advance publication date feature to schedule a date up to 7 days ahead for the folio to automatically go live. You will see this option when you click the Publish button on the folio.

    Neil

  • Certification hands-on requirement

    I am attempting the OCP 11g Database certification, and I am very confused as to which training format will satisfy the requirement.

    Per http://education.oracle.com/pls/web_prod-plq-dad/show_desc.redirect?dc=D50079GC20, only the
    instructor-led in-class or instructor-led online formats of this course will meet the hands-on Certification requirement. Self Study CD-ROM courses and the Knowledge Center DO NOT meet the hands-on requirement.
    This means only
    Schedule/Purchase   Training Format       Price      Duration  Course Materials  Instruction Language
    View Schedule       Classroom Training    US$ 3,250  5 Days    English           English
    View Schedule       Live Virtual Class    US$ 3,250  5 Days    English
    will satisfy the requirement?

    And those below will not?
    Schedule/Purchase   Training Format       Price      Duration        Course Materials  Instruction Language
    Purchase Online     Self Study CD-ROM     US$ 1,650  not applicable  English
    Purchase Online     Training on Demand    US$ 3,250  5 Days          English
    Can someone please check? Or suggest whom I should contact?

    Of the formats you listed, these will meet the training requirement:
    Classroom training
    Live virtual class
    Training On Demand (this is a new training format; the course should be listed in the list of approved courses: http://bit.ly/wyajDv)

    The Self Study CD-ROM does not satisfy the training requirement.

    Kind regards
    Brandye Barrington
    Certification Forum Moderator

  • Best practices for infrastructure shutdown

    I would like to know the best practice for completely shutting down all ESX hosts in our environment for scheduled maintenance.   Once all virtual machines have been powered off and I'm ready to shut down the ESX hosts, should I first put the hosts in maintenance mode, or simply shut them down?   I have four ESX 3.5 servers in a cluster with HA enabled.

    If all virtual machines are already powered off and you really want to shut down the hosts, then you don't have to put them in maintenance mode.  Personally, I don't do it all the time, but most of the time I try to make sure the host is in maintenance mode before I shut it down, just to make sure I have stopped all the VMs and that nothing powers on without telling me when it comes back up.

    • Kyle

  • Backup validation best practices, 11gR2 on Windows

    Hi all

    I have just read through some guides on checking for different types of corruption in my database. It seems that having DB_BLOCK_CHECKSUM at the TYPICAL value catches a large part of physical corruption and will alert you to the fact that it has occurred. In addition, by default RMAN does its own physical block checking. Logical corruption, however, does not seem to be checked automatically unless CHECK LOGICAL is added to the RMAN command. There are also various VALIDATE commands that can be executed against different objects.

    My question is really: what is recommended for checking for block corruption? Do people not bother regularly checking this and just let Oracle manage it itself? Is it recommended to use the CHECK LOGICAL clause in RMAN (even though it is not added by default when you set up backup jobs through OEM)? Or do people schedule jobs that run a VALIDATE command and report its output on a regular basis?

    Thank you very much

    Use of the CHECK LOGICAL clause is regarded as best practice by Oracle Support, at least according to
    NOTE: 388422.1  Top 10 backup and restore best practices

    (referenced in http://blogs.oracle.com/db/entry/master_note_for_oracle_recovery_manager_rman_doc_id_11164841).
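    As an illustration of the clause (command forms only; consult the note itself for specifics):

```sql
-- RMAN: include logical corruption checking in the backup itself
BACKUP CHECK LOGICAL DATABASE PLUS ARCHIVELOG;

-- RMAN: standalone validation, no backup written (11g syntax)
VALIDATE CHECK LOGICAL DATABASE;

-- Any corrupt blocks found are recorded in v$database_block_corruption
```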

  • AppAssure best practices for storing a machine image

    I work in a regulated environment where our machines are validated before being put into service.  Once they are in service, changes to the machines are controlled and executed through a change request process.  We use AppAssure to back up critical computers, and with the 'always on' incremental backups it works very well for file/folder-level restores when something is deleted, moved, etc.

    Our management process requires that once a machine is validated, I take an image of it and store that image away, so that if there is a hardware failure the validated image can be restored and the machine is returned essentially to its original validated state.  In addition to having the image of the machine as soon as it is put into service, I also need to back up the machine on a regular basis in order to restore files, folders, databases, etc. on that machine.

    So my question is: how do I achieve this with AppAssure?  My first thought is to take the base backup of the machine, then archive it, and then let AppAssure perform the scheduled backups.  If I need to restore the computer to the base image, I restore the archive and then restore from it.  Is this a feasible solution and practice?  If not, please tell me the best way to accomplish what I want to do.

    Thank you

    Hi ENCODuane,

    You can carry out the desired plan of action in the following way:

    1. Protect the agent with the name "[Hostname\IP_Base]".

    2. Take the base image.

    3. Remove the agent from protection, but keep the recovery points.

    4. After these steps, you will have the base recovery point for the agent in the 'Recovery Points Only' section.

    5. Go to the agent machine, open the registry, and change the Agent ID (changing a single character will be enough):  HKEY_LOCAL_MACHINE\SOFTWARE\AppRecovery\Agent\AgentId\Id

    for example, from

    94887f95-f7ee-42eb-AC06-777d9dd8073f

    TO

    84887f95-f7ee-42eb-AC06-777d9dd8073f

    6. Restart the agent service.

    7. From this point, the Dell AppAssure core will recognize the machine as a different agent.

    8. Protect the machine with the name "[Hostname\IP]".

    After these steps, you will have the base image for the machine under 'Recovery Points Only' (with the name "[Hostname\IP_Base]", which will not be changed by rollups and transfers) and the protected machine with the name "[Hostname\IP]", where transfers will be made based on the configured policy.

  • Timing of closed captions

    Hello

    Does anyone know if there is a way to adjust the timing of closed captions in Advanced Audio Management for captions you have already added to the slide text, without having to click the + and retype them?

    The start-time and end-time boxes appear, but they are not clickable and I can't figure out how to adjust them.

    Very annoying.

    Thank you!

    Aimee

    Hi Aimee

    See if this little video helps with that.

    Click here to see

    Cheers... Rick

    Useful and practical links

    Captivate wish form/Bug report form

    Certified Adobe Captivate training

    SorcerStone blog

    Captivate eBooks

  • Essbase for reporting - is this best practice?

    Hello
    We have installed HFM v11 with DRM. We want to be able to schedule and produce reports using SmartView. A consultant suggested that we build an Essbase cube in order to get drill-down capability into eliminations and other details. What is considered "best practice"? Can we not use SmartView to report against and drill into the HFM SQL repository?
    Thanks in advance

    This looks like a lot of work/maintenance.

    (1) You can easily schedule reports using Hyperion Financial Reporting and the Batch Scheduler in Workspace.
    (2) You can use Extended Analytics for larger reporting requirements and query the data via an ODBC connection with Access, etc.
    (3) You can use the SmartView VBA functions to auto-refresh reports on open and save a copy to a different location - you can then run the macros via the Windows Task Scheduler.
