Replicate EQL in a group with several pools

I'm wondering how, or even whether, you can do this.

I'm replicating to a group of older SANs in our disaster recovery site, which have a mix of 10k and 7.2k disks. Yes, I know it's DR, but I split them: the three 10k SANs in one pool and the two 7.2k SANs in another pool.

How do I replicate prod volume 1 to the 10k pool and volume 2 to the 7.2k pool?

Whenever I looked at delegated space, I could not see how to assign it specifically to one pool or the other, or whether it would still be selectable during replication.

So is this possible WITHOUT splitting the pools into their own groups?

Anyone?

Hello, Tsz109...

With PS Series firmware v8 and later, you can assign delegated space in several pools, and all of that space can be used by the replication partner group. Delegated space can be assigned in one, some, or all of the pools. Prior to firmware v8, delegated space was restricted to a single storage pool for a particular replication partner. Also, with PS Series firmware v8 and later, you can configure a mapping between primary and secondary pools to control replica set placement.

See the following for more information:

en.Community.Dell.com/.../19861448

Please let me know if I can provide additional information.

Have a great day.

Tags: Dell Products

Similar Questions

  • Question about share values with several resource pools

    Our vSphere environment has several resource pools at the root, each set to the 'Normal' shares value, and the Resources tab shows each resource pool with a '% of shares' value of 3%.

    However, some resource pools have 50+ VMs and others have only 1 or 2.

    Can someone confirm that this is wrong, and that a resource pool with more VMs should have a proportionately higher shares value?  This assumes, of course, that all the resource pools are of equal priority (which makes me wonder why we use resource pools at all... but that's another issue).

    The reason I ask is that the 'worst case allocation' for memory and CPU is much higher on the resource pools with many virtual machines, so there is obviously something in the resource allocation algorithm that is aware of the total number of virtual machines in each pool. The resource pools are set to expandable, and I'm assuming this 'worst case allocation' would drop considerably as the cluster approached its capacity, but I wanted to check before starting to change things...

    Welcome to the Forums - you are right to worry about this situation - see http://www.yellow-bricks.com/2010/02/22/the-resource-pool-priority-pie-paradox/ for a description - but don't worry, Duncan also has a solution - http://www.yellow-bricks.com/2010/02/24/custom-shares-on-a-resource-pools-scripted/

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.
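A quick arithmetic sketch of why this worries the poster (the numbers are illustrative, based on the 3% figure from the question, not computed by vSphere itself): sibling pools with equal 'Normal' shares each get the same slice of the parent, so the per-VM slice shrinks as a pool fills up.

```python
# Each sibling resource pool at "Normal" shares gets an equal slice of the
# parent (about 3% each in the thread). A worst-case even split of that slice
# across the pool's VMs shows the imbalance the poster describes.
def per_vm_share(pool_share_pct: float, vm_count: int) -> float:
    """Even split of a pool's share percentage across its VMs."""
    return pool_share_pct / vm_count

print(per_vm_share(3.0, 50))  # crowded pool: 0.06% per VM
print(per_vm_share(3.0, 1))   # single-VM pool: 3.0% per VM
```

This is the "resource pool priority pie paradox" the linked article describes: under contention, the lone VM can be entitled to 50x the resources of each VM in the crowded pool.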

  • Unable to create a cache group with several tables

    Hello

    I need to create a cache group with several tables that are referentially related to each other.

    There are 2 parent (root) tables and 1 child table...
    While trying this, it gives me an error:

    8222: multiple parent tables found

    Is it not possible to use several root tables in a single cache group? Is there another way to do it?

    The script I used is:

    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP TEST.CG1
    FROM TEST.ROOT1 (
        ID             VARCHAR2(8 BYTE) PRIMARY KEY,
        NAME           VARCHAR2(50 BYTE),
        DESCRIPTION    VARCHAR2(255 BYTE),
        POLICYTYPEID   VARCHAR2(7 BYTE)),
    TEST.ROOT2 (
        PARAMTYPEID    VARCHAR2(5 BYTE) PRIMARY KEY,
        PARAMETERUSAGE VARCHAR2(1 BYTE),
        NAME           VARCHAR2(25 BYTE),
        DESCRIPTION    VARCHAR2(255 BYTE)),
    TEST.CHILD1 (
        PARAMDETAILID  NUMBER(20) PRIMARY KEY,
        ID             VARCHAR2(8 BYTE),
        PARAMETERUSAGE VARCHAR2(1 BYTE),
        DISPLAYVALUE   VARCHAR2(255 BYTE),
        OPERATORID     VARCHAR2(5 BYTE),
        VENDORID       NUMBER(20),
        FOREIGN KEY (ID) REFERENCES TEST.ROOT1 (ID),
        FOREIGN KEY (PARAMETERUSAGE) REFERENCES TEST.ROOT2 (PARAMETERUSAGE));

    You can't have multiple root tables within a cache group. The requirements for the tables in a cache group are very strict: there must be exactly one top-level (root) table, and there may optionally be several child tables. Child tables must be linked through foreign keys to the root table or to a child table above them in the hierarchy.

    The solution for your case is to put one of the root tables and the child table in one cache group, and the other root table in a separate cache group. If you do this, you must take care of a few things:

    1. You cannot define foreign keys between tables in different cache groups in TimesTen (the keys may exist in Oracle), so the application must enforce referential integrity itself for those cases.

    2. If you load data into one cache group (using LOAD CACHE GROUP or 'load on demand'), TimesTen will not automatically load the corresponding data into the other cache group (since it doesn't know about the relationship). The application must load the data into the other cache group explicitly.

    There is no problem with transactional consistency when changes are pushed to Oracle. TimesTen correctly maintains and enforces transactional consistency regardless of how the tables are arranged in cache groups.
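The split described above could be sketched like this (a sketch only: column lists are shortened, and the CHILD1-to-ROOT2 relationship is now enforced by the application rather than by a foreign key):

```sql
-- Cache group 1: one root table plus its child
CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP TEST.CG1
FROM TEST.ROOT1 (
    ID   VARCHAR2(8 BYTE) PRIMARY KEY,
    NAME VARCHAR2(50 BYTE)),            -- remaining ROOT1 columns as before
TEST.CHILD1 (
    PARAMDETAILID  NUMBER(20) PRIMARY KEY,
    ID             VARCHAR2(8 BYTE),
    PARAMETERUSAGE VARCHAR2(1 BYTE),    -- validated against ROOT2 by the app
    FOREIGN KEY (ID) REFERENCES TEST.ROOT1 (ID));

-- Cache group 2: the second root table on its own
CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP TEST.CG2
FROM TEST.ROOT2 (
    PARAMTYPEID    VARCHAR2(5 BYTE) PRIMARY KEY,
    PARAMETERUSAGE VARCHAR2(1 BYTE));   -- remaining ROOT2 columns as before
```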

    Chris

  • Photos constantly changes the order of photos in a slideshow, no matter how many times the user moves the photos back into the correct order. Example: bathroom plans eventually grouped with kitchen pictures! I have found no way to stop this

    Apple Photos 1.3 serious problems - how can I SOLVE all these problems?

    (1) It crashes without rhyme or reason, no matter where I am in the workflow.

    (2) Photos will not shut down every time, even after several days of waiting.

    (3) Photos frequently picks the wrong picture with the EDIT picture option: I get a different picture than the one I clicked on, sometimes one that is 100 pictures away in the row.

    (4) Photos constantly changes the order of photos in a slideshow, no matter how many times the user moves the photos back into the correct order. Example: bathroom plans eventually grouped with kitchen pictures! I have found no way to stop this weird behavior! Is there a way to stop it? If I drag a photo back past some 7 other photos in the slideshow, after a minute or less it reappears where it was. !@#$%$#

    (5) If you make any CHANGES to a photo, it often changes the appearance of every slideshow using that picture. So you lose all the work done to fix your slideshow configuration, even the order of photos I had already put back where they should be. !@#$$#@

    (6) Photos often identifies lampshades and long door handles as people's faces.

    (7) Photos makes bad decisions when it comes to brightness, contrast and colors, with no workarounds other than using other software, whereas with iPhoto there were a lot of workarounds. I could go on, but will spare whoever might be reading this.

    I am up to date on all updates for my Mac. If anyone has REAL answers then please spill the beans, but as far as I can tell, the only truth is that Apple has rolled out an inferior product to replace an exceptional product, called iPhoto, which does not work on my new 27" 5K iMac.   If I had known, I would have chosen another computer; I use iPhoto to process fifty to sixty thousand photos in a given year, and I use iPhoto to make hundreds of slideshows from them.  Are there plugins for Photos 1.3? I ask because I see where there could be add-ons, but I can't find any.

    Apple made a serious mistake by turning its back on iPhoto and tens of millions of loyal users.

    Thanks in advance to anyone brave enough to tackle this job.

    James

    First, back up your Photos library, then hold down the Command and Option keys while launching Photos and repair your database - you have a corrupted database.

    LN

  • Creating security groups with specific permissions in Active Directory - Server 2003

    Hello

    I need to create several different security groups for about 7 users, each with different access rights, but all users will access the same main folder and some of the same subfolders. I created a group with some of the users, but they appear to have access to all the folders in a particular subfolder, when I only want them to have access to some of the folders in the selected subfolder.

    I guess what I'm asking is: how do I create different security groups with specific permissions for each group, ensuring that users in these groups only have access to certain folders?

    I don't know if I have explained myself properly - I have certainly confused myself - but I hope someone can point me in the right direction to solve this problem.

    Thanks in advance

    Jah

    Jah,

    For assistance, please ask for help in the appropriate Microsoft TechNet Windows Server Forum.

    Thank you.

  • ASA EzVPN with several remote subnets

    Hello world

    I have the challenge of setting up EasyVPN between an ASA 5520 and an ASA 5505 (with the ASA 5505 as the VPN client), with several networks behind the ASA 5505.

    Access from the network directly connected to the 5505 to the central site works very well.

    But the second network segment (which is behind a router on the directly connected network) cannot connect to the central site.

    I guess I need to specify some sort of ACL to be able to do that.

    BTW, we do not use split tunneling, because all traffic goes through the tunnel (no local internet access).

    The layout looks like this

    (--LAN--)-5520---5505-(--LAN1--)-ROUTER-(--LAN2--)-(WAN)-

    LAN1-to-LAN traffic works great through the EzVPN tunnel.

    LAN2-to-LAN traffic does not work through the EzVPN tunnel.

    Here is the configuration used so far (apart from the normal NAT, object-groups and crypto ISAKMP stuff):

    Client:

    vpnclient server 10.x.x.x
    vpnclient mode network-extension-mode
    vpnclient vpngroup EzVPN password *
    vpnclient username user1 password *
    vpnclient enable
    crypto ipsec df-bit clear-df outside

    Server:

    group-policy EzVPN internal
    group-policy EzVPN attributes
     nem enable
     password-storage enable
    tunnel-group EzVPN type ipsec-ra
    tunnel-group EzVPN general-attributes
     default-group-policy EzVPN
    tunnel-group EzVPN ipsec-attributes
     pre-shared-key *
    username user1 password *

    I hope you can help

    Best regards

    Jarle

    Unfortunately, this is not supported on the ASA platform. With EasyVPN on the ASA, only directly connected networks can be advertised. To accomplish what you want, you need to configure a static IPsec LAN-to-LAN tunnel and advertise the local networks via the interesting-traffic ACL. Alternatively, you can use an IOS device as the client, which does not have this "multiple subnet" limitation with EasyVPN.

    http://www.Cisco.com/en/us/docs/iOS/sec_secure_connectivity/configuration/guide/sec_easy_vpn_rem.html#wp1098057
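For reference, the static LAN-to-LAN alternative on the central ASA could be sketched like this (everything here is hypothetical: 10.1.1.0/24 stands for the central LAN, 192.168.1.0/24 and 192.168.2.0/24 for LAN1/LAN2, and X.X.X.X for the 5505's outside address):

```
! Interesting-traffic ACL advertises BOTH remote subnets, including LAN2 behind the router
access-list L2L-TRAFFIC extended permit ip 10.1.1.0 255.255.255.0 192.168.1.0 255.255.255.0
access-list L2L-TRAFFIC extended permit ip 10.1.1.0 255.255.255.0 192.168.2.0 255.255.255.0
crypto ipsec transform-set ESP-AES-SHA esp-aes esp-sha-hmac
crypto map OUTSIDE-MAP 10 match address L2L-TRAFFIC
crypto map OUTSIDE-MAP 10 set peer X.X.X.X
crypto map OUTSIDE-MAP 10 set transform-set ESP-AES-SHA
crypto map OUTSIDE-MAP interface outside
tunnel-group X.X.X.X type ipsec-l2l
tunnel-group X.X.X.X ipsec-attributes
 pre-shared-key *
```

The 5505 side needs the mirror-image crypto ACL, and the router between LAN1 and LAN2 must route LAN2's traffic toward the 5505.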

  • Tunnel-group mapping with ASA RA-VPN

    Dear all!

    I have an RA-VPN setup with a Cisco VPN Concentrator and a Cisco Secure ACS 4.2. I map users to VPN tunnel-groups via the RADIUS class 25 attribute (OU=...), and it works fine. I migrated this solution from the VPN Concentrator to an ASA 5520 with the 8.0(4) software image, and I just can't get this tunnel-group mapping to work, although the ACS configuration is the same (of course) and I think the firewall configuration is correct as well.

    All tunnel groups are internal and authentication works everywhere, but the tunnel-group mapping is not working.

    Can someone email me an example configuration for ASA to check?

    Is there a special command (e.g. "tunnel-group-map enable ...") that I should use?

    Thanks for the replies!

    Bye,

    Miki

    The pools and IP addresses can either be set on the group policy with the correct value, or you can use the ACS with either a static IP address on the user or with the pool on the group or user; that attribute will be passed in the RADIUS Access-Accept as a Framed-IP-Address value.

  • Creating different groups with remote access VPN

    Hello world

    A while back, I set up a VPN for remote access to my network with an ASA 5510.

    I have access to all of my internal LAN using my VPN.

    But I want to set up a VPN group in the CLI for a different group of users who access a different server or a different network on my LAN.

    Example: computer group - access to the 10.70.5.x network

    Consultant group - access to the 10.70.10.x network

    I need to know how I can do this, and if you can, give me some example configuration to accomplish it.

    Here is my configuration:

    ASA Version 8.0(2)
    !
    hostname ASA-Vidrul
    domain-name vidrul-ao.com
    enable password 8Ry2YjIyt7RRXU24 encrypted
    names
    dns-guard
    !
    interface Ethernet0/0
     nameif outside
     security-level 0
     ip address X.X.X.X 255.255.255.X
    !
    interface Ethernet0/1
     nameif inside
     security-level 100
     ip address X.X.X.X 255.255.255.X
    !
    interface Ethernet0/2
     shutdown
     no nameif
     no security-level
     no ip address
    !
    interface Ethernet0/3
     shutdown
     no nameif
     no security-level
     no ip address
    !
    interface Management0/0
     description Port_Device_Management
     nameif management
     security-level 99
     ip address X.X.X.X 255.255.255.X
     management-only
    !
    passwd 2KFQnbNIdI.2KYOU encrypted
    ftp mode passive
    dns server-group DefaultDNS
     domain-name vidrul-ao.com
    access-list 100 extended permit ip any any
    access-list 100 extended permit icmp any any echo
    access-list 100 extended permit icmp any any echo-reply
    access-list vpn-vidrul_splitTunnelAcl standard permit 10.70.1.0 255.255.255.0
    access-list vpn-vidrul_splitTunnelAcl standard permit 10.70.99.0 255.255.255.0
    access-list inside_nat0_outbound extended permit ip any 10.70.255.0 255.255.255.0
    pager lines 24
    mtu outside 1500
    mtu inside 1500
    mtu management 1500
    ip local pool clientvpngroup 10.70.255.100-10.70.255.200 mask 255.255.255.0
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-602.bin
    no asdm history enable
    arp timeout 14400
    global (outside) 1 interface
    nat (inside) 0 access-list inside_nat0_outbound
    nat (inside) 1 10.70.0.0 255.255.0.0
    access-group 100 in interface inside

    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout uauth 0:05:00 absolute
    dynamic-access-policy-record DfltAccessPolicy
    aaa-server 10.70.99.10 protocol radius
    aaa authentication enable console LOCAL
    aaa authentication ssh console LOCAL
    aaa authorization command LOCAL
    http server enable
    http 192.168.1.2 255.255.255.255 management
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
    crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
    crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set pfs
    crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-DES-SHA ESP-DES-MD5
    crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
    crypto map outside_map interface outside
    crypto isakmp enable outside
    crypto isakmp policy 10
     authentication pre-share
     encryption des
     hash md5
     group 2
     lifetime 86400
    crypto isakmp nat-traversal 30
    telnet 0.0.0.0 0.0.0.0 inside
    telnet timeout 5
    ssh 0.0.0.0 0.0.0.0 outside
    ssh timeout 5
    console timeout 0
    management-access outside
    dhcpd address 192.168.1.2-192.168.1.5 management
    dhcpd enable management
    !
    threat-detection basic-threat
    threat-detection statistics access-list
    !
    class-map inspection_default
     match default-inspection-traffic
    class-map block-url-class
    class-map imblock
     match any
    class-map P2P
     match port tcp eq www
    !
    !
    policy-map type inspect dns migrated_dns_map_1
     parameters
      message-length maximum 512
    policy-map global_policy
     class inspection_default
      inspect dns migrated_dns_map_1
      inspect ftp
      inspect h323 h225
      inspect h323 ras
      inspect netbios
      inspect rsh
      inspect rtsp
      inspect skinny
      inspect esmtp
      inspect sqlnet
      inspect sunrpc
      inspect tftp
      inspect sip
      inspect xdmcp
    policy-map IM_P2P
     class imblock
     class P2P
    !
    service-policy global_policy global
    group-policy vpn-vidrul internal
    group-policy vpn-vidrul attributes
     vpn-tunnel-protocol IPSec
     split-tunnel-policy tunnelspecified
     split-tunnel-network-list value vpn-vidrul_splitTunnelAcl
     default-domain value vidrul-ao.com
    username test password 274Y4GRAbNElaCoV encrypted privilege 0
    username admin password bTpUzgLxalekyhxQ encrypted privilege 15
    username admin attributes
     vpn-group-policy vpn-vidrul
    username suporte password zjQEaX/fm0NjEp4k encrypted privilege 15
    tunnel-group vidrul-vpn type remote-access
    tunnel-group vidrul-vpn general-attributes
     address-pool clientvpngroup
     default-group-policy vpn-vidrul
    tunnel-group vidrul-vpn ipsec-attributes
     pre-shared-key *
    prompt hostname context
    Cryptochecksum:d84e64c87cc5b263c84567e22400591c
    : end

    What you need to do is mimic the existing tunnel-group and group-policy configuration, and configure access to the specific networks you need.

    Currently, you have configured the following:

    group-policy vpn-vidrul internal
    group-policy vpn-vidrul attributes
     vpn-tunnel-protocol IPSec
     split-tunnel-policy tunnelspecified
     split-tunnel-network-list value vpn-vidrul_splitTunnelAcl
     default-domain value vidrul-ao.com

    tunnel-group vidrul-vpn type remote-access
    tunnel-group vidrul-vpn general-attributes
     address-pool clientvpngroup
     default-group-policy vpn-vidrul
    tunnel-group vidrul-vpn ipsec-attributes
     pre-shared-key *

    What you need is to create a new group-policy and a new tunnel-group, and configure the split-tunnel ACL to allow access to the specific networks required.

    The users must then connect with the new group name and the new pre-shared key (password).

    Hope that helps.
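The answer above can be sketched concretely for the consultant group from the question (the group-policy, ACL and tunnel-group names here are hypothetical, and the pre-shared key is elided):

```
access-list vpn-consultant_splitTunnelAcl standard permit 10.70.10.0 255.255.255.0
!
group-policy vpn-consultant internal
group-policy vpn-consultant attributes
 vpn-tunnel-protocol IPSec
 split-tunnel-policy tunnelspecified
 split-tunnel-network-list value vpn-consultant_splitTunnelAcl
!
tunnel-group consultant-vpn type remote-access
tunnel-group consultant-vpn general-attributes
 address-pool clientvpngroup
 default-group-policy vpn-consultant
tunnel-group consultant-vpn ipsec-attributes
 pre-shared-key *
```

Consultants then connect with the group name consultant-vpn and its own pre-shared key. Note that a split-tunnel ACL only controls what is tunneled; to actually block the consultant group from 10.70.5.x you would also apply a vpn-filter ACL in the group policy (or interface ACLs).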

  • How to subtract a grunge texture from a layer with several objects?

    Hi all

    I've been struggling with this for hours. Would be very grateful for help.

    I'm trying to apply a vintage texture to some vector illustrations.

    In the screenshot, the artwork I created is in the "work" layer. I tried to simplify the artwork as much as possible by using Unite, Merge, Expand, etc. I wanted to make it a single piece of artwork, but it's not as simple as it seems.

    When I try to perform a subtraction on the "work" layer with the "texture" layer, basically everything disappears.

    I suspect it does not work as expected because my artwork layer is still composed of several objects and not one unified object.

    The vintage texture superimposed on the artwork looks good, but I need this subtraction in vector form because I intend to have it printed on a shirt in one color, so I can't have the holes filled with color.

    Any help would be greatly appreciated.

    Thank you!

    Ray

    (Screenshots attached: Screenshot 2016-01-28 20.10.50.png, Screenshot 2016-01-28 20.16.20.png)

    How can I turn the art into a single compound path?

    First, unite all the paths with the first button in the Pathfinder palette. Then choose Object > Compound Path > Make (or press Command-8).

    But don't do it this way. Use an opacity mask.

    If I use the grunge texture as an opacity mask, do I still need to turn the art into a single compound path first, or can I apply it to a layer with several objects as it is now?

    No, you don't; you can keep your design fully editable. You just need to group the design first, so that the opacity mask applies to all of it.

    Opacity masks are easier to use if you uncheck "Clip" after creating them. Then every part of the mask that is black will be transparent, every part that is white will be opaque, and anything in shades of gray will be partially transparent - just like in Photoshop. Right now your grunge texture is white, so you would need to change it to black after making your opacity mask.

  • Some built-in panels are grouped with my plugin panel in InDesign CC

    My PanelList settings look like this:

    resource LocaleIndex (kSDKDefPanelResourceID)
    {
        kViewRsrcType,
        {
            kWildFS, k_Wild, kSDKDefPanelResourceID + index_enUS
        }
    };

    /* Definition of the PanelList.
    */
    resource PanelList (kSDKDefPanelResourceID)
    {
        {
            // 1st panel in the list
            kSDKDefPanelResourceID,      // Resource ID for this panel (use SDK default rsrc ID)
            kMyPluginID,                 // ID of the plugin that holds this panel
            kIsResizable,
            kMTPanelWidgetActionID,      // Action ID to show/hide the panel
            "MyPlugin:Test",             // Appears in the Window list.
            "",                          // Alternate menu path of the form "Main:Foo" if you want your palette menu item in a second place
            0.0,                         // Alternate menu position, to determine menu order
            kMyImageRsrcID, kMyPluginID, // Rsrc ID, Plugin ID for a PNG icon resource to use for this palette (when it is not active)
            c_Panel
        }
    };

    resource MTPanelWidget (kSDKDefPanelResourceID + index_enUS)
    {
        __FILE__, __LINE__,    // Macro location
        kMTPanelWidgetID,      // WidgetID
        kPMRsrcID_None,        // RsrcID
        kBindAll,              // Binding (0 = none)
        0, 0, 320, 386,        // Frame: left, top, right, bottom
        kTrue, kTrue,          // Visible, Enabled
        kTrue,                 // Erase background
        kInterfacePaletteFill, // Erase to color
        kMTPanelTitleKey,      // Name of panel
        {
        }
        "MyPlugin"             // Name of contextual menu (internal)
    };

    The problem is that when I click Window > MyPlugin > Test, the built-in Liquid Layout, Article and Cross-References panels are also opened, grouped with my panel. I don't understand what I'm doing wrong. The situation is the same with all the examples in the SDK that have panels. Help, please. 10x in advance.

    My previous post said that panelMgr->GetPanelFromWidgetID(kMyPanelWidgetID) does not find the panel when it is called in the initializer.

    I'm now deep-searching the trees from the GetRootPaletteNode paletteRef, using PaletteRefUtils methods. For kTabPanelContainerType, I use panelMgr->GetPanelFromPaletteContainer() to check the widget ID; fortunately that already works for the callers. For other types, I just descend into the children.

    Three more steps to move the found paletteRef into a new floating window:

    columnRef = PaletteRefUtils::NewFloatingTabGroupContainerPalette(); // kTabPaneType

    groupRef = PaletteRefUtils::NewTabGroupPalette(columnRef); // kTabGroupType

    PaletteRefUtils::ReparentPalette(myPanelRef, groupRef, PaletteRef());

    If you have several panels, iterate steps 2 + 3 to create a group (row) per panel, or just step 3 if they should all end up stacked.

    Edit: I found out why GetPanelFromWidgetID did not work: apparently initializer priority is considered per plugin, so one of my panels in a different plugin just was not registered yet. Forget all those deep-search tips.

  • Import an Illustrator file with several layers and sublayers as a composition, keeping all layers and sublayers

    I drew a map of the country with several layers and sublayers, including the streets, cities, railroads...

    Now, I want to create a template for my colleagues in After Effects.

    To do this, I want to import the AI file into After Effects, keeping all layers and sublayers in compositions and subcompositions.

    When I do, I get the first two layers correctly, and the others just as one merged layer.

    What can I do to get each of the Illustrator layers as a layer in After Effects?

    Best regards, Raphael

    After Effects imports only the first level of Illustrator layers. If you have groups or paths below that, you must use Release to Layers. The sublayers will then be stacked in order as layers. Then you need to move them to the top level.

    The procedure is to select a layer in the Illustrator Layers panel, then, without selecting any specific element in the layer, click the menu in the upper right corner and choose Release to Layers. All the elements in that layer will be turned into layers. Now select them all and drag them to the top, above the original layer. This will leave the original layer empty and put all the elements in new layers.

    Here is a tutorial I made a bunch of years ago that uses this technique to turn a blend into a morph.

    Morphing with Adobe Illustrator. I hope this helps.

  • [8i] grouping with tricky conditions (follow-up)

    I am posting this as a follow-up question to:
    [8i] grouping with tricky conditions

    This is a repeat of my version information:
    Still stuck on an old database a little longer, and I'm trying to get some information out of it...

    BANNER

    --------------------------------------------------------------------------------
    Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
    PL/SQL Release 8.1.7.2.0 - Production
    CORE 8.1.7.0.0-Production
    AMT for HP - UX: 8.1.7.2.0 - Production Version
    NLSRTL Version 3.4.1.0.0 - Production

    Now for the sample data. I took one order from my real data set and cut a few columns, to illustrate how the previous solution didn't quite work. My real data set still has thousands of orders similar to this one.
    CREATE TABLE     test_data
    (     item_id     CHAR(25)
    ,     ord_id     CHAR(10)
    ,     step_id     CHAR(4)
    ,     station     CHAR(5)
    ,     act_hrs     NUMBER(11,8)
    ,     q_comp     NUMBER(13,4)
    ,     q_scrap     NUMBER(13,4)
    );
    
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0005','S509',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0010','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0020','A501',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0026','S011',0.58,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0030','S970',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0040','S970',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0050','S003',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0055','S600',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0060','Z108',6.94,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0070','Z108',7,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0080','Z310',4.02,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0085','Z409',2.17,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0090','S500',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0095','S502',1.63,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0110','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0112','S011',0.15,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0117','S903',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0118','S900',0,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0119','S950',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0120','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0140','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0145','S950',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0150','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0160','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0170','S900',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0220','S902',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0230','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0240','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0250','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0260','S006',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0270','S012',0.95,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0280','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0285','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0290','Z426',1.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0300','Z426',2.07,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0305','Z426',1.23,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0310','Z402',3.97,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0315','Z308',8.09,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0410','Z409',4.83,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0430','S500',3.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0435','S502',0.43,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0450','S002',0.35,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0460','S001',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0470','Z000',2.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0476','S011',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0478','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0480','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0490','S003',1.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0500','S500',1.37,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0530','B000',0.28,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0536','S011',0.65,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0538','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0540','S923',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0560','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0565','S001',0.85,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0570','S012',2.15,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0575','S509',0,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0580','B000',3.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0590','S011',0.27,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0600','S510',0,9,0);
    Instead of grouping all sequential steps with station 'OUTPR', I am grouping all sequential steps whose station matches 'S9%'; here is the solution changed accordingly:
    SELECT       item_id
    ,        ord_id
    ,       MIN (step_id)          AS step_id
    ,       station
    ,       SUM (act_hrs)          AS act_hrs
    ,       MIN (q_comp)          AS q_comp
    ,       SUM (q_scrap)          AS q_scrap
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station LIKE 'S9%'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station LIKE 'S9%'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            step_id
    ;
    If you just run the subquery that computes grp_id, you can see that it sometimes assigns the same group number to two steps that are not side by side. For example, both step 285 and step 480 are assigned group 32...

    I don't know if it's because my orders have many more steps than the orders in the sample I provided, or what...

    I tried this version too (replacing all the 'S9%' station names with 'OUTPR'):
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0005','S509',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0010','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0020','A501',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0026','S011',0.58,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0030','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0040','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0050','S003',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0055','S600',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0060','Z108',6.94,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0070','Z108',7,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0080','Z310',4.02,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0085','Z409',2.17,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0090','S500',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0095','S502',1.63,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0110','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0112','S011',0.15,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0117','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0118','OUTPR',0,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0119','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0120','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0140','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0145','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0150','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0160','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0170','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0220','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0230','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0240','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0250','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0260','S006',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0270','S012',0.95,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0280','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0285','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0290','Z426',1.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0300','Z426',2.07,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0305','Z426',1.23,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0310','Z402',3.97,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0315','Z308',8.09,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0410','Z409',4.83,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0430','S500',3.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0435','S502',0.43,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0450','S002',0.35,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0460','S001',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0470','Z000',2.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0476','S011',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0478','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0480','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0490','S003',1.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0500','S500',1.37,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0530','B000',0.28,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0536','S011',0.65,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0538','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0540','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0560','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0565','S001',0.85,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0570','S012',2.15,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0575','S509',0,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0580','B000',3.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0590','S011',0.27,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0600','S510',0,9,0);
    
    
    SELECT       item_id
    ,        ord_id
    ,       MIN (step_id)          AS step_id
    ,       station
    ,       SUM (act_hrs)          AS act_hrs
    ,       MIN (q_comp)          AS q_comp
    ,       SUM (q_scrap)          AS q_scrap
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station = 'OUTPR'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station = 'OUTPR'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            step_id
    ;
    and it shows the same problem.

    Help?

    Hello

    I'm glad that you understood the problem.

    Here's a little explanation of the fixed-difference approach. I may refer to this page later, so I will explain some things you obviously already understand, but I hope you will find it helpful.
    Your problem has the additional feature that, depending on the station, some rows can never combine into larger groups. For now, let's greatly simplify the problem. Given the CREATE TABLE statement you posted and this data:

    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0010', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0011', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0012', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0140', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0170', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0175', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0200', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0205', 'S906');
    

    Let's say that we want this output:

                       FIRST LAST
                       _STEP _STEP
    ITEM_ID ORD_ID     _ID   _ID   STATION  CNT
    ------- ---------- ----- ----- ------- ----
    abc-123 0001715683 0010  0010  Z417       1
    abc-123 0001715683 0011  0140  S906       3
    abc-123 0001715683 0170  0175  Z417       2
    abc-123 0001715683 0200  0205  S906       2
    

    Where each output row represents a contiguous set of rows with the same item_id, ord_id and station. "Contiguous" is determined by step_id: the rows with step_id = '0200' and step_id = '0205' are contiguous in this sample data because there are no step_ids between '0200' and '0205'.
    The expected results include the lowest and highest step_id in each group, and the total number of original rows in the group.

    GROUP BY (usually) collapses the results of a query into fewer rows. One output row may come from 1, 2, 3, or any number of original rows. So this is obviously a GROUP BY problem: we sometimes want several original rows to be combined into one output row.

    GROUP BY assumes that, just by looking at a row, you can tell which group it belongs to; looking at any 2 rows, you can always tell whether or not they belong to the same group. This isn't quite the case in this problem. For example, consider these rows:

    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0140', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0200', 'S906');
    

    Do these 2 rows belong to the same group or not? We cannot tell. Looking at just these 2 rows, all we can say is that they *might* belong to the same group, since they have the same item_id, ord_id and station. It is true that members of the same group will always have the same item_id, ord_id and station; if any of these columns differs between two rows, we can be sure they belong to different groups. But if they are identical, we still cannot be certain they are in the same group, because item_id, ord_id and station only tell part of the story. A group is not just a bunch of rows that have the same item_id, ord_id and station: a group is defined as a sequence of *adjacent* rows that have these columns in common. Before we can do the GROUP BY, we need to use analytic functions to tell whether two rows are in the same contiguous streak. Once we know that, we can store that information in a new column (which I called grp_id), and then GROUP BY all 4 columns: item_id, ord_id, station and grp_id.

    First, let's recognize a basic difference among the 3 columns in the table that will be included in the GROUP BY clause: item_id, ord_id and station.
    Item_id and ord_id always identify separate worlds. There is never any point in comparing rows with different item_ids or ord_ids to each other. Different item_ids never interact; different ord_ids have nothing to do with each other. We'll call item_id and ord_id 'separate planet' columns. Separate planets do not touch each other.
    Station is different. Sometimes it makes sense to compare rows with different stations. For example, this problem is based on questions such as "do these adjacent rows have the same station or not?" We'll call station a 'separate country' column. There is certainly a difference between separate countries, but countries do affect each other.

    The most intuitive way to identify groups of contiguous rows with the same station is to use LAG or LEAD to look at adjacent rows. That can certainly do the job, but there happens to be a better way, using ROW_NUMBER.
    Using ROW_NUMBER, we can take the irregular step_id ordering and turn it into a nice, regular counter, as shown in the r_num1 column below:

                                      R_             R_ GRP
    ITEM_ID ORD_ID     STEP STATION NUM1 S906 Z417 NUM2 _ID
    ------- ---------- ---- ------- ---- ---- ---- ---- ---
    abc-123 0001715683 0010 Z417       1         1    1   0
    abc-123 0001715683 0011 S906       2    1         1   1
    abc-123 0001715683 0012 S906       3    2         2   1
    abc-123 0001715683 0140 S906       4    3         3   1
    abc-123 0001715683 0170 Z417       5         2    2   3
    abc-123 0001715683 0175 Z417       6         3    3   3
    abc-123 0001715683 0200 S906       7    4         4   3
    abc-123 0001715683 0205 S906       8    5         5   3
    

    We can also assign consecutive integers to the rows within each station, as shown in the two columns I called S906 and Z417.
    Notice how r_num1 increases by 1 from each row to the next.
    When there is a streak of several consecutive S906 rows (for example, step_ids '0011' through '0140'), the S906 counter also increases by 1 from each row to the next. Therefore, for the duration of a streak, the difference between r_num1 and s906 is constant. For the 3 rows of the first streak, this difference happens to be 1. Another streak of contiguous S906s starts at step_id = '0200'; the difference between r_num1 and s906 for that whole streak is 3. This difference is what I called grp_id.
    The actual numbers have little meaning, and, as you noticed, streaks for different stations can happen to have the same grp_id. (There don't happen to be examples of that in this small sample data set.) However, two rows have the same grp_id and station if and only if they belong to the same streak.
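    To make the arithmetic concrete, here is a small Python illustration (not from the original thread) that computes r_num1, r_num2 and grp_id for the 8 sample rows above, exactly the way the two ROW_NUMBER calls do:

    ```python
    # Fixed-difference sketch for the 8 sample rows above (a single
    # item_id and ord_id, so the "separate planet" columns are omitted).
    # r_num1 numbers all rows in step_id order; r_num2 numbers rows
    # within each station; grp_id = r_num1 - r_num2 stays constant
    # across a contiguous streak of rows at the same station.
    rows = [('0010', 'Z417'), ('0011', 'S906'), ('0012', 'S906'),
            ('0140', 'S906'), ('0170', 'Z417'), ('0175', 'Z417'),
            ('0200', 'S906'), ('0205', 'S906')]   # (step_id, station), sorted

    per_station = {}   # running r_num2 counter per station
    result = []
    for r_num1, (step_id, station) in enumerate(rows, start=1):
        per_station[station] = per_station.get(station, 0) + 1
        r_num2 = per_station[station]
        result.append((step_id, station, r_num1, r_num2, r_num1 - r_num2))

    for row in result:
        print(row)
    ```

    The grp_id values come out as 0, 1, 1, 1, 3, 3, 3, 3, matching the table: the second Z417 streak and the second S906 streak happen to share grp_id 3, which is harmless because a streak is identified by the pair (station, grp_id).
    
    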

    Here is the query that produced that output:

    SELECT    item_id
    ,        ord_id
    ,        step_id
    ,       station
    ,       r_num1
    ,       CASE WHEN station = 'S906' THEN r_num2 END     AS s906
    ,       CASE WHEN station = 'Z417' THEN r_num2 END     AS Z417
    ,       r_num2
    ,       grp_id
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    ORDER BY  item_id
    ,            ord_id
    ,       step_id
    ;
    

    Here are a few things to note:
    All the analytic ORDER BY clauses are the same. In most problems, there is only one ordering scheme that matters.
    The analytic PARTITION BY clauses include the 'separate planet' columns, item_id and ord_id.
    The analytic PARTITION BY clauses that compute r_num2 (and the second term of grp_id) also include the 'separate country' column, station.

    To get the results we want in the end, we add a GROUP BY clause to the main query. Once again, this includes the 'separate planet' columns, the 'separate country' column, and the 'fixed difference' column, grp_id.
    Eliminating the columns that were included just to make the output easier to understand, we get:

    SELECT    item_id
    ,        ord_id
    ,        MIN (step_id)          AS first_step_id
    ,       MAX (step_id)          AS last_step_id
    ,       station
    ,       COUNT (*)          AS cnt
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )     AS grp_id
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            ord_id
    ,       first_step_id
    ;
    

    This produces the output displayed much earlier in this message.
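    If you want to experiment with this query outside Oracle, here is a self-contained sketch that runs an equivalent statement in SQLite (window functions require SQLite 3.25 or later, which recent Python builds bundle); the table and column names follow the thread:

    ```python
    import sqlite3

    # Run the fixed-difference query against the 8 sample rows in an
    # in-memory SQLite database.
    con = sqlite3.connect(':memory:')
    con.execute("""CREATE TABLE test_data
                   (item_id TEXT, ord_id TEXT, step_id TEXT, station TEXT)""")
    con.executemany(
        "INSERT INTO test_data VALUES ('abc-123', '0001715683', ?, ?)",
        [('0010', 'Z417'), ('0011', 'S906'), ('0012', 'S906'), ('0140', 'S906'),
         ('0170', 'Z417'), ('0175', 'Z417'), ('0200', 'S906'), ('0205', 'S906')])

    rows = con.execute("""
        SELECT   item_id, ord_id,
                 MIN (step_id)  AS first_step_id,
                 MAX (step_id)  AS last_step_id,
                 station,
                 COUNT (*)      AS cnt
        FROM    (SELECT test_data.*,
                        ROW_NUMBER () OVER (PARTITION BY item_id, ord_id
                                            ORDER BY step_id)
                      - ROW_NUMBER () OVER (PARTITION BY item_id, ord_id, station
                                            ORDER BY step_id)  AS grp_id
                 FROM   test_data)
        GROUP BY item_id, ord_id, station, grp_id
        ORDER BY item_id, ord_id, first_step_id
    """).fetchall()

    for row in rows:
        print(row)
    ```

    The 4 output rows match the summary shown earlier, e.g. the S906 streak from '0011' through '0140' comes back as one row with cnt = 3.
    
    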

    This example shows the plain fixed-difference technique. Your specific problem is complicated a little in that you must use a CASE expression based on station rather than station itself.
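    As an illustration of that CASE idea (a sketch with made-up rows, not from the thread): when the CASE expression returns NULL for the stations that may merge and step_id for everything else, every non-matching row gets r_num2 = 1 in its own single-row partition, so it can never join a multi-row streak:

    ```python
    from collections import defaultdict
    from itertools import groupby

    # Sketch of the CASE-based variant: rows at an 'OUTPR' station may
    # merge with adjacent 'OUTPR' rows; every other row is its own group
    # because its partition key (its own step_id) forces r_num2 = 1.
    rows = [('0010', 'Z417'), ('0020', 'OUTPR'), ('0030', 'OUTPR'),
            ('0040', 'S003'), ('0050', 'OUTPR')]   # sorted by step_id

    counters = defaultdict(int)   # r_num2 counter per partition key
    keyed = []
    for r_num1, (step_id, station) in enumerate(rows, start=1):
        part_key = None if station == 'OUTPR' else step_id   # the CASE expression
        counters[part_key] += 1
        grp_id = r_num1 - counters[part_key]
        keyed.append((station, grp_id, step_id))

    # GROUP BY station, grp_id (rows are already in step_id order, and
    # members of the same group are adjacent, so itertools.groupby works)
    groups = [(stn, [s for _, _, s in g])
              for (stn, _), g in groupby(keyed, key=lambda t: (t[0], t[1]))]
    print(groups)
    ```

    The two adjacent OUTPR rows merge into one group, while the isolated OUTPR row at '0050' forms a separate group of its own.
    
    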

  • Shortcut for the new window with several tabs

    Is it possible to have a desktop shortcut that launches a new window with several tabs? For example - the home page is set to "www.google.com", but the shortcut opens a new window with "www.cnn.com" and "www.youtube.com" tabs.

    So far, .../firefox.exe -new-window followed by the two URLs in quotes opens two new windows. Entering -new-window "www.cnn.com" -new-tab "www.youtube.com" opens two new windows if no instance of Firefox is open, but if another window is already open, YouTube is added as a tab to the already-open window rather than to the new window with cnn.com.

    Replying to myself... after removing the -new-window (and -new-tab) switches, it now works.

    Looks like I was overcomplicating it.

    The final shortcut was "C:\Program Files (x86)\Mozilla ..." "url1" "url2".

  • Functions defined by the user with several parameters

    I defined the three following user functions using "Define":

    UF1 takes a single parameter;

    UF2 takes two parameters;

    and UFX takes two parameters - with the second being 'X' in the definition.

    F1 works.  F2 is the EVAL version of F1 and it works too.  User functions of one parameter seem to work fine.

    F3, a user function of two parameters, produces a graph of NaN.

    F4 is the EVAL version of F3.  Note that 'B' is not replaced by '1'.  It also produces a NaN graph.

    F5 produces a graph of NaN.

    F6 is the EVAL version of F5.  The 'X' is not replaced (same as with the 'B' above), and even though it looks like 'X * X', it also produces a NaN graph.

    Is it possible to get user-defined functions with several parameters to work when plotting a curve?

    Hi, Fortin:

    If you download and install the latest firmware, with software version 2015 6 17 (8151) and version number 1.1.2-11, you can plot your function-definition examples, with curves and values, without NaN.

  • Background with several pictures

    How can I create a wallpaper for my iPad, iPhone, MacBook Air with multiple photos?

    (El Capitan, iOS 9.2.1 Photos)

    To create a photo collage, you can create a photo book project and select a page template with several photos. Fill it with photos of your choice and print the page as a PDF.

    The book themes differ in how the photos are arranged and in the number of photos per page.

    This is the Travel plans theme:

    If you have the iWork apps installed, try Keynote to create a slide with many photos, arranged freely.
