Insert query problem

I have an insert query that I have not changed, but for some reason it won't insert anything into the database.

The data is entered via an Ajax form submission and goes to insert.php:

if (isset($_POST['message_wall'])) {
    /* The database connection */
    include('config.php');

    /* Escape the input to prevent SQL injection */
    $message = mysql_real_escape_string($_POST['message_wall']);
    $to = mysql_real_escape_string($_POST['profile_to']);

    $sql = "INSERT INTO wall (message) VALUES ('" . $message . "')";
    mysql_query($sql);
}

I want to be able to add a user_id to the database too.
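To store a user_id as well, the insert could be extended along these lines (a sketch only: it assumes the form also posts a user_id field and that the wall table has a user_id column, neither of which is shown above):

// hypothetical: user_id posted by the form, user_id column present in wall
$user_id = mysql_real_escape_string($_POST['user_id']);
$sql = "INSERT INTO wall (user_id, message) VALUES ('" . $user_id . "', '" . $message . "')";
mysql_query($sql);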

The ajax code:

$(document).ready(function() {
    $("form#submit_wall").submit(function() {

        var message_wall = $('#message_wall').attr('value');

        $.ajax({
            type: 'POST',
            url: "insert.php",
            data: "message_wall=" + message_wall,
            success: function() {
                $("ul#wall").prepend("<li style='display: none'>" + message_wall + "</li><br><hr>");
                $("ul#wall li:first").fadeIn();
            }
        });
        return false;
    });
});

Hello

As it is an ajax form post, the form data should be inserted into your database by the insert.php script. Everything in the jQuery ajax form is passed to the processing script, so if you process it in the insert script it should work OK.

You should then return a text response using a simple echo statement in your insert script; this should include everything that you want to appear on your page.

The PHP in your insert script would be similar to this.

At the beginning of the script:

$date = $_POST['msg_date'];

At the bottom of the script:

if ($success) {
    echo "Inserted on $date";
} else {
    echo "There was a problem processing your information.";
}

The jQuery code to achieve this would be:

// perform tasks after the form is posted
function showResponse(responseText, statusText) {
    $('.response').text(responseText);
}

plus

success: showResponse,

in the processing options of your ajax form.

And just include a <div class="response"></div> where you want the response to display in your html code.
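Putting the pieces together with the ajax call from the question, a sketch would look like this (msg_date is assumed to come from a form field of that name):

var msg_date = $('#msg_date').attr('value'); // hypothetical date field
$.ajax({
    type: 'POST',
    url: 'insert.php',
    data: 'message_wall=' + message_wall + '&msg_date=' + msg_date,
    success: showResponse
});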

PZ

www.pziecina.com

Tags: Dreamweaver

Similar Questions

  • Insert - Performance problem

    Hi Experts,

    I am new to Oracle. I am asking for your help to fix the performance problem of an insert query.

    I have an insert query that goes and fetches records from a partitioned table.

    Background: the user indicates that the query used to run in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours with no result. I checked the settings: SGA is 9 GB, Windows - 4 GB. DB block size is 8192, db_file_multiblock_read_count is 128. PGA aggregate target is 2457M.

    The parameters are given below


    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    DBFIPS_140                           boolean     FALSE
    O7_DICTIONARY_ACCESSIBILITY          boolean     FALSE
    active_instance_count                integer
    aq_tm_processes                      integer     1
    archive_lag_target                   integer     0
    asm_diskgroups                       string
    asm_diskstring                       string
    asm_power_limit                      integer     1
    asm_preferred_read_failure_groups    string
    audit_file_dest                      string      C:\APP\ADM
    audit_sys_operations                 boolean     TRUE
    audit_trail                          string      DB
    awr_snapshot_time_offset             integer     0
    background_core_dump                 string      partial
    background_dump_dest                 string      C:\APP\PRO
                                                     \RDBMS\TRA
    backup_tape_io_slaves                boolean     FALSE
    bitmap_merge_area_size               integer     1048576
    blank_trimming                       boolean     FALSE
    buffer_pool_keep                     string
    buffer_pool_recycle                  string
    cell_offload_compaction              string      ADAPTIVE
    cell_offload_decryption              boolean     TRUE
    cell_offload_parameters              string
    cell_offload_plan_display            string      AUTO
    cell_offload_processing              boolean     TRUE
    cell_offloadgroup_name               string
    circuits                             integer
    client_result_cache_lag              big integer 3000
    client_result_cache_size             big integer 0
    clonedb                              boolean     FALSE
    cluster_database                     boolean     FALSE
    cluster_database_instances           integer     1
    cluster_interconnects                string
    commit_logging                       string
    commit_point_strength                integer     1
    commit_wait                          string
    commit_write                         string
    common_user_prefix                   string      C##
    compatible                           string      12.1.0.2.0
    connection_brokers                   string      ((TYPE=DED
                                                     ((TYPE=EM
    control_file_record_keep_time        integer     7
    control_files                        string      G:\ORACLE\
                                                     TROL01.CTL
                                                     FAST_RECOV
                                                     NTROL02.CT
    control_management_pack_access       string      diagnostic
    core_dump_dest                       string      C:\app\dia
                                                     bal12\cdum
    cpu_count                            integer     4
    create_bitmap_area_size              integer     8388608
    create_stored_outlines               string
    cursor_bind_capture_destination      string      memory+disk
    cursor_sharing                       string      EXACT
    cursor_space_for_time                boolean     FALSE
    db_16k_cache_size                    big integer 0
    db_2k_cache_size                     big integer 0
    db_32k_cache_size                    big integer 0
    db_4k_cache_size                     big integer 0
    db_8k_cache_size                     big integer 0
    db_big_table_cache_percent_target    string      0
    db_block_buffers                     integer     0
    db_block_checking                    string      FALSE
    db_block_checksum                    string      TYPICAL
    db_block_size                        integer     8192
    db_cache_advice                      string      ON
    db_cache_size                        big integer 0
    db_create_file_dest                  string
    db_create_online_log_dest_1          string
    db_create_online_log_dest_2          string
    db_create_online_log_dest_3          string
    db_create_online_log_dest_4          string
    db_create_online_log_dest_5          string
    db_domain                            string
    db_file_multiblock_read_count        integer     128
    db_file_name_convert                 string
    db_files                             integer     200
    db_flash_cache_file                  string
    db_flash_cache_size                  big integer 0
    db_flashback_retention_target        integer     1440
    db_index_compression_inheritance     string      NONE
    db_keep_cache_size                   big integer 0
    db_lost_write_protect                string      NONE
    db_name                              string      ORCL
    db_performance_profile               string
    db_recovery_file_dest                string      G:\Oracle\
                                                     y_Area
    db_recovery_file_dest_size           big integer 12840M
    db_recycle_cache_size                big integer 0
    db_securefile                        string      PREFERRED
    db_ultra_safe                        string
    db_unique_name                       string      ORCL
    db_unrecoverable_scn_tracking        boolean     TRUE
    db_writer_processes                  integer     1
    dbwr_io_slaves                       integer     0
    ddl_lock_timeout                     integer     0
    deferred_segment_creation            boolean     TRUE
    dg_broker_config_file1               string      C:\APP\PRO
                                                     \DATABASE\
    dg_broker_config_file2               string      C:\APP\PRO
                                                     \DATABASE\
    dg_broker_start                      boolean     FALSE
    diagnostic_dest                      string      directory
    disk_asynch_io                       boolean     TRUE
    dispatchers                          string      (PROTOCOL=
                                                     12XDB)
    distributed_lock_timeout             integer     60
    dml_locks                            integer     2076
    dnfs_batch_size                      integer     4096
    dst_upgrade_insert_conv              boolean     TRUE
    enable_ddl_logging                   boolean     FALSE
    enable_goldengate_replication        boolean     FALSE
    enable_pluggable_database            boolean     FALSE
    event                                string
    exclude_seed_cdb_view                boolean     TRUE
    fal_client                           string
    fal_server                           string
    fast_start_io_target                 integer     0
    fast_start_mttr_target               integer     0
    fast_start_parallel_rollback         string      LOW
    file_mapping                         boolean     FALSE
    fileio_network_adapters              string
    filesystemio_options                 string
    fixed_date                           string
    gcs_server_processes                 integer     0
    global_context_pool_size             string
    global_names                         boolean     FALSE
    global_txn_processes                 integer     1
    hash_area_size                       integer     131072
    heat_map                             string
    hi_shared_memory_address             integer     0
    hs_autoregister                      boolean     TRUE
    ifile                                file
    inmemory_clause_default              string
    inmemory_force                       string      DEFAULT
    inmemory_max_populate_servers        integer     0
    inmemory_query                       string      ENABLE
    inmemory_size                        big integer 0
    inmemory_trickle_repopulate_servers_ integer     1
    percent
    instance_groups                      string
    instance_name                        string      ORCL
    instance_number                      integer     0
    instance_type                        string      RDBMS
    instant_restore                      boolean     FALSE
    java_jit_enabled                     boolean     TRUE
    java_max_sessionspace_size           integer     0
    java_pool_size                       big integer 0
    java_restrict                        string      none
    java_soft_sessionspace_limit         integer     0
    job_queue_processes                  integer     1000
    large_pool_size                      big integer 0
    ldap_directory_access                string      NONE
    ldap_directory_sysauth               string      no
    license_max_sessions                 integer     0
    license_max_users                    integer     0
    license_sessions_warning             integer     0
    listener_networks                    string
    local_listener                       string      (ADDRESS=
                                                     =i184borac
                                                     (NET)(PORT=
    lock_name_space                      string
    lock_sga                             boolean     FALSE
    log_archive_config                   string
    log_archive_dest                     string
    log_archive_dest_1                   string
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    log_archive_dest_2                   string
    log_archive_dest_20                  string
    log_archive_dest_21                  string
    log_archive_dest_22                  string
    log_archive_dest_23                  string
    log_archive_dest_24                  string
    log_archive_dest_25                  string
    log_archive_dest_26                  string
    log_archive_dest_27                  string
    log_archive_dest_28                  string
    log_archive_dest_29                  string
    log_archive_dest_3                   string
    log_archive_dest_30                  string
    log_archive_dest_31                  string
    log_archive_dest_4                   string
    log_archive_dest_5                   string
    log_archive_dest_6                   string
    log_archive_dest_7                   string
    log_archive_dest_8                   string
    log_archive_dest_9                   string
    log_archive_dest_state_1             string      enable
    log_archive_dest_state_10            string      enable
    log_archive_dest_state_11            string      enable
    log_archive_dest_state_12            string      enable
    log_archive_dest_state_13            string      enable
    log_archive_dest_state_14            string      enable
    log_archive_dest_state_15            string      enable
    log_archive_dest_state_16            string      enable
    log_archive_dest_state_17            string      enable
    log_archive_dest_state_18            string      enable
    log_archive_dest_state_19            string      enable
    log_archive_dest_state_2             string      enable
    log_archive_dest_state_20            string      enable
    log_archive_dest_state_21            string      enable
    log_archive_dest_state_22            string      enable
    log_archive_dest_state_23            string      enable
    log_archive_dest_state_24            string      enable
    log_archive_dest_state_25            string      enable
    log_archive_dest_state_26            string      enable
    log_archive_dest_state_27            string      enable
    log_archive_dest_state_28            string      enable
    log_archive_dest_state_29            string      enable
    log_archive_dest_state_3             string      enable
    log_archive_dest_state_30            string      enable
    log_archive_dest_state_31            string      enable
    log_archive_dest_state_4             string      enable
    log_archive_dest_state_5             string      enable
    log_archive_dest_state_6             string      enable
    log_archive_dest_state_7             string      enable
    log_archive_dest_state_8             string      enable
    log_archive_dest_state_9             string      enable
    log_archive_duplex_dest              string
    log_archive_format                   string      ARC%S_%R.%
    log_archive_max_processes            integer     4
    log_archive_min_succeed_dest         integer     1
    log_archive_start                    boolean     TRUE
    log_archive_trace                    integer     0
    log_buffer                           big integer 28784K
    log_checkpoint_interval              integer     0
    log_checkpoint_timeout               integer     1800
    log_checkpoints_to_alert             boolean     FALSE
    log_file_name_convert                string
    max_dispatchers                      integer
    max_dump_file_size                   string      unlimited
    max_enabled_roles                    integer     150
    max_shared_servers                   integer
    max_string_size                      string      STANDARD
    memory_max_target                    big integer 0
    memory_target                        big integer 0
    nls_calendar                         string      GREGORIAN
    nls_comp                             string      BINARY
    nls_currency                         string      u
    nls_date_format                      string      DD-MON-RR
    nls_date_language                    string      ENGLISH
    nls_dual_currency                    string      C
    nls_iso_currency                     string      UNITED KIN
    nls_language                         string      ENGLISH
    nls_length_semantics                 string      BYTE
    nls_nchar_conv_excp                  string      FALSE
    nls_numeric_characters               string      .,
    nls_sort                             string      BINARY
    nls_territory                        string      UNITED KIN
    nls_time_format                      string      HH24.MI.SS
    nls_time_tz_format                   string      HH24.MI.SS
    nls_timestamp_format                 string      DD-MON-RR
    nls_timestamp_tz_format              string      DD-MON-RR
    noncdb_compatible                    boolean     FALSE
    object_cache_max_size_percent        integer     10
    object_cache_optimal_size            integer     102400
    olap_page_pool_size                  big integer 0
    open_cursors                         integer     300
    open_links                           integer     4
    open_links_per_instance              integer     4
    optimizer_adaptive_features          boolean     TRUE
    optimizer_adaptive_reporting_only    boolean     FALSE
    optimizer_capture_sql_plan_baselines boolean     FALSE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      12.1.0.2
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_inmemory_aware             boolean     TRUE
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    optimizer_use_invisible_indexes      boolean     FALSE
    optimizer_use_pending_statistics     boolean     FALSE
    optimizer_use_sql_plan_baselines     boolean     TRUE
    os_authent_prefix                    string      OPS$
    os_roles                             boolean     FALSE
    parallel_adaptive_multi_user         boolean     TRUE
    parallel_automatic_tuning            boolean     FALSE
    parallel_degree_level                integer     100
    parallel_degree_limit                string      CPU
    parallel_degree_policy               string      MANUAL
    parallel_execution_message_size      integer     16384
    parallel_force_local                 boolean     FALSE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean     FALSE
    parallel_max_servers                 integer     160
    parallel_min_percent                 integer     0
    parallel_min_servers                 integer     16
    parallel_min_time_threshold          string      AUTO
    parallel_server                      boolean     FALSE
    parallel_server_instances            integer     1
    parallel_servers_target              integer     64
    parallel_threads_per_cpu             integer     2
    pdb_file_name_convert                string
    pdb_lockdown                         string
    pdb_os_credential                    string
    permit_92_wrap_format                boolean     TRUE
    pga_aggregate_limit                  big integer 4914M
    pga_aggregate_target                 big integer 2457M
    plscope_settings                     string      IDENTIFIER
    plsql_ccflags                        string
    plsql_code_type                      string      INTERPRETED
    plsql_debug                          boolean     FALSE
    plsql_optimize_level                 integer     2
    plsql_v2_compatibility               boolean     FALSE
    plsql_warnings                       string      DISABLE:AL
    pre_page_sga                         boolean     TRUE
    processes                            integer     300
    processor_group_name                 string
    query_rewrite_enabled                string      TRUE
    query_rewrite_integrity              string      enforced
    rdbms_server_dn                      string
    read_only_open_delayed               boolean     FALSE
    recovery_parallelism                 integer     0
    recyclebin                           string      on
    redo_transport_user                  string
    remote_dependencies_mode             string      TIMESTAMP
    remote_listener                      string
    remote_login_passwordfile            string      EXCLUSIVE
    remote_os_authent                    boolean     FALSE
    remote_os_roles                      boolean     FALSE
    replication_dependency_tracking      boolean     TRUE
    resource_limit                       boolean     TRUE
    resource_manager_cpu_allocation      integer     4
    resource_manager_plan                string
    result_cache_max_result              integer     5
    result_cache_max_size                big integer 46208K
    result_cache_mode                    string      MANUAL
    result_cache_remote_expiration       integer     0
    resumable_timeout                    integer     0
    rollback_segments                    string
    sec_case_sensitive_logon             boolean     TRUE
    sec_max_failed_login_attempts        integer     3
    sec_protocol_error_further_action    string      (DROP,3)
    sec_protocol_error_trace_action      string      TRACE
    sec_return_server_release_banner     boolean     FALSE
    serial_reuse                         string      disable
    service_names                        string      ORCL
    session_cached_cursors               integer     50
    session_max_open_files               integer     10
    sessions                             integer     472
    sga_max_size                         big integer 9024M
    sga_target                           big integer 9024M
    shadow_core_dump                     string      none
    shared_memory_address                integer     0
    shared_pool_reserved_size            big integer 70464307
    shared_pool_size                     big integer 0
    shared_server_sessions               integer
    shared_servers                       integer     1
    skip_unusable_indexes                boolean     TRUE
    smtp_out_server                      string
    sort_area_retained_size              integer     0
    sort_area_size                       integer     65536
    spatial_vector_acceleration          boolean     FALSE
    spfile                               string      C:\APP\PRO
                                                     \DATABASE\
    sql92_security                       boolean     FALSE
    sql_trace                            boolean     FALSE
    sqltune_category                     string      DEFAULT
    standby_archive_dest                 string      %ORACLE_HO
    standby_file_management              string      MANUAL
    star_transformation_enabled          string      TRUE
    statistics_level                     string      TYPICAL
    streams_pool_size                    big integer 0
    tape_asynch_io                       boolean     TRUE
    temp_undo_enabled                    boolean     FALSE
    thread                               integer     0
    threaded_execution                   boolean     FALSE
    timed_os_statistics                  integer     0
    timed_statistics                     boolean     TRUE
    trace_enabled                        boolean     TRUE
    tracefile_identifier                 string
    transactions                         integer     519
    transactions_per_rollback_segment    integer     5
    undo_management                      string      AUTO
    undo_retention                       integer     900
    undo_tablespace                      string      UNDOTBS1
    unified_audit_sga_queue_size         integer     1048576
    use_dedicated_broker                 boolean     FALSE
    use_indirect_data_buffers            boolean     FALSE
    use_large_pages                      string      TRUE
    user_dump_dest                       string      C:\APP\PRO
                                                     \RDBMS\TRA
    utl_file_dir                         string
    workarea_size_policy                 string      AUTO
    xml_db_events                        string      enable

    Thanks in advance

    Firstly, thank you for posting the 10g execution plan, which was one of the key things we were missing.

    Second, you can see that you have completely different execution plans, so you can expect different behavior on each system.

    Your 10g plan has a total cost of 23,959 while your 12c plan has a cost of 95,373, which is almost 4 times more. All things being equal, cost is supposed to relate directly to elapsed time, so I would expect the 12c plan to take much longer to run.

    From what I can see, the 10g plan begins with a full table scan on DEALERS, then a full scan on the SCARF_VEHICLE_EXCLUSIONS table, then a full scan on the CBX_tlemsani_2000tje table, and then a full scan on the CLAIM_FACTS table. The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K. Also, the first three scans produce very few rows in 10g, fewer than 1,000 rows each, while the last table scan produces 454K rows.

    It also looks as if something has gone wrong in the 10g optimizer plan - maybe a bug, which I believe Jonathan Lewis has commented on. Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of only has a cost of 23,949, or 24K. So the arithmetic does not add up in the 10g plan. In other words, maybe it is not really the optimal plan, because the 10g optimizer may have got its sums wrong while 12c might be getting them right. But luckily this 'imperfect' 10g plan happens to run fairly fast for one reason or another.

    The 12c plan starts with similar table scans but in a different order. The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366. That is the main component of the final total cost of 95,373.

    Suggestions for what to do? It is difficult, because there is clearly an anomaly in the 10g system to have produced the particular execution plan it uses. And there is other information that you have not provided - see later.

    You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, e.g. "select /*+ full (CF) */ cf.vehicle_chass_no ...". However, hints are very difficult to use properly and do not guarantee that you will get the desired end result, so be careful. For testing on 12c it may be worth trying, just to see what happens and what the resulting execution plan looks like. But I would not use such a simple, single hint in a production system, for a variety of reasons. For testing only, it might help to see whether you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.
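    For testing, the hinted query might look something like this (a sketch only: CF is assumed to be the alias of CLAIM_FACTS in your query, and the rest of the statement is abbreviated):

    -- hypothetical sketch: force a full scan of CLAIM_FACTS via its alias CF
    SELECT /*+ FULL(CF) */ cf.vehicle_chass_no
    FROM claim_facts cf;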

    Both plans are parallel ones, which means that the query is broken down into separate, independent steps, and several steps are executed at the same time, i.e. several CPUs will be used and there will be several disk reads going on at the same time. (That is a simplification of how parallel query works.) If the 10g and 12c systems do not have the SAME hardware configuration, then you would naturally expect different elapsed times to run the same parallel queries. See the end of this answer for the additional information you could provide.

    But I would be very suspicious of the hardware configuration of the two systems. Maybe the 10g system has 16 CPU cores or more and 100's of disks in a big drive array, and maybe the 12c system has only 4 CPU cores and 4 disks. That would explain a lot about why 12c takes hours to run when 10g takes only 30 minutes.

    Remember what I said in my last reply:

    "Without any information to the contrary, I guess the filter conditions are very weak, the optimizer believes it needs most of the data in the table, and a full table scan or even a limited index scan is the 'best' way to run this SQL. In other words, your query simply takes that long because your tables are big and your query needs most of the data in those tables."

    When dealing with very large tables and doing a parallel full table scan on them, the most important factor is the amount of raw hardware you throw at it. A system with twice the number of CPUs and twice the number of disks will run the same parallel query in half the time, at least roughly. It could be that this, rather than the execution plan itself, is the main reason the 12c system is much slower than the 10g system.

    You could also provide us with the following information, which would allow a better analysis:

    • Row counts of each table referenced in the query, and whether any of them are partitioned.
    • Hardware configurations of both systems - the 10g and the 12c: number of CPUs, their model and speed, physical memory, number of disks.
    • The disks are very important - do 10g and 12c have similar disk subsystems? Are you using plain old disks, or do you have a SAN, or some sort of disk array? Are the drive arrays identical in both systems? How are they connected? Fibre Channel, or something else? Maybe even network storage?
    • What is the size of the SGA in both systems? The values of MEMORY_TARGET and SGA_TARGET.
    • Does the CLAIM_FACTS_AK9 index exist on the 10g system? I assume it does, but I would like that confirmed to be sure.

    John Brady

  • How to add two column values in an insert query, inserting the sum in a 3rd column, and how to insert the values into another table

    I have two tables: (1) TAB and (2) RESULT

    CREATE TABLE TAB
    (
    SNO NUMBER,
    A NUMBER,
    B NUMBER,
    SUM NUMBER
    );

    CREATE TABLE RESULT
    (
    SNO NUMBER,
    SUM NUMBER
    );

    my doubt is:

    (1) I want to insert into table TAB; my question is how to fill the SUM column using columns A and B... here I am adding the two column values and storing the result in the SUM column.

    SNO  A    B    SUM
    1    100  150  250
    2    300  100  400

    I want it like this. Is it possible with a single insert query?


    (2) At the time of inserting values into TAB, the SNO and SUM values of table TAB should also be inserted into the RESULT table... is it possible to do these two inserts at the same time?

    In fact I am using other tables; TAB is just written like that to make it easy to understand. Please help solve this problem.

    First, you posted in the wrong forum, as this one is only for Oracle's SQL Developer tool. You should ask your question in the general SQL forum.

    Second, you can solve your problem with bind variables:

    insert into tab
    (sno, a, b, sum)
    values
    (:SNO, :A, :B, :A + :B);

    You should not use SUM as a column name because it is a reserved word.

    Also, you cannot insert into two different tables with a single SQL statement, but you can use PL/SQL to do this:

    begin
    insert into tab values (:SNO, :A, :B, :A + :B);
    insert into result values (:SNO, :A + :B);
    end;

    If you fill sno from a sequence, you could do something like this:

    begin
    insert into tab values (seq_sno.nextval, :A, :B, :A + :B) returning sno into :SNO;
    insert into result values (:SNO, :A + :B);
    end;

    Hope that helps,

    dhalek

  • Insert query syntax error

    I have a flash form that is used to insert a record into an Access database table. In the form there are 4 date fields, several text fields, and then several other fields. When I submit the form, I get a syntax error that reads:

    Running database query. [Macromedia] [SequeLink JDBC Driver] [ODBC Socket] [Microsoft] [ODBC Microsoft Access driver] Syntax error in INSERT INTO statement.
    The error occurred on line 184. Complex object types cannot be converted to simple values.

    Line 184 is the last line of the values in the insert query. The query looks like this:

    <CFQUERY DATASOURCE="#REQUEST.DataSource#">
    INSERT INTO EstimateNumber (
    BidNumber,
    Project,
    Construction,
    EstimatedBy,
    Region,
    Company,
    Division,
    InquiryNumber,
    SafetyChecklist,
    SafetyChecklistDate,
    QCChecklist,
    QCChecklistDate,
    EstimatedValue,
    UserUsername,
    UserPassword,
    Updated,
    ReviewDate,
    ReviewedBy,
    Discipline,
    BidDate,
    JobNumber,
    UpdatedBy
    )
    VALUES (
    #FORM.BidNumber#,
    '#FORM.Project#',
    '#FORM.Site#',
    '#FORM.EstimatedBy#',
    #FORM.Region#,
    #FORM.Company#,
    #FORM.Division#,
    '#FORM.InquiryNumber#',
    #FORM.SafetyChecklist#,
    #FORM.SafetyChecklistDate#,
    #FORM.QCChecklist#,
    '#FORM.QCChecklistDate#',
    #FORM.EstimatedValue#,
    '#FORM.UserUsername#',
    '#FORM.UserPassword#',
    #FORM.Updated#,
    #FORM.ReviewDate#,
    #FORM.ReviewedBy#,
    #FORM.Discipline#,
    #FORM.BidDate#,
    #FORM.JobNumber#,
    '#FORM.UpdatedBy#'
    )
    </CFQUERY>

    Any recommendations?

    chrispilie wrote:
    > Any recommendations?

    (1) Enable debugging so you can see the actual generated query, and post the SQL here.
    (2) Dump the form scope. Are all of the form values simple strings?

    (3) For the values inserted into a date/time column, use cfqueryparam or CreateODBCDate/CreateODBCDateTime to convert the string to a proper date/time object. This ensures the value is inserted properly.

    Though I personally recommend using cfqueryparam for all submitted parameters (not only date values).
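    For example, one of the date values in the VALUES list might be passed like this (a sketch; choose the cfsqltype that matches the actual column type):

    <cfqueryparam value="#FORM.SafetyChecklistDate#" cfsqltype="cf_sql_timestamp">,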

  • foreign key ALTER TABLE QUERY PROBLEM

    HI ALL,

    ANY SUGGESTIONS PLEASE?

    SUBJECT: ALTER TABLE FOREIGN KEY QUERY PROBLEM

    I WANT TO CREATE FOREIGN KEYS WITH ALTER TABLE:

    1. TABLE: HAEMATOLOGY1
    COLUMN: HMTLY_PATIENT_NUM
    WITH
    TABLE: PATIENTS_MASTER1
    COLUMN: PATIENT_NUM (THIS IS THE PRIMARY KEY AND UNIQUE)

    2. TABLE: HAEMATOLOGY1
    COLUMN: HMTLY_TEST_NAME
    WITH
    TABLE: TESTS_MASTER1
    COLUMN: TEST_NAME (THIS IS THE UNIQUE KEY)
    ---------------


    SQL*PLUS QUERY:
    -----------
    ALTER TABLE HAEMATOLOGY1
    CONSTRAINT SYS_C002742_1 FOREIGN KEY (HMTLY_PATIENT_NUM)
    REFERENCES PATIENTS_MASTER1 (PATIENT_NUM);

    ERROR at line 2:
    ORA-01735: invalid ALTER TABLE option

    NOTE: THE CONSTRAINT NAME SYS_C002742_1 IS TAKEN FROM THE TABLE DETAILS IN ORACLE ENTERPRISE MGR.
    ---------
    ALTER TABLE HAEMATOLOGY1
    CONSTRAINT SYS_C002735_1 FOREIGN KEY (HMTLY_TEST_NAME)
    REFERENCES TESTS_MASTER1 (TEST_NAME);

    ERROR at line 2:
    ORA-01735: invalid ALTER TABLE option

    NOTE: THE CONSTRAINT NAME SYS_C002735_1 IS TAKEN FROM THE TABLE DETAILS IN ORACLE ENTERPRISE MGR.

    ==============

    4 CLINICAL LABORATORY TABLES FOR DATA ENTRY, AND GETTING A REPORT ONLY FOR THE TESTS CARRIED OUT FOR A PARTICULAR PATIENT.

    TABLE1:PATIENTS_MASTER1
    COLUMNS: PATIENT_NUM, PATIENT_NAME,

    VALUES:

    PATIENT_NUM  PATIENT_NAME
    1            BENAMER
    2            GIROT
    3            KKKK
    4            PPPP
    ---------------
    TABLE2:TESTS_MASTER1
    COLUMNS: TEST_NUM, TEST_NAME

    VALUES:

    TEST_NUM  TEST_NAME
    1         HEMATOLOGY
    2         DIFFERENTIAL LEUKOCYTE COUNT
    -------------

    TABLE3:HAEMATOLOGY1
    COLUMNS:
    HMTLY_NUM, HMTLY_PATIENT_NUM, HMTLY_TEST_NAME, HMTLY_RBC_VALUE, HMTLY_RBC_NORMAL_VALUE

    VALUES:

    HMTLY_NUM  HMTLY_PATIENT_NUM  HMTLY_TEST_NAME  HMTLY_RBC_VALUE  HMTLY_RBC_NORMAL_VALUE
    1          1                  HEMATOLOGY       5                4.6 - 6.0
    2          3                  HEMATOLOGY       4                4.6 - 6.0
    ------------

    TABLE4:DIFFERENTIAL_LEUCOCYTE_COUNT1
    COLUMNS: DLC_NUM, DLC_PATIENT_NUM, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE, DLC_POLYMORPHS_NORMAL_VALUE

    VALUES:

    DLC_NUM  DLC_PATIENT_NUM  DLC_TEST_NAME                 DLC_POLYMORPHS_VALUE  DLC_POLYMORPHS_NORMAL_VALUE
    1        2                DIFFERENTIAL LEUKOCYTE COUNT  42                    40-65
    2        3                DIFFERENTIAL LEUKOCYTE COUNT  60                    40-65
    -----------------


    Thank you
    RCS
    E-mail:[email protected]
    --------

    ALTER TABLE HAEMATOLOGY1
    ADD CONSTRAINT SYS_C002742_1 FOREIGN KEY (HMTLY_PATIENT_NUM)
    REFERENCES PATIENTS_MASTER1 (PATIENT_NUM);

  • Problem with PL/SQL insert query

    Hello to all the geniuses... I'm Vikram, I'm new to the world of APEX and PL/SQL... I need all the help you guys can give... This is my first user application (for example):

    table name - form
    column names - f_no number, name varchar2, salary number
    APEX page items - p1_f_no, p1_name, p1_sal

    Now my problem is that the query below works in SQL Workshop (inserting data into the form table, which can be seen using a select query)... but when I implement this in APEX, it shows - in all fields:

    declare
    v_no number(3);
    v_name varchar2(20);
    v_sal number(10);
    begin
    insert into form values (:v_no, :v_name, :v_sal);
    end;

    I'm using this query in the process for the button.
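    For comparison, a page process that references the page items directly as bind variables would look like this (a minimal sketch, assuming the item and column names above):

    begin
        insert into form (f_no, name, salary)
        values (:P1_F_NO, :P1_NAME, :P1_SAL);
    end;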

    Thank you

    Best regards,
    Vikram

    Mahir M. Quluzade has already responded.

    Published by: Gokhan Atil on 03.May.2011 12:45

  • Insert problem using a SELECT from a table with an index on a TRUNC function

    I came across this problem when trying to insert from a select query. The select returns the correct results, but when I try to insert those results into a table, the results are different. I found a workaround by forcing a sort order, but surely this is a bug in Oracle, as how can the values a select statement returns differ from what the insert inserts?

    Platform: Windows Server 2008 R2
    Oracle Enterprise Edition 11.2.0.3
    (I've not tried to reproduce this on other versions)

    Here are the scripts to create the two tables and the data source:
    CREATE TABLE source_data
    (
      ID                 NUMBER(2),
      COUNT_DATE       DATE
    );
    
    CREATE INDEX IN_SOURCE_DATA ON SOURCE_DATA (TRUNC(count_date, 'MM'));
    
    INSERT INTO source_data VALUES (1, TO_DATE('20120101', 'YYYYMMDD'));
    INSERT INTO source_data VALUES (1, TO_DATE('20120102', 'YYYYMMDD'));
    INSERT INTO source_data VALUES (1, TO_DATE('20120103', 'YYYYMMDD'));
    INSERT INTO source_data VALUES (1, TO_DATE('20120201', 'YYYYMMDD'));
    INSERT INTO source_data VALUES (1, TO_DATE('20120202', 'YYYYMMDD'));
    INSERT INTO source_data VALUES (1, TO_DATE('20120203', 'YYYYMMDD'));
    INSERT INTO source_data VALUES (1, TO_DATE('20120301', 'YYYYMMDD'));
    INSERT INTO source_data VALUES (1, TO_DATE('20120302', 'YYYYMMDD'));
    INSERT INTO source_data VALUES (1, TO_DATE('20120303', 'YYYYMMDD'));
    
    CREATE TABLE result_data
    (
      ID                 NUMBER(2),
      COUNT_DATE       DATE
    );
    Now, execute the select statement:
    SELECT id, TRUNC(count_date, 'MM')
    FROM source_data
    GROUP BY id, TRUNC(count_date, 'MM')
    You should get the following:
    1     2012/02/01
    1     2012/03/01
    1     2012/01/01
    Now insert into the results table:
    INSERT INTO result_data
    SELECT id, TRUNC(count_date, 'MM')
    FROM source_data
    GROUP BY id, TRUNC(count_date, 'MM');
    Select from the table, and you get:
    1     2012/03/01
    1     2012/03/01
    1     2012/03/01
    The most recent month is repeated for each row.

    Truncate your table, run the following insert statement instead, and the results should now be correct:
    INSERT INTO result_data
    SELECT id, TRUNC(count_date, 'MM')
    FROM source_data
    GROUP BY id, TRUNC(count_date, 'MM')
    ORDER BY 1, 2;
    If someone has encountered this problem before, could you please let me know? I don't see what mistake I am making, because the select results are correct; they should not be different from what is being inserted.

    Published by: user11285442 on May 13, 2013 05:16

    Published by: user11285442 on May 13, 2013 06:15

    Most likely a bug in 11.2.0.3. I can reproduce it on Red Hat Linux and AIX.

    You can perform a search on MOS to see if this is a known bug (very likely); if not, then you have a pretty simple test case to open an SR with.

    John

  • insert scheduling problem

    Hi, I have a scheduled task that executes the code below, and I am having problems with it.

    what I have to do is

    1. Get all the records of appoint_table where the servertime is less than now (I'm sure that my query is OK).

    2. Set a TotalSMSLeft session variable that gets the user's remaining credit.

    3. If the user's credit is GREATER than 0, do this:
    3a. send an email
    3b. update appoint_table to delete the server time
    3c. select all records where servertime is empty
    3d. insert into SMS_Records where servertime is empty
    3e. remove all records where servertime is empty.

    4. If not, remove all records where servertime is empty.

    Maybe I am using more code than I need for what I'm doing?

    Can someone please help me with the best code to use?

    Thank you very much

    I think it is getting sent twice because your cfmail tag is inside a cfloop over your GetSchH1 query, but you also have your cfmail tag itself looping over the query (query="GetSchH1"). Delete the query="GetSchH1" from your cfmail tag and it should only send 1 copy of each.

    Also, I would perform your deletes outside your loops - that way you only need 1 database call to delete everything, and you won't delete other records before you email them.
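    In other words, keep the cfloop and let cfmail send one message per iteration, something like this (a rough sketch; the email column and addresses are hypothetical):

    <cfloop query="GetSchH1">
        <cfmail to="#GetSchH1.email#" from="[email protected]" subject="Appointment reminder">
            Your appointment details here...
        </cfmail>
    </cfloop>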

  • Eurotherm EPower controller: query problem

    Hello

    I have a problem communicating with a Eurotherm EPower controller (with 4 units) via its OPC server.

    I use:

    -LV 8.5.1

    -iTools OPC Server 7.50, with a product number formed from the company name (active license)

    -EPower Firmware v3.01

    Communication between LV and the OPC server is done with "DataSocket Write" and "DataSocket Read".

    I can write values without problem => I only write Main.SP on user events, and everything is fine.

    But for reading, I would like to read Meas.I, Meas.P and Meas.V continuously for each unit (up to 12 URLs requested).

    The first read is OK, I get values, so the URLs are OK. But after that, once put in a loop, there is no error (status = 0 on LV and the OPC Server) but the values are not updated on the following queries.

    I tried to play with the "wait for updated value" option, but without success.

    The only way I have found is to open the server interface, right-click the device and select "Synchronize Active Device". But this option is only one-shot and very slow because, I guess, it fetches everything in the device...

    Another point: it works perfectly with Engineering Studio, commonly called iTools. It is a client of the OPC server, like my application.

    Yet another point: when I run my application, ID_EPower.exe is in the list of Windows processes and consumes resources (0 or 1%), so it is alive.

    To complete the picture: one hour with the Eurotherm hotline, thanks to them, but no way forward.

    My question is: is there a particular way to obtain continuous measurements, such as sending a query each time, or a general property to get updated data?

    I have used drive controllers from the same manufacturer with no problem and no need to send a request to get the values, but this is perhaps a different case.

    List of requested URLs:

    OPC:/Eurotherm.ModbusServer.1/CLA_Gradateurs.192-168-1-222-502-ID001-ePower.network.1.MEAs.V

    OPC:/Eurotherm.ModbusServer.1/CLA_Gradateurs.192-168-1-222-502-ID001-ePower.network.1.MEAs.I

    OPC:/Eurotherm.ModbusServer.1/CLA_Gradateurs.192-168-1-222-502-ID001-ePower.network.1.MEAs.P

    OPC:/Eurotherm.ModbusServer.1/CLA_Gradateurs.192-168-1-222-502-ID001-ePower.network.2.MEAs.V

    OPC:/Eurotherm.ModbusServer.1/CLA_Gradateurs.192-168-1-222-502-ID001-ePower.network.2.MEAs.I

    OPC:/Eurotherm.ModbusServer.1/CLA_Gradateurs.192-168-1-222-502-ID001-ePower.network.2.MEAs.P

    ... Ditto for network 3 and 4

    I have attached the part of the code that is dedicated to reading the values, but it is not really helpful because I think my reading method is correct (data arrives but is never updated); it is either a 'missing' command to request that the server acquire new data, or a bug in the LV-iTools OPC server interface...

    For advanced LabVIEWers: I know my attached code is not optimized and not the smartest, but keep cool, I do not need crazy performance

    Any suggestion?

    So, I was not on site for the tests, but now it's OK.

    Several actions were taken in series to work through the problem, so it is hard to know which of them were beneficial:

    -An update of the Eurotherm software: 7.50 to 7.68

    -An update of the EPower firmware: 3.01 to 3.03

    - and probably the main cause of the problem: changing the cable for a crossover one (we thought it already was one, but in reality it was not...). But even if that is the main cause: how was it possible to receive data by forcing the update in the OPC server... strange.

    Thanks for your help.

  • How to remove a hint used in an insert query - help please

    I just know that when we create a materialized view, it in turn creates a table too, and this new object is in user_mviews and also in user_tables.
    So begins my question. We received a report of long-running SQLs, and below is the query:

    INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO MVIEW_TEST SELECT * FROM TABLE_TEST;
    

    There is no source/code in my entire DB which has this stmt. But the creation of the mview MVIEW_TEST has the below script:

    CREATE MATERIALIZED VIEW MVIEW_TEST 
    REFRESH FORCE
    ON DEMAND
    AS SELECT * FROM TABLE_TEST;
    

    Does this mean that when the mview MVIEW_TEST gets refreshed, it performs an insert into the underlying table for MVIEW_TEST too?
    The biggest problem for me here is that the insert stmt uses the BYPASS_RECURSIVE_CHECK hint, and I am not sure why it is used. I want to get it replaced by /*+ APPEND */. But how can I do that, since I don't issue this INSERT manually?
    Is there any way to stop creating the table when my mview is created? Or is there a way to stop it using this hint - BYPASS_RECURSIVE_CHECK - when inserting into the MVIEW_TEST table?

    I want to use the APPEND hint instead.

    2733376 wrote:

    I want to use the APPEND hint instead.

    If you want the MView refresh to use APPEND, then do the refresh using ATOMIC_REFRESH = FALSE.
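    For example (a sketch: a complete refresh of the single MV by name, with atomic_refresh turned off):

    begin
        dbms_mview.refresh(list => 'MVIEW_TEST', method => 'C', atomic_refresh => false);
    end;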

    Why do you want to use APPEND? Do you already know that this is your bottleneck?

    See also: https://asktom.oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:15695764787749

    on the reduction of redo:

    and the only way to reduce that would be to use a custom job to refresh the MV, and the job would: a) disable indexes, b) call refresh, c) rebuild the indexes with nologging
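    A rough sketch of such a job, assuming a single index MVIEW_TEST_IDX on the MV (the index name is hypothetical):

    begin
        execute immediate 'alter index mview_test_idx unusable';
        dbms_mview.refresh(list => 'MVIEW_TEST', method => 'C', atomic_refresh => false);
        execute immediate 'alter index mview_test_idx rebuild nologging';
    end;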
    


  • Concatenation of the table name in an insert query

    Greetings,

    Oracle Version - Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64 bit Production


    I want to write a procedure in which the table name in the insert is supplied at runtime through a cursor.


    But the problem is

    insert into tra_temp
    select * from tra_smi23

    In the statement above, the 23 must be dynamic, like tra_smi || 23


    Here is my code, but something is wrong.

    Kindly help.


    create or replace procedure as
    cursor c1 is
    select op_id, op_name from operators where op_name like '%TRA%'
    and op_id <> 9;
    begin
    execute immediate 'truncate table tra_temp';
    for rec in c1
    loop
    insert into tra_temp
    select * from tra_smi || rec.op_id;
    commit;
    exit when c1%notfound;
    end;

    Hello

    Then you must change the insert statement to: execute immediate 'insert into tra_temp select * from tra_smi' || rec.op_id;
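    The whole loop would then look something like this (a sketch, made self-contained with the cursor from your procedure):

    declare
        cursor c1 is
            select op_id from operators
            where op_name like '%TRA%' and op_id <> 9;
    begin
        execute immediate 'truncate table tra_temp';
        for rec in c1 loop
            execute immediate 'insert into tra_temp select * from tra_smi' || rec.op_id;
        end loop;
        commit;
    end;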

    Regards

    Mr. Mahir Quluzade

  • Connect query problem

    Hello world

    I am stuck with a query. Let me explain the situation: I have a table where I store the IDs of logically equal records.

    For example;
    A = B
    B = C
    X = Y
    Z = Y

    My query must return all equivalent records. If you call the query with the parameter 'A', the result set must contain B and C. And if you call the query with the parameter 'Y', the result set must contain X and Z. I thought I could write the desired query using a start with / connect by statement. But the query does not work as I expected. Here is my code and sample data:



    create table temptable (id1 number, id2 number);

    insert into temptable values (11,12);
    insert into temptable values (12,13);
    insert into temptable values (13,14);
    insert into temptable values (13,15);



    SELECT distinct ID1 FROM
    (
    SELECT * FROM temptable
    START WITH ID1 = 13 OR ID2 = 13
    CONNECT BY NOCYCLE
    (
    (PRIOR ID1 = ID1) OR
    (PRIOR ID1 = ID2) OR
    (PRIOR ID2 = ID1) OR
    (PRIOR ID2 = ID2))
    ) WHERE ID1 <> 13
    UNION
    SELECT distinct ID2 FROM
    (
    SELECT * FROM temptable
    START WITH ID1 = 13 OR ID2 = 13
    CONNECT BY NOCYCLE
    ((PRIOR ID1 = ID1) OR
    (PRIOR ID1 = ID2) OR
    (PRIOR ID2 = ID1) OR
    (PRIOR ID2 = ID2))
    ) WHERE ID2 <> 13


    In my example, the equality definitions are:
    11 = 12
    12 = 13
    13 = 14
    13 = 15

    When I call the query with parameter 13, I expect to get 11, 12, 14, 15. But it returns only 12, 14 and 15.

    Thanks for any help or suggestion.
    with t as (
                select  id1,
                        id2
                  from  temptable
               union
                select  id2,
                        id1
                  from  temptable
              )
    select  distinct id2
      from  t
      where id2 != 13
      start with id1 = 13
      connect by nocycle id1 = prior id2
    /
    
           ID2
    ----------
            11
            14
            12
            15
    
    SQL> 
    

    SY.

  • Insert query

    Hi all

    DB version: 10.2

    I have a table with 8 columns, namely num1, num2, num3, num4, num5, num6, num7, num8. I have to insert all these column values into a table with a single column "Num".
    create table nums_tb
    (
    key char(16),
    num1  char(16)
    ,num2  char(16)
    ,num3  char(16)
    ,num4  char(16)
    ,num5  char(16)
    ,num6   char(16)
    ,num7   char(16)
    ,num8   char(16)
    )   ;
    
    create table nums_tmp
    (
    num1  char(16)
    )   ;
    
    I have to insert all column values into nums_tmp. I have written query like this:
    
    insert into nums_tmp
    select * from (
    select distinct num1
    from  nums_tb
    
    union
    select distinct num2
    from  nums_tb
    union
    
    select distinct num3
    from  nums_tb
    union
    
    select distinct num4
    from  nums_tb
    union
    
    select distinct num5
    from  nums_tb
    union
    
    select distinct num6
    from  nums_tb
    union
    
    select distinct num7
    from  nums_tb
    union
    
    select distinct num8
    from  nums_tb);
    The nums_tb table contains 20 million records, so this insert takes a lot of time, around 5 hours. Is there a better way to write this query to achieve the same result?

    Kind regards
    SK

    Is it possible to rewrite the query to improve performance?

    Put the DISTINCT on the outer query, not on every inner query, using UNION ALL in an inline view.

    Use parallel DML. Consider using a manual sort area size.

    alter session enable parallel dml;
    
    insert /*+ parallel */
    into nums_tmp
    select distinct nums.num1
    from (
      select num1 num1
      from  nums_tb
      union all
      select num2 num1
      from nums_tb
      union all
      select num3 num1
      from nums_tb
      union all
      select num4 num1
      from nums_tb
      union all
      select num5 num1
      from nums_tb
      union all
      select num6 num1
      from nums_tb
      union all
      select num7 num1
      from nums_tb
      union all
      select num8 num1
      from  nums_tb) nums;
    
  • Printable report query problem

    I am trying to develop a printable report query. I created a report query under Shared Components which consists of 12 separate queries that collect data all related to a single page item. The intention is to create a PDF document, printed on demand, that will display all of this information for the page item the user has chosen.

    The query of the report gathers information and generates an XML file in the following format.
    <DOCUMENT>
        <ROWSET1>
            <ROWSET1_ROW>
                             *Data from query 1*
            </ROWSET1_ROW>
        </ROWSET1>  
        <ROWSET2>
            <ROWSET2_ROW>
                              *Data from query 2*
            </ROWSET2_ROW>
            <ROWSET2_ROW>
                              * Data from query 2*
            </ROWSET2_ROW>
       </ROWSET2>
       <ROWSET3>
           <ROWSET3_ROW>
                              * Data from query 3*
           </ROWSET3_ROW>
           <ROWSET3_ROW>
                              * Data from query 3*
           </ROWSET3_ROW>
       </ROWSET3>
    ......
    </DOCUMENT>
    Then I took this XML file and developed an RTF template using BI Publisher for Office, and imported it as a report layout.
    I then connected this RTF layout to the report query and ran it. I did not get any data to print.

    I found the reason why it did not work: the XML file generated by the report query is not static. The next XML file generated by the report query looked like this:
    <DOCUMENT>
        <ROWSET1>
            <ROWSET1_ROW>
                               * Data from query 3*
            </ROWSET1_ROW>
            <ROWSET1_ROW>
                               * Data from query 3*
            </ROWSET1_ROW>
        </ROWSET1>  
        <ROWSET2>
            <ROWSET2_ROW>
                               * Data from query 7*
            </ROWSET2_ROW>
            <ROWSET2_ROW>
                               * Data from query 7*
            </ROWSET2_ROW>
            <ROWSET2_ROW>
                               * Data from query 7*
            </ROWSET2_ROW>
       </ROWSET2>
       <ROWSET3>
           <ROWSET3_ROW>
                               * Data from query 1*
           </ROWSET3_ROW>
       </ROWSET3>
    ......
    </DOCUMENT>
    So I cannot develop an RTF template to display the data if I do not know where the data will appear in the generated XML.

    Questions (I'll offer several POINTS to anyone who can answer all of these questions!)

    I use APEX version 3.1

    * 1. Why does the report query seem to randomly renumber and reorganize the XML it produces? *

    * 2. Is there a way to make the report query output the XML data in the order in which the individual queries are listed, every time? *

    * 3. Is there a way to designate exactly which rowset numbers APEX assigns to which queries? *

    * 4. Are there other methods I can explore to produce this report? *

    * 5. Is this a problem because I am on an older version? Is it not a problem on 4.1? *

    Published by: bhenderson on February 1, 2012 08:22

    So, you have 12 separate queries? Is there no way to join them in a union to build the required data set? Or even using inline views in one query for each set of values?
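    For example, two of the queries could be combined with a discriminator column like this (a hypothetical sketch: the tables and columns are made up, and :P1_ITEM stands for your page item):

    SELECT 1 AS rowset_id, e.empno AS id, e.ename AS label
    FROM emp e
    WHERE e.deptno = :P1_ITEM
    UNION ALL
    SELECT 2, d.deptno, d.dname
    FROM dept d
    WHERE d.deptno = :P1_ITEM
    ORDER BY rowset_id;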

    Thank you

    Tony Miller
    Webster, TX

  • Format for a number field in af:query problem. Is this an ADF bug?

    Hi, OTN,

    Requirement: Format DepartmentId in the query panel

    I created view criteria on the Employees (HR schema) table. I have four items in the view criteria, called EmployeeId, DepartmentId, Firstname, LastName, all with the Display as selectively required property. The DepartmentId attribute has the UI hints Format Type = Number and Format = 0000, and Auto Submit = true.
    But in the user interface, I am not able to search from the af:query panel. It works fine without setting the UI hints Format Type and Format properties.

    Step 1: Type 123 in the DepartmentId field
    Step 2: Click on the search button
    Error: Please provide a value for at least one of the specified fields
    Error message: http://www.freeimagehosting.net/24d51

    Please see the links below for downloading the sample application:

    http://formatissue.googlecode.com/svn/trunk/FormatTest/FormatTest.zip
    http://formatissue.googlecode.com/svn/trunk/FormatTest (SVN version)

    Note:

    JDev Version: 11.1.1.5.0
    I am using ADF BC and ADF Faces components

    All recommendations are fully appreciated

    Thank you
    Jean-Marc Mithra

    Published by: Fanny Mithra November 23, 2011 16:51

    Published by: Fanny Mithra November 23, 2011 17:21

    Filed a bug; the bug # is copied at the end of this thread. Edited the previous comment: the problem occurs even on releases after 11.1.1.5.0.

    Published by: Jobinesh on December 2, 2011 10:38
