Wanted: a simple example of a data class with a signal?

I've been looking for a tutorial on this in the examples and found nothing.  Maybe I missed it?

I'm looking for an example of a simple data class that has a public way to set an observer, similar to this:

class MyDataClass : public QObject // I'm guessing I would subclass QObject
{
    Q_OBJECT
public:
    int getSomeValue();

signals:
    void setObserver(const QObject* receiver, ...?);

private:
    int m_someValue;
};

// ...and a .cpp file that sets up the connection

Is there one like this in the shipped code examples?  If not, could someone post an example class?

I've seen samples in QML, but I'm looking for C++.

Thank you!

#include <QObject>

class Settings : public QObject
{
    Q_OBJECT
public:
    explicit Settings(QObject *parent = 0);
    static Settings *sharedInstance();

    QString getTargetIPAddress();
    void setTargetIPAddress(const QString &ipAddress);
signals:
    void targetIPAddressChanged();
};

---

#include <QSettings>

#include "Settings.h"

const char *targetIPAddressKey = "targetIPAddress";

Settings::Settings(QObject *parent) :
    QObject(parent)
{
}

Settings *Settings::sharedInstance()
{
    static Settings settings;
    return &settings;
}

QString Settings::getTargetIPAddress()
{
    QSettings settings;
    QVariant v = settings.value(targetIPAddressKey);
    if (v.isNull())
        return "192.168.0.1";
    return v.toString();
}

void Settings::setTargetIPAddress(const QString &ipAddress)
{
    QSettings settings;
    settings.setValue(targetIPAddressKey, QVariant(ipAddress));
    settings.sync();
    emit targetIPAddressChanged();
}

---

Some other class:

.h:
protected slots:
    void targetAddressChanged();

.cpp:
[...constructor...]
    Settings *settings = Settings::sharedInstance();
    QObject::connect(settings, SIGNAL(targetIPAddressChanged()),
                 this, SLOT(targetAddressChanged()));
    targetAddressChanged(); // for initial setup
[...]

void UDPClient::targetAddressChanged()
{
    Settings *settings = Settings::sharedInstance();
    // Do something with settings->getTargetIPAddress()
}

I hope this will help.
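The Settings example above boils down to one pattern: a setter that emits a change signal, and observers that connect a callback to it. For readers who want the concept without Qt, here is a minimal framework-free Python sketch of the same idea (all names are made up for illustration; in Qt, the connect/emit plumbing is provided for you):

```python
class Signal:
    """A tiny stand-in for a Qt signal: observers register callbacks."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self):
        for slot in self._slots:
            slot()


class Settings:
    """Mirrors the Settings class above: the setter emits a change signal."""
    def __init__(self):
        self._target_ip = "192.168.0.1"        # default, as in getTargetIPAddress()
        self.target_ip_changed = Signal()      # analogue of targetIPAddressChanged()

    def get_target_ip_address(self):
        return self._target_ip

    def set_target_ip_address(self, ip):
        self._target_ip = ip
        self.target_ip_changed.emit()          # notify observers, like emit in C++


settings = Settings()
seen = []
settings.target_ip_changed.connect(
    lambda: seen.append(settings.get_target_ip_address()))
settings.set_target_ip_address("10.0.0.5")
```

The observing object never polls; it reacts only when the setter announces a change, which is exactly what the SIGNAL/SLOT connect call above achieves.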

Tags: BlackBerry Developers

Similar Questions

  • Find the datastore with the most free space.

    So I'm trying to build a script to provision a virtual machine.

    I have it pick the node to put the newly built VM on based on the cluster I tell it to use; I'm just using a Random function to select a node in the cluster.

    Where I have questions is how I can

    find the least-used datastore, based on a particular datastore naming scheme.

    What I mean is:

    hypothetically, say I have datastores named:

    datastore-prod-01  200 GB free

    datastore-prod-02  500 GB free

    datastore-prod-03  10 GB free

    datastore-qa-01    200 GB free

    datastore-qa-02    1000 GB free

    I want to throw in a piece of code that looks at the datastores matching "datastore-prod*" and places the virtual machine on the datastore with the most free space (assuming the VM will fit on that datastore and leave some room free).

    I guess I want to know if this is possible?

    I would also be concerned about the scenario where the VM I'm building just won't fit on any of my datastores. I guess I need some logic to check whether it is still possible.

    This is more of a wish than a necessity. I'm thinking I could just read the info, or use a CSV file after running the script. Any recommendations would be greatly appreciated.

    Hello, drivera01-

    You should be able to do this with a very small amount of code.  Get all the datastores that match the name pattern, sort by free space (descending) and select the top one.  Like:

    ## get the datastore matching datastore-prod* that has the most freespace
    $oDatastoreWithMostFree = Get-Datastore datastore-prod* | Sort-Object -Property FreespaceGB -Descending:$true | Select-Object -First 1

    ## if the freespace is greater than the size needed for the new VM plus a bit of buffer space
    if ($oDatastoreWithMostFree.FreespaceGB -gt ($intNewVMDiskSize + 20)) {<# do the provisioning to this datastore #>}
    else {"oh, no -- not enough freespace on datastore '$($oDatastoreWithMostFree.Name)' to provision new VM"}
    

    The second part, where it checks for sufficient free space on the datastore that has the most free, can be updated to behave as you need, but that should be the basis.  How does this look?
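    As a language-neutral sketch of the same select-and-check logic (names and sizes below are invented; in PowerCLI, Get-Datastore supplies the real ones):

```python
import fnmatch

# Hypothetical inventory standing in for Get-Datastore output: (name, free GB).
datastores = [
    ("datastore-prod-01", 200),
    ("datastore-prod-02", 500),
    ("datastore-prod-03", 10),
    ("datastore-qa-01", 200),
]

def pick_datastore(stores, pattern, needed_gb, buffer_gb=20):
    """Return the matching datastore with the most free space, or None if even
    the best match cannot hold needed_gb plus a little buffer space."""
    matching = [s for s in stores if fnmatch.fnmatch(s[0], pattern)]
    if not matching:
        return None
    best = max(matching, key=lambda s: s[1])
    return best if best[1] >= needed_gb + buffer_gb else None

print(pick_datastore(datastores, "datastore-prod*", 100))
```

    Returning None covers both worry cases from the question: no datastore matches the naming scheme, or the biggest match still lacks room.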

  • Error: Data rows with unmapped dimensions exist for period "1 April 2014".

    Hi Experts,

    I get the error below when I click the Execute button to load data in the Data Load area of Workspace 11.1.2.3. I have already set up Global Mapping (added records for 12 months), Application Mapping (added records for 12 months) and Source Mapping (added one month, "1 April 2014", as the period name with Mapping Type = Explicit) under Period Mapping. What else should I check to fix this? Thank you.

    2014-04-29 06:10:35,624 [AIF] INFO: Beginning of FDMEE process, process ID: 56
    2014-04-29 06:10:35,625 [AIF] INFO: FDMEE logging level: 4
    2014-04-29 06:10:35,625 [AIF] INFO: FDMEE log file: null\outbox\logs\AAES_56.log
    2014-04-29 06:10:35,625 [AIF] INFO: User: admin
    2014-04-29 06:10:35,625 [AIF] INFO: Location: AAESLocation (Partitionkey:2)
    2014-04-29 06:10:35,626 [AIF] INFO: Period name: Apr 1, 2014 (period key: 4/1/14 12:00 AM)
    2014-04-29 06:10:35,627 [AIF] INFO: Category name: AAESGCM (category key: 2)
    2014-04-29 06:10:35,627 [AIF] INFO: Rule name: AAESDLR (rule ID:7)
    2014-04-29 06:10:37,504 [AIF] INFO: Jython version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2014-04-29 06:10:37,504 [AIF] INFO: Java platform: java1.6.0_37
    2014-04-29 06:10:39,364 [AIF] INFO: - START IMPORT STEP -
    2014-04-29 06:10:45,727 [AIF] INFO:
    Importing source data for period "1 April 2014".
    2014-04-29 06:10:45,742 [AIF] INFO:
    Importing source data for ledger "ABC_LEDGER".
    2014-04-29 06:10:45,765 [AIF] INFO: Monetary data rows imported from source: 12
    2014-04-29 06:10:45,783 [AIF] INFO: Total data rows from source: 12
    2014-04-29 06:10:46,270 [AIF] INFO:
    Mapping data for period "1 April 2014".
    2014-04-29 06:10:46,277 [AIF] INFO:
    Processing mappings for column "ACCOUNT".
    2014-04-29 06:10:46,280 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,280 [AIF] INFO:
    Processing mappings for column "ENTITY".
    2014-04-29 06:10:46,281 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,281 [AIF] INFO:
    Processing mappings for column "UD1".
    2014-04-29 06:10:46,282 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,282 [AIF] INFO:
    Processing mappings for column "UD2".
    2014-04-29 06:10:46,283 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,312 [AIF] INFO:
    Staging data for period "1 April 2014".
    2014-04-29 06:10:46,315 [AIF] INFO: Number of rows deleted from TDATAMAPSEG: 171
    2014-04-29 06:10:46,321 [AIF] INFO: Number of rows inserted into TDATAMAPSEG: 171
    2014-04-29 06:10:46,324 [AIF] INFO: Number of rows deleted from TDATAMAP_T: 171
    2014-04-29 06:10:46,325 [AIF] INFO: Number of rows deleted from TDATASEG: 12
    2014-04-29 06:10:46,331 [AIF] INFO: Number of rows inserted into TDATASEG: 12
    2014-04-29 06:10:46,332 [AIF] INFO: Number of rows deleted from TDATASEG_T: 12
    2014-04-29 06:10:46,366 [AIF] INFO: - END IMPORT STEP -
    2014-04-29 06:10:46,408 [AIF] INFO: - START VALIDATE STEP -
    2014-04-29 06:10:46,462 [AIF] INFO:
    Validating data mappings for period "1 April 2014".
    2014-04-29 06:10:46,473 [AIF] INFO: Data rows marked as invalid: 12
    2014-04-29 06:10:46,473 [AIF] ERROR: Error: Data rows with unmapped dimensions exist for period "1 April 2014".
    2014-04-29 06:10:46,476 [AIF] INFO: Total data rows available for export to target: 0
    2014-04-29 06:10:46,478 [AIF] FATAL: Error in CommMap.validateData
    Traceback (most recent call last):
      File "<string>", line 2348, in validateData
    RuntimeError: [u'Error: Data rows with unmapped dimensions exist for period "1 April 2014".']

    2014-04-29 06:10:46,551 [AIF] FATAL: COMM error validating data
    2014-04-29 06:10:46,556 [AIF] INFO: End of FDMEE process, process ID: 56

    Thanks to all you guys

    This problem was solved after I mapped all dimensions in the data load. I had mapped only Entity, Account, Custom1 and Custom2 at first, because there was no source mapping for Custom3, Custom4 and ICP. After adding the mappings for Custom3, Custom4 and ICP, the problem was resolved. This is why all dimensions should be mapped here.

  • Looking for an example of a GET handler with the PL/SQL source type

    Hello

    I currently have a GET handler with a QUERY source type running in ORDS 3.0.  The call returns a large number of records (hundreds, even tens of thousands) in JSON and allows the user to page through the results.

    The call accepts several parameters that I simply plug into the WHERE clause of the query in the handler.  Data is retrieved by date range(s).

    To prevent runaway queries with long date ranges, I've implemented logic in the SQL statement to limit the requester.  I would rather have an error thrown back to the requester instead.  There are other limits that I'm imposing on other inputs where I need to provide useful errors as well.

    I would like to change the GET call to use the PL/SQL source type instead of the QUERY type so that I can validate the inputs and send useful error messages in return if necessary; however, I can't find examples online.

    Any examples (or alternatives) would be greatly appreciated.

    Thank you

    Anthony

    Hello

    You can use pipelined functions to generate your recordset in PL/SQL and then just use a SELECT statement over the pipelined function in the service definition.

  • Creating folders in the datastore view with PowerCLI

    Hello

    We are trying to automate some parts of a build script and we want to create a folder in the datastore view to move all the local disks into.  The only places I have been able to create a folder are the Cluster, Datacenter and VM views.  Is there a way to do this?

    Thank you

    Matt

    It is done via a hidden folder named "datastore".

    You can do

    $dsHome = Get-Folder -Name datastore
    New-Folder -Name MyFolder -Location $dsHome
    

    Note that there is one "datastore" folder per datacenter.

    If you have more than one datacenter in your vCenter, you need to indicate from which datacenter you want the "datastore" folder.

    $dc = Get-Datacenter -Name DC
    $dsHome = Get-Folder -Name datastore -Location $dc
    New-Folder -Name MyFolder -Location $dsHome
    
  • Does synchronizing the data dictionary with the model only work for imported models?

    When I import a model from the data dictionary (File -> Import -> Data Dictionary), then in the relational model the two buttons "Synchronize Data Dictionary with Model" and "Synchronize Model with Data Dictionary" work very well.

    But when I create a model from scratch and click the buttons "Synchronize Data Dictionary with Model" or "Synchronize Model with Data Dictionary", nothing happens.

    Does it only work for imported models?

    (Data Modeler 3.3 EA)

    Hello

    Yes, Synchronize only works for objects that were originally imported (as Synchronize uses the information recorded during the import to determine which database connection and which database object to compare against).

    If your model was not imported, you can achieve the same effect as follows:
    - Open the model in Data Modeler and also open the relevant physical model.
    - Do a Data Dictionary import, selecting the objects you want to compare with.

    After the import phase, this will display the compare dialog box, showing the differences between the objects imported from the database and your model.

    Note that if you intend to generate DDL to update your database from the differences (as in "Synchronize Data Dictionary with Model"), you must select the "Swap Target Model" option in step 2 (select database schema) of the Data Dictionary import wizard.

    David

  • Foreign key constraint not recognized when synchronizing the data dictionary with the model

    Hello

    Data Modeler does not recognize a foreign key constraint when synchronizing the data dictionary with the model, although the foreign key is there (in the database whose data dictionary is being read). I can't find any criterion for when a foreign key is not recognized by Data Modeler. Are there limits on attribute length, the number of columns in a foreign key, or other limitations that could lead to this behavior of not recognizing an FK in Data Modeler? I have columns of more than 32 characters. I compared with an FK that is recognized by DM, but I can't find anything that indicates why this one is not recognized.

    I wonder if anyone else has foreign key constraints that are not recognized when comparing database and model?

    Thank you

    Robert

    Hi Robert,

    Thanks for the comments, I logged a bug.

    Philippe

  • DELETING THE OLDEST SQL DATA with PHP

    Hello

    I have chat software in which I would like to delete the oldest chat once there are more than 5 chats, so that the latest 5 chats appear and anything beyond 5 is deleted. Thank you. Here is the code.

    Querying the database:

    $id = $_SESSION['id'];
    $collect = mysql_query("SELECT * FROM chats WHERE userid = '$id'");
    $collectnum = mysql_num_rows($collect);

    if ($collectnum > 5)
    {
        // remove old chats if the user has more than 5 chats
        $delete = mysql_query("DELETE * FROM chats WHERE userid = '$id' ORDER BY ASC LIMIT 1");
    }
    else
    {
        while ($rows = mysql_fetch_assoc($collect))
        {
            // ...show all of the individual user's chats...
        }
    }

    I hoped the code above would delete the user's oldest chat in the database, but after the fifth chat the delete does not work as expected. The code is meant to delete things in ascending order, so that the oldest chat is deleted first.

    My problem is that the code above does not work.

    Tony404 wrote:

    I have chat software in which I would like to delete the oldest chat once there are more than 5 chats, so that the latest 5 chats appear and anything beyond 5 is deleted.

    Moved to the Dreamweaver application development forum, which is more suitable for this kind of question.

    I wonder why you want to delete the older material. If you simply want to display only the latest five, assuming that the primary key for the table is chat_id, you can do this:

    $id = $_SESSION['id'];
    $result = mysql_query("SELECT * FROM chats WHERE userid=$id
                           ORDER BY chat_id DESC LIMIT 5");
    

    However, if you really want to delete the older records, you can do it like this:

    $id = $_SESSION['id'];
    $result = mysql_query("SELECT COUNT(*) FROM chats WHERE userid = $id");
    
    // if more than 5 chats, delete the old ones
    if (mysql_result($result, 0) > 5) {
      // get the primary key of the sixth newest
      $result = mysql_query("SELECT chat_id FROM chats WHERE userid = $id
                             ORDER BY chat_id DESC LIMIT 5,1");
      $row = mysql_fetch_assoc($result);
      $no6 = $row['chat_id'];
      mysql_query("DELETE FROM chats WHERE userid = $id
                   AND chat_id <= $no6");
    }
    

    You can then retrieve all the records and display them. Replace "chat_id" in the previous code with the name of the primary key column in your chats table.
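    As a language-neutral illustration of the "keep only the newest five" logic above (a Python sketch over an in-memory list; the column names are the ones used in the SQL, everything else is made up):

```python
# Hypothetical in-memory stand-in for the chats table: chat_id is the primary
# key, so higher ids are newer -- the same assumption the SQL above makes.
chats = [{"chat_id": i, "userid": 7, "text": f"message {i}"} for i in range(1, 9)]

def prune_old_chats(rows, userid, keep=5):
    """Drop a user's oldest chats, keeping only the `keep` newest --
    the row-level equivalent of DELETE ... AND chat_id <= cutoff."""
    mine = sorted((r for r in rows if r["userid"] == userid),
                  key=lambda r: r["chat_id"], reverse=True)
    keep_ids = {r["chat_id"] for r in mine[:keep]}
    return [r for r in rows if r["userid"] != userid or r["chat_id"] in keep_ids]

chats = prune_old_chats(chats, userid=7)
print([r["chat_id"] for r in chats])  # [4, 5, 6, 7, 8]
```

    The cutoff idea is the same as the SQL answer: find the boundary between the newest five and everything older, then remove the older side.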

  • iPhone weak signal in battery usage (6 apps with weak signal)

    Hello world

    I bought a second-hand iPhone. It works fine, but the battery runs out quickly when I use it (especially in Safari). Under battery usage I always see "weak signal" under the apps, and always under Phone, even though I have 4/5 bars. I haven't found a solution yet...

    https://drive.Google.com/file/d/0B0aW-K-0VfulQW1LVVJCVHc0OW8/view?USP=sharing (here's a screenshot)

    Thank you in advance, I hope that your answers will help me.

    Hi luca9903,

    I see you are a new user here in the Apple Support Communities - welcome! I hope we'll see you contributing often in the future.

    If you are getting short battery life when using your iPhone, you can use the information on this page to help extend it - Batteries - Maximize Performance - Apple

    Thank you for using Apple Support Communities.

    Sincerely.

  • Writing data to an array with a fixed size

    Hi fellow LabVIEW users,

    My problem is very basic, I'm sorry about that, I'm just a beginner

    I get continuous data from a function and I want to write it into an array with a fixed size. The fixed size is because I want to get the max/min of the latest x entries. I have tried this for an hour now with different approaches, but nothing seems to work, so I'm quite frustrated and posting my problem here.

    I hope the VI is understandable even with the foreign-language labels. Basically, I want two LEDs to light up if my data within a period is equal to or less than a reference value.

    I'm grateful for any hint or help.

    Use the Min & Max PtByPt VI with the desired history buffer length.
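    For readers outside LabVIEW, the same fixed-length history buffer can be sketched in Python with collections.deque (the buffer length, samples and threshold below are made-up values):

```python
from collections import deque

BUFFER_LEN = 4   # made-up history length
THRESHOLD = 6    # made-up reference value

history = deque(maxlen=BUFFER_LEN)   # the oldest sample is dropped automatically

def process_sample(x):
    """Append a sample, then report min/max over the last BUFFER_LEN samples
    and whether that window's max stayed at or below the reference value."""
    history.append(x)
    lo, hi = min(history), max(history)
    return lo, hi, hi <= THRESHOLD

for sample in [3, 9, 2, 5, 4, 1]:
    lo, hi, ok = process_sample(sample)

print(lo, hi, ok)  # final window is [2, 5, 4, 1] -> 1 5 True
```

    The deque's maxlen does what the fixed-size array was meant to do: the window always holds the latest x entries, so min/max over it is the rolling min/max.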

  • Developing with Transparent Data Encryption on Oracle 10g XE

    I am currently developing an application that will require encrypted columns in some tables. I will recommend that the customer buy an Oracle database for the application, and I have installed Oracle 10g XE to begin development. I found that I can't create tables with TDE columns, though, because I can't create a wallet. I searched the forums and found that a wallet manager is not available with Oracle XE.

    My plan was to develop the application and then provide creation scripts to the customer's DBA so that they can create the data tables in their Oracle database... Can I develop the application without Transparent Data Encryption and then tell the DBA that it must be implemented in the production version of the application? Does the application need to know the wallet/TDE password to encrypt/decrypt the columns?

    Any ideas how I could go about developing on the customer's Oracle XE database without access to TDE?

    The T in TDE is transparent, so your application should not even need to be aware that any columns or storage are encrypted. Transparent Data Encryption is generally implemented in systems that were never designed to encrypt data, so in theory it should be 'perfectly safe' to develop unencrypted and have the client encrypt the columns during installation.

    Of course, when marketing people start talking about things that are 'perfectly safe', it is always a sign of danger to come. Although I have never heard of a case where encrypting a column caused a problem for an application, I would be very wary of developing in an environment different from production. This includes the exact version of the database (I assume the customer has installed the latest patchsets, so they are running 10.2.0.4, for example) as well as the edition. If you decide to rely on everything going smoothly when you promote to a different version of a different edition of the database with a different schema definition, even though it normally would, you virtually guarantee that you will end up with a problem that is difficult to track down.

    In your case, I would not use XE for development. It would be much safer to develop against Personal Edition. It is not free, but it is the Enterprise Edition database licensed to run on developer machines, and it costs much less than an Enterprise Edition license.

    Justin

  • Where can I find the class-compliant IVI-C header files

    I installed the IVI files from the IVI Foundation web site, but the installation only includes the files for IVI-COM, not IVI-C.

    I'm looking for the class-compliant header files: IVIdmm.h, IVIscope.h, IVIspecana.h, etc. The ones in the include directory are for COM, i.e. IVIdmmTypeLib.h.

    I asked the question on the IVI Foundation forum and they said that IVI-C has nothing to do with them, and that it is maintained by National Instruments.

    So does NI have a download of these files?

    I think someone at the IVI Foundation is mistaken, but in any case, NI has them.

    http://www.NI.com/download/IVI-compliance-package-4.5/3065/en/

  • How can I see the raw voltage values when acquiring data with the DAQ Assistant and a custom scale?

    I am using a custom scale and the DAQ Assistant to acquire data from a USB DAQ device.  How can I display the raw voltage values at the same time?

    Thank you

    David

    Don't use the custom scale; instead, read the raw data using the DAQ Assistant and scale the data afterwards. Or scale the acquired data back using the inverse of the custom scale.
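    Assuming the custom scale is linear (a sketch with made-up coefficients, not NI's API), "the inverse of the custom scale" just means solving the scale equation for volts:

```python
# Hypothetical linear custom scale: scaled = M * volts + B.
M, B = 2.5, 0.7

def to_scaled(volts):
    """Apply the custom scale, as the DAQ Assistant would."""
    return M * volts + B

def to_volts(scaled):
    """Invert the custom scale to recover the raw voltage."""
    return (scaled - B) / M

scaled_reading = to_scaled(1.2)
print(round(to_volts(scaled_reading), 6))  # 1.2
```

    With both functions in hand, you can log the scaled values and still display the raw voltages side by side.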

  • WRT1900AC slows down data throughput with Brighthouse

    I recently bought a WRT1900AC.  It syncs right up with my Brighthouse modem, but I noticed that the maximum data rate I can get through wired connections is 7.2 Mbps.  When Brighthouse comes out to check the system, they read 22 Mbit/s on the modem.

    Any reason why the router would reduce my throughput so much?

    Excellent advice.  That was the fix I needed.

  • Pivoting output data with aggregation

    Hello

    Here is the output I have:

    node_id | object_name  | att_value | attribute_name
    469988  | Serum Sample | Project   | GSKMDR - Status
    469988  | Serum Sample | 1         | GSKMDR - Version

    Based on the above output, I need to get output in the form below:

    node_id | object_name  | Status  | Version
    469988  | Serum Sample | Project | 1

    I tried to use PIVOT and wm_concat, but no luck. Can you please share any ideas?

    My query is given below.

    WITH first_req AS
      (SELECT node_id, object_name, att_value, attribute_name
         FROM (SELECT n.node_id, o.NAME object_name, o.object_id,
                      att.name_display_code att_ndc, att.VALUE att_value,
                      atty.NAME attribute_name, n.deletion_date
                 FROM vp40.nodes n,
                      vp40.objects o,
                      vp40.attributes att,
                      vp40.attribute_types atty
                WHERE n.object_id = o.object_id
                  AND o.object_id = att.object_id
                  AND att.attribute_type_id = atty.attribute_type_id) t
        WHERE deletion_date = '01-JAN-1900'
          AND attribute_name IN ('GSKMDR - Version', 'GSKMDR - Status')
          AND node_id = :node_id
              -- node id of the object "GSKMDR - Concept Template"
      )
    SELECT * FROM first_req

    ravt261 wrote:
    Hello

    Here is the output I have:

    node_id | object_name  | att_value | attribute_name
    469988  | Serum Sample | Project   | GSKMDR - Status
    469988  | Serum Sample | 1         | GSKMDR - Version

    Based on the above output, I need to get output in the form below:

    node_id | object_name  | Status  | Version
    469988  | Serum Sample | Project | 1

    I tried to use PIVOT and wm_concat, but no luck. Can you please share any ideas?

    Well, PIVOT should work, though you have not shown what you tried in that regard.
    WM_CONCAT is an undocumented function, so you would be unwise to use it, because its functionality may change or it may be removed in future versions.

    This will pivot the data based on what you have provided (and it will work in versions older than 11g):

    select node_id, object_name
          ,max(decode(attribute_name,'GSKMDR - Status',att_value)) as status
          ,max(decode(attribute_name,'GSKMDR - Version',att_value)) as version
    from first_req
    group by node_id, object_name
    order by node_id

    My query is given below.

    WITH first_req AS
      (SELECT node_id, object_name, att_value, attribute_name
         FROM (SELECT n.node_id, o.NAME object_name, o.object_id,
                      att.name_display_code att_ndc, att.VALUE att_value,
                      atty.NAME attribute_name, n.deletion_date
                 FROM vp40.nodes n,
                      vp40.objects o,
                      vp40.attributes att,
                      vp40.attribute_types atty
                WHERE n.object_id = o.object_id
                  AND o.object_id = att.object_id
                  AND att.attribute_type_id = atty.attribute_type_id) t
        WHERE deletion_date = '01-JAN-1900'

    Dates should be treated as dates, not strings.

                   WHERE deletion_date = to_date('01-JAN-1900','DD-MON-YYYY')
    

    or (as it is just a date with no time component)...

                   WHERE deletion_date = date '1900-01-01'
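    The conditional-aggregation trick (max(decode(...)) per wanted column) can also be sketched procedurally. Here is a Python illustration using the two sample rows from the question, reading each row as (node_id, object_name, att_value, attribute_name):

```python
# The sample rows from the question.
rows = [
    (469988, "Serum Sample", "Project", "GSKMDR - Status"),
    (469988, "Serum Sample", "1",       "GSKMDR - Version"),
]

# Group on (node_id, object_name) and route each att_value to the column
# named by its attribute_name -- the same effect as max(decode(...)) in SQL.
pivoted = {}
for node_id, object_name, att_value, attribute_name in rows:
    rec = pivoted.setdefault((node_id, object_name),
                             {"Status": None, "Version": None})
    if attribute_name == "GSKMDR - Status":
        rec["Status"] = att_value
    elif attribute_name == "GSKMDR - Version":
        rec["Version"] = att_value

print(pivoted)
```

    Each group key ends up as one output row with one column per attribute name, which is exactly the shape the question asks for.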
    
