| Unnamed: 0 (int64) | level_0 (int64) | ApplicationNumber (int64) | ArtUnit (int64) | Abstract (string) | Claims (string) | abstract-claims (string) | TechCenter (int64) |
|---|---|---|---|---|---|---|---|
6,600 | 6,600 | 15,087,081 | 2,162 | Embodiments are disclosed for improving scalability and efficiency of an online transaction processing (OLTP) system. In the context of a method, an example embodiment includes assigning, by revisioning circuitry and in response to receiving a change data instruction to edit one or more data tables stored by the OLTP system, a global revision number to the change data instruction, wherein the global revision number is unique within the OLTP system, and updating, by data modeling circuitry, one or more records in the one or more data tables stored by the OLTP system based on the change data instruction. The example method further includes inserting, by data auditing circuitry, one or more audit records corresponding to the one or more updated records into one or more audit tables corresponding to the one or more data tables. Corresponding apparatuses and computer program products are also provided. | 1. A method for improving scalability and efficiency of an online transaction processing (OLTP) system, the method comprising:
assigning, by revisioning circuitry and in response to receiving a change data instruction to edit one or more data tables stored by the OLTP system, a global revision number to the change data instruction, wherein the global revision number is unique within the OLTP system; updating, by data modeling circuitry, one or more records in the one or more data tables stored by the OLTP system based on the change data instruction; and inserting, by data auditing circuitry, one or more audit records corresponding to the one or more updated records into one or more audit tables corresponding to the one or more data tables, wherein each audit record includes a revision number field identifying the global revision number and a revision type field indicating whether a corresponding updated record of the one or more updated records is newly added, modified from a previous version, or deleted. 2. The method of claim 1, wherein assigning the global revision number to the revision includes:
generating the global revision number; and storing the global revision number in a global revision tracking table. 3. The method of claim 1, wherein inserting the one or more audit records corresponding to the one or more updated records into the one or more audit tables corresponding to the one or more tables includes:
generating insert statements for the one or more audit tables; and for each particular audit table corresponding to a particular data table of the one or more data tables,
invoking the insert statement generated for the particular audit table to add a subset of the one or more audit records into the particular audit table that correspond to a subset of the updated records that are stored in the particular data table. 4. The method of claim 3, wherein generating the insert statements includes:
analyzing metadata of the one or more data tables; building insert statements for the one or more audit tables based on the metadata of each corresponding data table; and caching the insert statements. 5. The method of claim 3, wherein invoking the insert statement generated for a particular audit table includes:
binding, to the insert statement generated for the particular audit table, row data in the particular data table that describes the subset of the updated records that are stored in the particular data table; and causing execution of the insert statement generated for the particular audit table. 6. The method of claim 3, wherein inserting the one or more audit records into the one or more audit tables includes invoking insert statements corresponding to multiple audit tables in a batch process. 7. The method of claim 1, wherein the change data instruction is received from:
a Java 2 Platform, Enterprise Edition (J2EE) application using a Java database connectivity (JDBC) driver; or a data warehouse extract, transform, and load (ETL) process. 8. An apparatus for improving scalability and efficiency of an online transaction processing (OLTP) system, the apparatus comprising at least one processor and at least one memory storing computer-executable instructions that, when executed by the at least one processor, cause the apparatus to:
assign, in response to receiving a change data instruction to edit one or more data tables stored by the OLTP system, a global revision number to the change data instruction, wherein the global revision number is unique within the OLTP system; update one or more records in the one or more data tables stored by the OLTP system based on the change data instruction; and insert one or more audit records corresponding to the one or more updated records into one or more audit tables corresponding to the one or more data tables, wherein each audit record includes a revision number field identifying the global revision number and a revision type field indicating whether a corresponding updated record of the one or more updated records is newly added, modified from a previous version, or deleted. 9. The apparatus of claim 8, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to assign the global revision number to the revision by causing the apparatus to:
generate the global revision number; and store the global revision number in a global revision tracking table. 10. The apparatus of claim 8, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to insert the one or more audit records corresponding to the one or more updated records into the one or more audit tables corresponding to the one or more tables by causing the apparatus to:
generate insert statements for the one or more audit tables; and for each particular audit table corresponding to a particular data table of the one or more data tables,
invoke the insert statement generated for the particular audit table to add a subset of the one or more audit records into the particular audit table that correspond to a subset of the updated records that are stored in the particular data table. 11. The apparatus of claim 10, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to generate the insert statements by causing the apparatus to:
analyze metadata of the one or more data tables; build insert statements for the one or more audit tables based on the metadata of each corresponding data table; and cache the insert statements. 12. The apparatus of claim 10, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to invoke the insert statement generated for a particular audit table by causing the apparatus to:
bind, to the insert statement generated for the particular audit table, row data in the particular data table that describes the subset of the updated records that are stored in the particular data table; and cause execution of the insert statement generated for the particular audit table. 13. The apparatus of claim 10, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to insert the one or more audit records into the one or more audit tables by causing the apparatus to invoke insert statements corresponding to multiple audit tables in a batch process. 14. The apparatus of claim 8, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to receive the change data instruction from:
a Java 2 Platform, Enterprise Edition (J2EE) application using a Java database connectivity (JDBC) driver; or a data warehouse extract, transform, and load (ETL) process. 15. A computer program product comprising at least one non-transitory computer-readable storage medium for improving scalability and efficiency of an online transaction processing (OLTP) system, the at least one non-transitory computer-readable storage medium storing computer-executable instructions that, when executed, cause an apparatus to:
assign, in response to receiving a change data instruction to edit one or more data tables stored by the OLTP system, a global revision number to the change data instruction, wherein the global revision number is unique within the OLTP system; update one or more records in the one or more data tables stored by the OLTP system based on the change data instruction; and insert one or more audit records corresponding to the one or more updated records into one or more audit tables corresponding to the one or more data tables, wherein each audit record includes a revision number field identifying the global revision number and a revision type field indicating whether a corresponding updated record of the one or more updated records is newly added, modified from a previous version, or deleted. 16. The computer program product of claim 15, wherein the computer-executable instructions, when executed, cause the apparatus to assign the global revision number to the revision by causing the apparatus to:
generate the global revision number; and store the global revision number in a global revision tracking table. 17. The computer program product of claim 15, wherein the computer-executable instructions, when executed, cause the apparatus to insert the one or more audit records corresponding to the one or more updated records into the one or more audit tables corresponding to the one or more tables by causing the apparatus to:
generate insert statements for the one or more audit tables; and for each particular audit table corresponding to a particular data table of the one or more data tables,
invoke the insert statement generated for the particular audit table to add a subset of the one or more audit records into the particular audit table that correspond to a subset of the updated records that are stored in the particular data table. 18. The computer program product of claim 17, wherein the computer-executable instructions, when executed, cause the apparatus to generate the insert statements by causing the apparatus to:
analyze metadata of the one or more data tables; build insert statements for the one or more audit tables based on the metadata of each corresponding data table; and cache the insert statements. 19. The computer program product of claim 17, wherein the computer-executable instructions, when executed, cause the apparatus to invoke the insert statement generated for a particular audit table by causing the apparatus to:
bind, to the insert statement generated for the particular audit table, row data in the particular data table that describes the subset of the updated records that are stored in the particular data table; and cause execution of the insert statement generated for the particular audit table. 20. The computer program product of claim 17, wherein the computer-executable instructions, when executed, cause the apparatus to insert the one or more audit records into the one or more audit tables by causing the apparatus to invoke insert statements corresponding to multiple audit tables in a batch process. | (abstract-claims: verbatim concatenation of the Abstract and Claims columns for this record, omitted here) | 2,100 |
6,601 | 6,601 | 14,337,189 | 2,159 | The approaches described herein provide an efficient way for a database server to support storage and retrieval for any of a growing number of semi-structured data formats. In one embodiment, a set of generic semi-structured data operators are provided that enable users to query, update, and validate data stored in any of a number of semi-structured data formats. In this context, a “generic” semi-structured data operator means a data operator that may be configured to operate on any number of different semi-structured data formats. For example, according to one embodiment, the same set of generic semi-structured data operators may be used to operate on data stored according to the XML, JSON, or any number of other semi-structured data formats. | 1. A method comprising:
receiving a query expression against a collection of semi-structured data stored in one or more tables of a database, wherein the query expression includes a generic semi-structured data operator; wherein the semi-structured data operator includes a parameter specifying a particular semi-structured data format; in response to receiving the query expression, performing an operation specified by the semi-structured data operator against the semi-structured data; wherein the method is performed by one or more computing devices. 2. The method of claim 1, further comprising:
wherein the query expression is a first query expression and the semi-structured data format is a first semi-structured data format; receiving a second query expression including the generic semi-structured data operator, wherein the semi-structured data operator specifies a second semi-structured data format that is different than the first semi-structured data format; in response to receiving the second query expression, performing the operation against the semi-structured data. 3. The method of claim 1, wherein the semi-structured data operator is a value operator which applies a specified semi-structured data query language expression to specified input data, wherein applying the specified semi-structured data query language expression to the specified input data includes extracting a value from the specified input data and casting the value to a SQL data type. 4. The method of claim 1, wherein the semi-structured data operator is an exists operator which applies a specified semi-structured query language expression to input data and returns a value based on whether or not the expression returns one or more data items. 5. The method of claim 1, wherein the semi-structured data operator is a query operator which applies a semi-structured query language expression to input data and returns a result that is part of the input data. 6. The method of claim 1, wherein the semi-structured data operator is a validity operator which validates input data for conformity with a particular semi-structured data format. 7. The method of claim 1, wherein the semi-structured data operator is a table function which maps a result of one or more semi-structured query language expressions into one or more relational rows and columns. 8. The method of claim 1, wherein the semi-structured data operator is an object operator which generates a data object representing semi-structured output data, wherein the semi-structured output data is generated based on one or more SQL expressions. 
9. The method of claim 1, wherein the semi-structured data operator is an update operator which modifies, inserts, or deletes one or more components of the semi-structured data. 10. The method of claim 1, wherein the semi-structured data operator is executed based on an implementation module associated with the particular semi-structured data format. 11. The method of claim 1, wherein the semi-structured data operator processes a set of input rows and produces a set of output rows. 12. The method of claim 10, wherein the implementation module includes a compiler interface configured to compile semi-structured query language expressions. 13. A non-transitory computer-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform a method, comprising:
receiving a query expression against a collection of semi-structured data stored in one or more tables of a database, wherein the query expression includes a generic semi-structured data operator; wherein the semi-structured data operator includes a parameter specifying a particular semi-structured data format; in response to receiving the query expression, performing an operation specified by the semi-structured data operator against the semi-structured data. 14. The non-transitory computer-readable medium of claim 13, further comprising:
wherein the query expression is a first query expression and the semi-structured data format is a first semi-structured data format; receiving a second query expression including the generic semi-structured data operator, wherein the semi-structured data operator specifies a second semi-structured data format that is different than the first semi-structured data format; in response to receiving the second query expression, performing the operation against the semi-structured data. 15. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is a value operator which applies a specified semi-structured data query language expression to specified input data, wherein applying the specified semi-structured data query language expression to the specified input data includes extracting a value from the specified input data and casting the value to a SQL data type. 16. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is an exists operator which applies a specified semi-structured query language expression to input data and returns a value based on whether or not the expression returns one or more data items. 17. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is a query operator which applies a semi-structured query language expression to input data and returns a result that is part of the input data. 18. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is a validity operator which validates input data for conformity with a particular semi-structured data format. 19. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is a table function which maps a result of one or more semi-structured query language expressions into one or more relational rows and columns. 20. 
The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is an object operator which generates a data object representing semi-structured output data, wherein the semi-structured output data is generated based on one or more SQL expressions. 21. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is an update operator which modifies, inserts, or deletes one or more components of the semi-structured data. 22. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is executed based on an implementation module associated with the particular semi-structured data format. 23. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator processes a set of input rows and produces a set of output rows. 24. The non-transitory computer-readable medium of claim 22, wherein the implementation module includes a compiler interface configured to compile semi-structured query language expressions. | The approaches described herein provide an efficient way for a database server to support storage and retrieval for any of a growing number of semi-structured data formats. In one embodiment, a set of generic semi-structured data operators are provided that enable users to query, update, and validate data stored in any of a number of semi-structured data formats. In this context, a “generic” semi-structured data operator means a data operator that may be configured to operate on any number of different semi-structured data formats. For example, according to one embodiment, the same set of generic semi-structured data operators may be used to operate on data stored according to the XML, JSON, or any number of other semi-structured data formats.1. A method comprising:
receiving a query expression against a collection of semi-structured data stored in one or more tables of a database, wherein the query expression includes a generic semi-structured data operator; wherein the semi-structured data operator includes a parameter specifying a particular semi-structured data format; in response to receiving the query expression, performing an operation specified by the semi-structured data operator against the semi-structured data; wherein the method is performed by one or more computing devices. 2. The method of claim 1, further comprising:
wherein the query expression is a first query expression and the semi-structured data format is a first semi-structured data format; receiving a second query expression including the generic semi-structured data operator, wherein the semi-structured data operator specifies a second semi-structured data format that is different than the first semi-structured data format; in response to receiving the second query expression, performing the operation against the semi-structured data. 3. The method of claim 1, wherein the semi-structured data operator is a value operator which applies a specified semi-structured data query language expression to specified input data, wherein applying the specified semi-structured data query language expression to the specified input data includes extracting a value from the specified input data and casting the value to a SQL data type. 4. The method of claim 1, wherein the semi-structured data operator is an exists operator which applies a specified semi-structured query language expression to input data and returns a value based on whether or not the expression returns one or more data items. 5. The method of claim 1, wherein the semi-structured data operator is a query operator which applies a semi-structured query language expression to input data and returns a result that is part of the input data. 6. The method of claim 1, wherein the semi-structured data operator is a validity operator which validates input data for conformity with a particular semi-structured data format. 7. The method of claim 1, wherein the semi-structured data operator is a table function which maps a result of one or more semi-structured query language expressions into one or more relational rows and columns. 8. The method of claim 1, wherein the semi-structured data operator is an object operator which generates a data object representing semi-structured output data, wherein the semi-structured output data is generated based on one or more SQL expressions. 
9. The method of claim 1, wherein the semi-structured data operator is an update operator which modifies, inserts, or deletes one or more components of the semi-structured data. 10. The method of claim 1, wherein the semi-structured data operator is executed based on an implementation module associated with the particular semi-structured data format. 11. The method of claim 1, wherein the semi-structured data operator processes a set of input rows and produces a set of output rows. 12. The method of claim 10, wherein the implementation module includes a compiler interface configured to compile semi-structured query language expressions. 13. A non-transitory computer-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform a method, comprising:
receiving a query expression against a collection of semi-structured data stored in one or more tables of a database, wherein the query expression includes a generic semi-structured data operator; wherein the semi-structured data operator includes a parameter specifying a particular semi-structured data format; in response to receiving the query expression, performing an operation specified by the semi-structured data operator against the semi-structured data. 14. The non-transitory computer-readable medium of claim 13, further comprising:
wherein the query expression is a first query expression and the semi-structured data format is a first semi-structured data format; receiving a second query expression including the generic semi-structured data operator, wherein the semi-structured data operator specifies a second semi-structured data format that is different than the first semi-structured data format; in response to receiving the second query expression, performing the operation against the semi-structured data. 15. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is a value operator which applies a specified semi-structured data query language expression to specified input data, wherein applying the specified semi-structured data query language expression to the specified input data includes extracting a value from the specified input data and casting the value to a SQL data type. 16. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is an exists operator which applies a specified semi-structured query language expression to input data and returns a value based on whether or not the expression returns one or more data items. 17. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is a query operator which applies a semi-structured query language expression to input data and returns a result that is part of the input data. 18. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is a validity operator which validates input data for conformity with a particular semi-structured data format. 19. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is a table function which maps a result of one or more semi-structured query language expressions into one or more relational rows and columns. 20. 
The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is an object operator which generates a data object representing semi-structured output data, wherein the semi-structured output data is generated based on one or more SQL expressions. 21. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is an update operator which modifies, inserts, or deletes one or more components of the semi-structured data. 22. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator is executed based on an implementation module associated with the particular semi-structured data format. 23. The non-transitory computer-readable medium of claim 13, wherein the semi-structured data operator processes a set of input rows and produces a set of output rows. 24. The non-transitory computer-readable medium of claim 22, wherein the implementation module includes a compiler interface configured to compile semi-structured query language expressions. | 2,100 |
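The value and exists operators recited in these claims can be sketched as a generic operator dispatching on a format parameter to a per-format implementation module (claims 3, 4, and 10). This is a minimal illustration, not the patented implementation; the function names, the registry, and the dot-path syntax are all invented for the example, and only a JSON module is registered.

```python
import json

def _json_extract(data, path):
    """Minimal path evaluator: dot-separated keys, e.g. 'order.id'."""
    node = json.loads(data)
    for key in path.split("."):
        node = node[key]
    return node

# Registry mapping each semi-structured data format to its
# implementation module (claim 10: the operator is executed based on
# a module associated with the particular format).
IMPL_MODULES = {"json": _json_extract}

def value_op(fmt, data, path, cast=str):
    """Claim 3 sketch: extract a value and cast it to a SQL-like type."""
    return cast(IMPL_MODULES[fmt](data, path))

def exists_op(fmt, data, path):
    """Claim 4 sketch: return whether the expression yields any data item."""
    try:
        IMPL_MODULES[fmt](data, path)
        return True
    except (KeyError, TypeError):
        return False

doc = '{"order": {"id": "42"}}'
print(value_op("json", doc, "order.id", int))
print(exists_op("json", doc, "order.total"))
```

Supporting a second format (claim 2) would only require registering another entry in `IMPL_MODULES`; the query expressions themselves stay unchanged.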
6,602 | 6,602 | 15,384,193 | 2,119 | A robot system is configured to identify gestures performed by an end-user proximate to a work piece. The robot system then determines a set of modifications to be made to the work piece based on the gestures. A projector coupled to the robot system projects images onto the work piece that represent the modification to be made and/or a CAD model of the work piece. The robot system then performs the modifications. | 1. A computer-implemented method for controlling a robot, the method comprising:
processing sensor data to determine a first gesture performed by an end-user proximate to a work piece; generating a tool path for modifying the work piece based on the first gesture; projecting the tool path onto the work piece; and causing the robot to modify the work piece according to the tool path. 2. The computer-implemented method of claim 1, further comprising projecting a CAD model onto a first surface of the work piece. 3. The computer-implemented method of claim 2, wherein the CAD model depicts an internal portion of the work piece proximate to the first surface. 4. The computer-implemented method of claim 2, wherein the CAD model depicts at least one predicted effect of modifying the work piece according to the tool path. 5. The computer-implemented method of claim 2, further comprising generating the CAD model based on the sensor data. 6. The computer-implemented method of claim 1, wherein the first gesture indicates two or more locations on the work piece where modifications are to be made, and wherein the tool path intersects the two or more locations. 7. The computer-implemented method of claim 1, wherein the gesture comprises a pointing gesture indicating a location on the work piece where a hole should be drilled. 8. The computer-implemented method of claim 1, wherein the gesture comprises a sweeping gesture indicating an arc on the surface of the work piece along which a cut should be made. 9. A non-transitory computer-readable medium storing program instructions that, when executed by a processor, cause the processor to control a robot by performing the steps of:
processing sensor data to determine a first gesture performed by an end-user proximate to a work piece; generating a tool path for modifying the work piece based on the first gesture; projecting the tool path onto the work piece; and causing the robot to modify the work piece according to the tool path. 10. The non-transitory computer-readable medium of claim 9, further comprising projecting a CAD model onto a first surface of the work piece. 11. The non-transitory computer-readable medium of claim 10, wherein the CAD model depicts an internal portion of the work piece proximate to the first surface. 12. The non-transitory computer-readable medium of claim 10, wherein the CAD model depicts at least one predicted effect of modifying the work piece according to the tool path. 13. The non-transitory computer-readable medium of claim 10, further comprising generating the CAD model based on the sensor data. 14. The non-transitory computer-readable medium of claim 9, wherein the first gesture indicates two or more locations on the work piece where modifications are to be made, and wherein the tool path intersects the two or more locations. 15. The non-transitory computer-readable medium of claim 9, wherein the gesture comprises a pointing gesture indicating a location on the work piece where a hole should be drilled. 16. The non-transitory computer-readable medium of claim 9, wherein the gesture comprises a sweeping gesture indicating an arc on the surface of the work piece along which a cut should be made. 17. A system, comprising:
a robot; a memory storing a control engine; and a processor that, upon executing the control engine, is configured to:
process sensor data to determine a first gesture performed by an end-user proximate to a work piece,
generate a tool path for modifying the work piece based on the first gesture,
project the tool path onto the work piece, and
cause the robot to modify the work piece according to the tool path. 18. The system of claim 17, wherein the robot includes a sensor array configured to capture the sensor data. 19. The system of claim 17, wherein the robot includes a projector configured to project the tool path onto the work piece. 20. The system of claim 17, wherein the robot comprises an articulated arm robot. | A robot system is configured to identify gestures performed by an end-user proximate to a work piece. The robot system then determines a set of modifications to be made to the work piece based on the gestures. A projector coupled to the robot system projects images onto the work piece that represent the modification to be made and/or a CAD model of the work piece. The robot system then performs the modifications.1. A computer-implemented method for controlling a robot, the method comprising:
processing sensor data to determine a first gesture performed by an end-user proximate to a work piece; generating a tool path for modifying the work piece based on the first gesture; projecting the tool path onto the work piece; and causing the robot to modify the work piece according to the tool path. 2. The computer-implemented method of claim 1, further comprising projecting a CAD model onto a first surface of the work piece. 3. The computer-implemented method of claim 2, wherein the CAD model depicts an internal portion of the work piece proximate to the first surface. 4. The computer-implemented method of claim 2, wherein the CAD model depicts at least one predicted effect of modifying the work piece according to the tool path. 5. The computer-implemented method of claim 2, further comprising generating the CAD model based on the sensor data. 6. The computer-implemented method of claim 1, wherein the first gesture indicates two or more locations on the work piece where modifications are to be made, and wherein the tool path intersects the two or more locations. 7. The computer-implemented method of claim 1, wherein the gesture comprises a pointing gesture indicating a location on the work piece where a hole should be drilled. 8. The computer-implemented method of claim 1, wherein the gesture comprises a sweeping gesture indicating an arc on the surface of the work piece along which a cut should be made. 9. A non-transitory computer-readable medium storing program instructions that, when executed by a processor, cause the processor to control a robot by performing the steps of:
processing sensor data to determine a first gesture performed by an end-user proximate to a work piece; generating a tool path for modifying the work piece based on the first gesture; projecting the tool path onto the work piece; and causing the robot to modify the work piece according to the tool path. 10. The non-transitory computer-readable medium of claim 9, further comprising projecting a CAD model onto a first surface of the work piece. 11. The non-transitory computer-readable medium of claim 10, wherein the CAD model depicts an internal portion of the work piece proximate to the first surface. 12. The non-transitory computer-readable medium of claim 10, wherein the CAD model depicts at least one predicted effect of modifying the work piece according to the tool path. 13. The non-transitory computer-readable medium of claim 10, further comprising generating the CAD model based on the sensor data. 14. The non-transitory computer-readable medium of claim 9, wherein the first gesture indicates two or more locations on the work piece where modifications are to be made, and wherein the tool path intersects the two or more locations. 15. The non-transitory computer-readable medium of claim 9, wherein the gesture comprises a pointing gesture indicating a location on the work piece where a hole should be drilled. 16. The non-transitory computer-readable medium of claim 9, wherein the gesture comprises a sweeping gesture indicating an arc on the surface of the work piece along which a cut should be made. 17. A system, comprising:
a robot; a memory storing a control engine; and a processor that, upon executing the control engine, is configured to:
process sensor data to determine a first gesture performed by an end-user proximate to a work piece,
generate a tool path for modifying the work piece based on the first gesture,
project the tool path onto the work piece, and
cause the robot to modify the work piece according to the tool path. 18. The system of claim 17, wherein the robot includes a sensor array configured to capture the sensor data. 19. The system of claim 17, wherein the robot includes a projector configured to project the tool path onto the work piece. 20. The system of claim 17, wherein the robot comprises an articulated arm robot. | 2,100 |
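Claims 1 and 6 of this row describe turning gesture-indicated locations on a work piece into a tool path that intersects them. The sketch below covers only the path-generation step; gesture recognition, projection, and robot actuation are out of scope, and the linear-interpolation strategy and `step` parameter are assumptions for illustration.

```python
def tool_path_from_gesture(points, step=1.0):
    """Interpolate a tool path through gesture-indicated 2D points.

    The returned path intersects every input location (claim 6);
    intermediate waypoints are spaced roughly `step` units apart.
    """
    path = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        n = max(1, int(length / step))
        for i in range(n):
            t = i / n
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(points[-1])  # always end exactly on the last location
    return path

# Two pointing gestures at (0,0) and (4,0) yield a straight path.
path = tool_path_from_gesture([(0.0, 0.0), (4.0, 0.0)], step=1.0)
print(path[0], path[-1])
```

A real system would then project this path onto the work piece and feed the waypoints to the robot controller, as the claims recite.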
6,603 | 6,603 | 15,822,972 | 2,154 | A method begins with a processing unit of a dispersed storage network (DSN) receiving a read request for a data segment, wherein a data segment is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded data slices (EDSs) that are distributedly stored in a set of storage units (SUs). The method continues with the processing unit identifying the set of SUs storing the set of EDSs and then identifying a read prioritization scheme for the read request. Based on the read prioritization scheme the method continues by selecting a read threshold number of SUs from the set of SUs storing the set of EDSs; and issuing read slice requests to each SU of the read threshold number of SUs. | 1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:
receiving a read request for a data segment, wherein a data object is segmented into a plurality of data segments that includes the data segment, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded data slices (EDSs) that is of a pillar width and further wherein the set of EDSs are distributedly stored in a set of storage units (SUs); identifying the set of SUs storing the set of EDSs; identifying a read prioritization scheme for the read request; based on the read prioritization scheme, selecting a read threshold number of SUs from the set of SUs storing the set of EDSs; and issuing read slice requests to each SU of the read threshold number of SUs. 2. The method of claim 1, wherein the dispersal parameters include at least one of a pillar width, a write threshold, a read threshold, a decode threshold, an encoding matrix identifier, and an information dispersal algorithm identifier. 3. The method of claim 1, wherein the identifying a read prioritization scheme is based on at least one of the read request, a vault identifier (ID), a data ID, a registry lookup, a directory lookup, a data type indicator, a data size estimator, a segment priority indicator, a performance requirement, and a reliability requirement. 4. The method of claim 1, wherein the read prioritization scheme is one of a minimum latency scheme, a maximum throughput scheme, and a maximum predictability scheme. 5. The method of claim 4 wherein the minimum latency scheme includes a fast retrieval access time for the read request. 6. The method of claim 4 wherein the read prioritization scheme is based on a data size estimator and further wherein the maximum throughput scheme is identified based on the data size estimator indicating a data size greater than a pre-determined threshold. 7. 
The method of claim 4 wherein the read prioritization scheme is based on a reliability requirement and further wherein the maximum predictability scheme is identified based on the reliability requirement indicating a low standard deviation of retrieval performance. 8. The method of claim 1, wherein the read prioritization scheme is a minimum latency scheme and further wherein the identifying the minimum latency scheme is based on latency information for each of the read threshold number of SUs. 9. The method of claim 8 further comprises:
obtaining latency information by executing at least one of performing a lookup, initiating a query, receiving a response and performing a test. 10. The method of claim 1, wherein the read prioritization scheme is a maximum throughput scheme and further wherein the identifying the maximum throughput scheme is based on throughput information for each of the read threshold number of SUs. 11. The method of claim 10 further comprises:
obtaining throughput information by executing at least one of performing a lookup, initiating a query, receiving a response and performing a test. 12. The method of claim 1, wherein the read prioritization scheme is a maximum predictability scheme and further wherein the identifying the maximum predictability scheme is based on predictability information for each of the read threshold number of SUs. 13. The method of claim 12 further comprises:
obtaining predictability information by executing at least one of performing a lookup, initiating a query, receiving a response and performing a test. 14. A computer readable storage medium comprises:
at least one memory section that stores operational instructions that, when executed by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), causes the one or more processing modules to:
receive a read request for a data segment, wherein a data object is segmented into a plurality of data segments that includes the data segment, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded data slices (EDSs) that is of a pillar width and further wherein the set of EDSs are distributedly stored in a set of storage units (SUs);
identify the set of SUs storing the set of EDSs;
identify a read prioritization scheme for the read request, wherein the read prioritization scheme is one of a minimum latency scheme, a maximum throughput scheme, and a maximum predictability scheme;
based on the read prioritization scheme, select a read threshold number of SUs from the set of SUs storing the set of EDSs; and
issue read slice requests to each SU of the read threshold number of SUs. 15. The computer readable storage medium of claim 14, wherein the read prioritization scheme is identified based on at least one of the read request, a vault identifier (ID), a data ID, a registry lookup, a directory lookup, a data type indicator, a data size estimator, a segment priority indicator, a performance requirement, and a reliability requirement. 16. The computer readable storage medium of claim 14, wherein the read prioritization scheme is the minimum latency scheme and further wherein the minimum latency scheme is identified based on latency information for each of the read threshold number of SUs. 17. The computer readable storage medium of claim 14, wherein the read prioritization scheme is the maximum throughput scheme and further wherein the maximum throughput scheme is identified based on throughput information for each of the read threshold number of SUs. 18. The computer readable storage medium of claim 14, wherein the read prioritization scheme is the maximum predictability scheme and further wherein the maximum predictability scheme is identified based on predictability information for each of the read threshold number of SUs. 19. A dispersed storage (DS) processing unit of a dispersed storage network (DSN), the dispersed storage (DS) processing unit comprises:
an interface;
a local memory; and
a processing module operably coupled to the interface and the local memory, wherein the processing module functions to:
receive, via the interface, a read request for a data segment, wherein a data object is segmented into a plurality of data segments that includes the data segment, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded data slices (EDSs) that is of a pillar width and further wherein the set of EDSs are distributedly stored in a set of storage units (SUs);
identify the set of SUs storing the set of EDSs;
identify a read prioritization scheme for the read request, wherein the read prioritization scheme is one of a minimum latency scheme, a maximum throughput scheme, and a maximum predictability scheme;
based on the read prioritization scheme, select a read threshold number of SUs from the set of SUs storing the set of EDSs; and
issue read slice requests to each SU of the read threshold number of SUs. 20. The dispersed storage (DS) processing unit of claim 19, wherein the read prioritization scheme is identified based on at least one of the read request, a vault identifier (ID), a data ID, a registry lookup, a directory lookup, a data type indicator, a data size estimator, a segment priority indicator, a performance requirement, and a reliability requirement. | A method begins with a processing unit of a dispersed storage network (DSN) receiving a read request for a data segment, wherein a data segment is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded data slices (EDSs) that are distributedly stored in a set of storage units (SUs). The method continues with the processing unit identifying the set of SUs storing the set of EDSs and then identifying a read prioritization scheme for the read request. Based on the read prioritization scheme the method continues by selecting a read threshold number of SUs from the set of SUs storing the set of EDSs; and issuing read slice requests to each SU of the read threshold number of SUs.1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:
receiving a read request for a data segment, wherein a data object is segmented into a plurality of data segments that includes the data segment, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded data slices (EDSs) that is of a pillar width and further wherein the set of EDSs are distributedly stored in a set of storage units (SUs); identifying the set of SUs storing the set of EDSs; identifying a read prioritization scheme for the read request; based on the read prioritization scheme, selecting a read threshold number of SUs from the set of SUs storing the set of EDSs; and issuing read slice requests to each SU of the read threshold number of SUs. 2. The method of claim 1, wherein the dispersal parameters include at least one of a pillar width, a write threshold, a read threshold, a decode threshold, an encoding matrix identifier, and an information dispersal algorithm identifier. 3. The method of claim 1, wherein the identifying a read prioritization scheme is based on at least one of the read request, a vault identifier (ID), a data ID, a registry lookup, a directory lookup, a data type indicator, a data size estimator, a segment priority indicator, a performance requirement, and a reliability requirement. 4. The method of claim 1, wherein the read prioritization scheme is one of a minimum latency scheme, a maximum throughput scheme, and a maximum predictability scheme. 5. The method of claim 4 wherein the minimum latency scheme includes a fast retrieval access time for the read request. 6. The method of claim 4 wherein the read prioritization scheme is based on a data size estimator and further wherein the maximum throughput scheme is identified based on the data size estimator indicating a data size greater than a pre-determined threshold. 7. 
The method of claim 4 wherein the read prioritization scheme is based on a reliability requirement and further wherein the maximum predictability scheme is identified based on the reliability requirement indicating a low standard deviation of retrieval performance. 8. The method of claim 1, wherein the read prioritization scheme is a minimum latency scheme and further wherein the identifying the minimum latency scheme is based on latency information for each of the read threshold number of SUs. 9. The method of claim 8 further comprises:
obtaining latency information by executing at least one of performing a lookup, initiating a query, receiving a response and performing a test. 10. The method of claim 1, wherein the read prioritization scheme is a maximum throughput scheme and further wherein the identifying the maximum throughput scheme is based on throughput information for each of the read threshold number of SUs. 11. The method of claim 10 further comprises:
obtaining throughput information by executing at least one of performing a lookup, initiating a query, receiving a response and performing a test. 12. The method of claim 1, wherein the read prioritization scheme is a maximum predictability scheme and further wherein the identifying the maximum predictability scheme is based on predictability information for each of the read threshold number of SUs. 13. The method of claim 12 further comprises:
obtaining predictability information by executing at least one of performing a lookup, initiating a query, receiving a response and performing a test. 14. A computer readable storage medium comprises:
at least one memory section that stores operational instructions that, when executed by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), causes the one or more processing modules to:
receive a read request for a data segment, wherein a data object is segmented into a plurality of data segments that includes the data segment, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded data slices (EDSs) that is of a pillar width and further wherein the set of EDSs are distributedly stored in a set of storage units (SUs);
identify the set of SUs storing the set of EDSs;
identify a read prioritization scheme for the read request, wherein the read prioritization scheme is one of a minimum latency scheme, a maximum throughput scheme, and a maximum predictability scheme;
based on the read prioritization scheme, select a read threshold number of SUs from the set of SUs storing the set of EDSs; and
issue read slice requests to each SU of the read threshold number of SUs. 15. The computer readable storage medium of claim 14, wherein the read prioritization scheme is identified based on at least one of the read request, a vault identifier (ID), a data ID, a registry lookup, a directory lookup, a data type indicator, a data size estimator, a segment priority indicator, a performance requirement, and a reliability requirement. 16. The computer readable storage medium of claim 14, wherein the read prioritization scheme is the minimum latency scheme and further wherein the minimum latency scheme is identified based on latency information for each of the read threshold number of SUs. 17. The computer readable storage medium of claim 14, wherein the read prioritization scheme is the maximum throughput scheme and further wherein the maximum throughput scheme is identified based on throughput information for each of the read threshold number of SUs. 18. The computer readable storage medium of claim 14, wherein the read prioritization scheme is the maximum predictability scheme and further wherein the maximum predictability scheme is identified based on predictability information for each of the read threshold number of SUs. 19. A dispersed storage (DS) processing unit of a dispersed storage network (DSN), the dispersed storage (DS) processing unit comprises:
an interface;
a local memory; and
a processing module operably coupled to the interface and the local memory, wherein the processing module functions to:
receive, via the interface, a read request for a data segment, wherein a data object is segmented into a plurality of data segments that includes the data segment, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded data slices (EDSs) that is of a pillar width and further wherein the set of EDSs are distributedly stored in a set of storage units (SUs);
identify the set of SUs storing the set of EDSs;
identify a read prioritization scheme for the read request, wherein the read prioritization scheme is one of a minimum latency scheme, a maximum throughput scheme, and a maximum predictability scheme;
based on the read prioritization scheme, select a read threshold number of SUs from the set of SUs storing the set of EDSs; and
issue read slice requests to each SU of the read threshold number of SUs. 20. The dispersed storage (DS) processing unit of claim 19, wherein the read prioritization scheme is identified based on at least one of the read request, a vault identifier (ID), a data ID, a registry lookup, a directory lookup, a data type indicator, a data size estimator, a segment priority indicator, a performance requirement, and a reliability requirement. | 2,100 |
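The SU-selection step claimed here (pick a read-threshold number of storage units under a minimum-latency, maximum-throughput, or maximum-predictability scheme) can be sketched as a sort over per-SU performance statistics. The field names and sample numbers below are invented for illustration; the claims leave the exact selection logic open.

```python
import statistics

def select_sus(sus, read_threshold, scheme):
    """Pick a read-threshold number of SUs per the claimed schemes."""
    if scheme == "min_latency":
        key = lambda su: su["latency_ms"]
    elif scheme == "max_throughput":
        key = lambda su: -su["throughput_mbps"]
    elif scheme == "max_predictability":
        # Claim 7: predictability means a low standard deviation
        # of retrieval performance.
        key = lambda su: statistics.pstdev(su["latency_samples"])
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return sorted(sus, key=key)[:read_threshold]

sus = [
    {"id": 1, "latency_ms": 5, "throughput_mbps": 80,  "latency_samples": [5, 5, 5]},
    {"id": 2, "latency_ms": 2, "throughput_mbps": 120, "latency_samples": [1, 9, 2]},
    {"id": 3, "latency_ms": 9, "throughput_mbps": 200, "latency_samples": [9, 9, 10]},
]
fast = select_sus(sus, 2, "min_latency")
print([su["id"] for su in fast])  # → [2, 1]
```

The processing unit would then issue read slice requests to each selected SU, per the final step of claim 1.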
6,604 | 6,604 | 16,203,411 | 2,191 | A system configured to perform software load verification. The system includes a memory, a network interface, and a processor. The memory is configured to store first data indicating expected load events. The network interface is configured to receive load verification data and a cryptographic signature from a software update target device. The load verification data is descriptive of particular load events related to loading software at the software update target device. The processor is configured to authenticate that the load verification data is received from the software update target device based on the cryptographic signature. The processor is also configured to, responsive to authenticating that the load verification data is received from the software update target device, performing a comparison of the particular load events and the expected load events. The processor is further configured to perform a response action based on results of the comparison. | 1. A system to perform software load verification, the system comprising:
a memory configured to store first data indicating expected load events; a network interface configured to receive load verification data and a cryptographic signature from a software update target device, wherein the load verification data is descriptive of particular load events related to loading software at the software update target device; and a processor configured to:
authenticate that the load verification data is received from the software update target device based on the cryptographic signature;
responsive to authenticating that the load verification data is received from the software update target device, perform a comparison of the particular load events and the expected load events; and
perform a response action based on results of the comparison. 2. The system of claim 1, wherein, responsive to determining that the particular load events match the expected load events, the response action includes sending an approval message to the software update target device. 3. The system of claim 1, wherein responsive to determining that the particular load events do not match the expected load events, the response action includes sending a disable message to the software update target device to prevent execution of the software at the software update target device. 4. The system of claim 1, wherein the memory is further configured to store first configuration data indicating an expected configuration of the software update target device, wherein the load verification data includes second configuration data indicating configuration of the software update target device prior to loading of the software, wherein the processor is configured to perform a second comparison of the first configuration data and the second configuration data, and wherein the response action is further based on results of the second comparison. 5. The system of claim 4, wherein the processor is further configured to, responsive to determining that the particular load events match the expected load events, update the first configuration data based on the load verification data. 6. The system of claim 1, wherein the load verification data is generated by a trusted computing device at the software update target device. 7. The system of claim 1, wherein the software update target device is integrated into an aircraft. 8. An aircraft comprising:
a network interface configured to receive, via a network from an off-board device, a software package that includes software; and a processor configured to:
authenticate, based on a digital signature, that the software is received from the off-board device; and
responsive to authenticating the software:
perform particular load events associated with loading the software;
cause the network interface to send load verification data and a cryptographic signature to the off-board device, the load verification data indicating the particular load events;
receive, via the network interface, a response message from the off-board device, the response message based on analysis of the load verification data and the cryptographic signature at the off-board device; and
selectively execute the software based on the response message. 9. The aircraft of claim 8, further comprising a trusted computing device configured to:
monitor operations performed by the processor during loading of the software; and generate a list of the particular load events based on the monitored operations, wherein the load verification data is based on the list of the particular load events. 10. The aircraft of claim 8, wherein the processor is configured to execute the software based on determining that the response message indicates that the particular load events match expected load events. 11. The aircraft of claim 8, wherein the processor is configured to prevent execution of the software based on determining that the response message indicates that the particular load events do not match expected load events. 12. The aircraft of claim 8, wherein the processor is further configured to, prior to loading the software, generate configuration data indicating configuration of the aircraft, wherein the load verification data includes the configuration data, wherein the response message is based on results of a comparison of the configuration data and second configuration data, the second configuration data indicating an expected configuration of the aircraft. 13. A method of performing software load verification, the method comprising:
accessing, at an off-board device, first data indicating expected load events; receiving load verification data and a cryptographic signature at the off-board device from a software update target device, the load verification data being descriptive of particular load events related to loading software at the software update target device; authenticating, at the off-board device, that the load verification data is received from the software update target device based on the cryptographic signature; responsive to authenticating that the load verification data is received from the software update target device, performing a comparison of the particular load events and the expected load events; and performing, at the off-board device, a response action based on results of the comparison. 14. The method of claim 13, wherein, responsive to determining that the particular load events match the expected load events, the response action includes sending an approval message to the software update target device. 15. The method of claim 13, wherein responsive to determining that the particular load events do not match the expected load events, the response action includes sending a disable message to the software update target device to prevent execution of the software at the software update target device. 16. The method of claim 13, further comprising:
accessing, at the off-board device, first configuration data indicating an expected configuration of the software update target device; and performing, at the off-board device, a second comparison of the first configuration data and second configuration data, wherein the load verification data includes the second configuration data indicating configuration of the software update target device prior to loading of the software, wherein the response action is further based on results of the second comparison. 17. The method of claim 16, further comprising, responsive to determining that the particular load events match the expected load events, updating the first configuration data based on the load verification data. 18. The method of claim 13, further comprising generating the load verification data at a trusted computing device of the software update target device. 19. The method of claim 13, further comprising:
receiving a software package at the software update target device; and authenticating the software package based on a digital signature, wherein the particular load events are performed at the software update target device responsive to the authenticating the software package. 20. The method of claim 19, further comprising:
receiving a response message at the software update target device from the off-board device, wherein performing the response action at the off-board device includes sending the response message from the off-board device to the software update target device; and selectively executing the software at the software update target device based on the response message. | 2,100 |
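As a rough illustration only, the off-board comparison and response logic recited in claims 1-3 above (authenticate the cryptographic signature, compare the particular load events with the expected load events, then approve or disable) could be sketched as follows. The HMAC primitive, function names, and message strings are assumptions for the sketch; the claims do not specify a cryptographic scheme or message format.

```python
import hmac
import hashlib

def authenticate(data: bytes, signature: bytes, shared_key: bytes) -> bool:
    # One possible authentication of the load verification data; the claims
    # require a cryptographic signature but name no particular primitive.
    expected = hmac.new(shared_key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def response_action(expected_events, particular_events, authenticated: bool):
    # Claims 2 and 3: approve when the events match, disable when they do not.
    if not authenticated:
        return None  # unauthenticated data triggers no comparison at all
    if particular_events == expected_events:
        return "approval"
    return "disable"
```

The target device would then selectively execute the software depending on which message it receives back, as in claims 10 and 11.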
6,605 | 6,605 | 16,021,134 | 2,136 | A data storage system can arrange semiconductor memory into a plurality of die sets that each store a top-level map with each top-level map logging information about user-generated data stored in a die set in which the top-level map is stored. A journal can be stored in at least one die set of the plurality of die sets with each journal logging a change to user-generated data stored in the die set of the plurality of die sets in which the journal and top-level map are each located. | 1. A method comprising:
dividing a semiconductor memory into a first die set and a second die set; storing a first user-generated data to the first die set; logging the first user-generated data in a first top-level map stored in the first die set; updating the first user-generated data; and storing a first journal to the first top-level map in the first die set, the first journal supplementing the first top-level map with information about the updated first user-generated data. 2. The method of claim 1, wherein the second die set comprises a second user-generated data. 3. The method of claim 2, wherein a second top-level map is stored in the second die set, the second top-level map comprising information about the second user-generated data. 4. The method of claim 3, wherein the second top-level map is unique from the first top-level map. 5. The method of claim 3, wherein a second journal is stored in the second die set. 6. The method of claim 1, wherein the first top-level map contains information only about user-generated data stored in the first die set. 7. The method of claim 3, wherein the second top-level map contains information only about user-generated data stored in the second die set. 8. The method of claim 5, wherein a third journal is stored in the second die set, the third journal comprising an update to the second top-level map. 9. The method of claim 8, wherein the second journal is unique from the third journal. 10. A method comprising:
dividing a semiconductor memory into a plurality of die sets; storing a top-level map in each of the plurality of die sets, each top-level map logging information about user-generated data stored in a die set in which the top-level map is stored; and storing a journal in at least one die set of the plurality of die sets, each journal logging a change to user-generated data stored in a die set of the plurality of die sets in which the journal and top-level map are each located. 11. The method of claim 10, wherein each die set of the plurality of die sets is assigned to a different host. 12. The method of claim 10, wherein the top-level maps for the plurality of die sets are loaded by a controller concurrently during a power up initialization. 13. The method of claim 12, wherein the journal is loaded immediately after the top-level map for at least one die set of the plurality of die sets. 14. The method of claim 10, wherein a journal configuration for at least one of the plurality of die sets is altered by a controller in response to a deterministic window interval. 15. The method of claim 10, wherein a mapping configuration for at least one of the plurality of die sets is altered by a controller in response to a deterministic window interval. 16. The method of claim 15, wherein the mapping configuration is altered by changing a location of the top-level map during the deterministic window interval. 17. The method of claim 14, wherein the journal configuration is altered by changing a rate at which a journal is generated. 18. The method of claim 10, wherein the user-generated data is provided to the semiconductor memory by a remote host. 19.
A system comprising a semiconductor memory divided into a plurality of die sets, each of the plurality of die sets storing a top-level map logging information about user-generated data stored in a die set in which the top-level map is stored, at least one die set of the plurality of die sets comprising a journal logging a change to user-generated data stored in a die set of the plurality of die sets in which the journal and top-level map are each located. 20. The system of claim 19, wherein each die set of the plurality of die sets comprises an independent top-level map and each journal is unique to user-generated data stored in the die set of the plurality of die sets in which the journal and top-level map are each located. | 2,100 |
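For illustration only, the per-die-set map-and-journal arrangement recited in claims 1 and 10 above could be modeled along these lines. The class, field names, and journal-fold step are assumptions made for the sketch, not structures disclosed in the patent.

```python
class DieSet:
    """One die set holding its own top-level map and journal (claims 1, 10)."""

    def __init__(self):
        self.data = {}      # user-generated data, keyed by logical address
        self.top_map = {}   # top-level map: logical address -> metadata
        self.journal = []   # changes not yet folded into the top-level map

    def store(self, addr, payload):
        # First write of user-generated data is logged in the top-level map.
        self.data[addr] = payload
        self.top_map[addr] = {"version": 0}

    def update(self, addr, payload):
        # Instead of rewriting the map, log the change as a journal entry
        # supplementing the top-level map (claim 1).
        self.data[addr] = payload
        self.journal.append((addr, self.top_map[addr]["version"] + 1))

    def fold_journal(self):
        # On load, e.g. at power-up initialization (claims 12-13), replay
        # the journal onto the top-level map and clear it.
        for addr, version in self.journal:
            self.top_map[addr]["version"] = version
        self.journal.clear()
```

Because each die set keeps an independent map and journal, several such objects could be loaded concurrently at power-up, which is the arrangement claim 12 describes.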
6,606 | 6,606 | 16,668,317 | 2,119 | To ensure the safety of people needing to service a low-voltage network of an electric power distribution system, the dwellings connected to this network may include autonomous units for producing electricity (PV1, . . . , PVn), thus generating voltage and endangering the people performing the work. A step is therefore provided for obtaining first data from consumption records from the meter (C1, . . . , Cn) of each dwelling, in regular time intervals, and second data (MET) which are meteorological data in the geographical area of these dwellings, in order to identify at least some weather conditions conducive to the production of energy by autonomous units. A model is then applied for detecting, based on the first and second data, a coincidence between periods of lower consumption measured by a meter and weather conditions conducive to electricity production by autonomous units during these periods. Information on the presence or absence of autonomous units in the dwelling equipped with this meter is thus deduced, this information being stored in a database (MEM) with a corresponding dwelling identifier, and the dwellings likely to include autonomous units are thus identified in the database. | 1-13. (canceled) 14. A method for ensuring the safety of persons needing to service a low-voltage network of an electric power distribution system, wherein:
the low-voltage network is connected to a substation supplying power to a plurality of dwellings within a geographical area, the dwellings are equipped with meters configured to measure and communicate consumption by regular time intervals, at least some of the dwellings are likely to comprise autonomous means of producing electricity using renewable energy, a placing in operation of said autonomous means generating voltage in the low-voltage network despite shutting down the substation during the work, thus endangering people during their servicing, the method, implemented by a server communicating with the meters, comprising:
obtaining first data from consumption records from each meter by regular time intervals,
obtaining second data which are meteorological data in the geographical area, in order to identify at least some weather conditions conducive to the production of energy by said autonomous means,
for each meter, applying a model for detecting, based on at least the first and second data, a coincidence between periods of low consumption measured by the meter and weather conditions conducive to electricity production by said autonomous means during said periods, and deducing, from the application of the model, information on the presence of autonomous means in the dwelling equipped with this meter,
for each dwelling, storing in a database the information on the presence of autonomous means, with a corresponding identifier specific to the dwelling,
and, before performing the servicing work, identifying in the database the dwellings likely to comprise autonomous means. 15. The method according to claim 14, wherein the server further obtains an instantaneous measurement of the voltage in the low-voltage network before the servicing work, and the work is dependent on the server obtaining a zero-voltage measurement in the low-voltage network. 16. The method according to claim 15, wherein each meter is configured to be cut off remotely by a command from the server, and wherein the presence information is a probability score for the presence of autonomous means in the dwelling equipped with said meter, and the method further comprises:
establishing a list of the probability scores for the presence of autonomous means, with respective corresponding meter identifiers, if a non-zero voltage is measured in the low-voltage network, using the meter identifier having the maximum score in said list to cut off that meter remotely, then removing that meter from the list and using again a next meter identifier having a new maximum score in said list to cut off that next meter until a zero-voltage measurement is obtained in the network. 17. The method according to claim 14, wherein the detection model is obtained by a “tree boosting” technique comprising:
a setting of variables in a learning sample composed of dwellings for which the consumption is analyzed,
a defining of explanatory variables of the correlation, taken from the variables set, and
an application of the tree boosting method to the explanatory variables in order to determine a model for calculating a probability score for the presence of autonomous means in a dwelling equipped with a meter measuring and communicating consumption in the dwelling by regular time intervals, said score corresponding to said presence information stored in the database. 18. The method according to claim 17, wherein a cross-validation is further applied in order to consolidate the determination of the calculation model. 19. The method according to claim 14, wherein the detection model is obtained by implementing a convolutional neural network, this implementation comprising:
a setting of variables in a learning sample composed of dwellings for which the consumption is analyzed, a learning of the sample by the neural network, and the determination by the neural network of a model for calculating a probability score for the presence of autonomous means in a dwelling equipped with a meter measuring and communicating consumption in the dwelling by regular time intervals, said score corresponding to said presence information stored in the database. 20. The method according to claim 17, wherein the variables to be set in the learning sample comprise, for each dwelling:
a predictive variable indicating whether or not the dwelling is equipped with autonomous production means; the dwelling consumption, by regular intervals; the weather conditions in the area of the dwelling, including an instantaneous temperature in this area. 21. The method according to claim 14, wherein the autonomous production means whose presence is to be detected is a photovoltaic panel, and wherein the weather conditions include at least the level of sunlight at a given moment. 22. The method according to claim 20, wherein the autonomous production means whose presence is to be detected is a photovoltaic panel, and the weather conditions include at least the level of sunlight at a given moment, and wherein each meter is arranged to measure and communicate consumption at an hourly or sub-hourly interval, said explanatory variables comprising at least one variable among:
an average ratio per meter between a consumption between 10 a.m. and 4 p.m. in a day and a total consumption over the same full day; a slope of the regression line of the consumption between 12 p.m. and 2 p.m. versus level of sunlight; a ratio between an average consumption over all meters between 10 a.m. and 4 p.m. for the X days with higher levels of sunlight and that between 10 a.m. and 4 p.m. for the X days with lower levels of sunlight; an average ratio per meter between the consumption between 10 a.m. and 4 p.m. and the degrees of temperature for the same day between 10 a.m. and 4 p.m.; an average ratio of consumption per degree of temperature between 10 a.m. and 4 p.m. for the X days with higher levels of sunlight and consumption per degree of temperature between 10 a.m. and 4 p.m. for the X days with lower levels of sunlight. 23. The method according to claim 20, wherein the autonomous production means whose presence is to be detected is a photovoltaic panel, and wherein the weather conditions include at least the level of sunlight at a given moment, and wherein each meter is arranged to measure and communicate consumption at a daily interval, the explanatory variables comprising at least one variable among:
a ratio of consumption over one day to level of sunlight over the same day; a ratio of consumption over one day to average level of sunlight between 10 a.m. and 4 p.m. of that day; a ratio of consumption over one day to average level of sunlight between 12 p.m. and 2 p.m. of that day; a ratio of consumption over one day to maximum level of sunlight between 10 a.m. and 4 p.m. of that day; a ratio of consumption over one day to maximum level of sunlight between 12 p.m. and 2 p.m. of that day; a ratio between the average daily consumption for the X days with higher levels of sunlight and the average consumption for the X days with lower levels of sunlight; an average ratio of consumption per day per degrees of temperature; an average ratio between consumption per day per degree of temperature for the X days with higher levels of sunlight and consumption per day per degree of temperature for the X days with lower levels of sunlight. 24. A non-transitory computer storage medium, storing instructions of a computer program causing the method according to claim 14 to be implemented, when said instructions are executed by a processor of a processing circuit. 25. A server for implementing the method according to claim 14, comprising a memory for storing the database and a processing circuit configured for applying the model for detecting autonomous production means in a dwelling, based on at least the first data on consumption from the meter of that dwelling and on the second data on weather conditions for the area of the dwelling. 26. 
A processing circuit of a server, the processing circuit being configured, for the purposes of implementing the method according to claim 14, to apply the model for detecting autonomous production means in a dwelling, based on at least the first data on consumption from the meter of the dwelling and the second data on weather conditions for the area of that dwelling.
establishing a list of the probability scores for the presence of autonomous means, with respective corresponding meter identifiers, if a non-zero voltage is measured in the low-voltage network, using the meter identifier having the maximum score in said list to cut off that meter remotely, then removing that meter from the list and using again a next meter identifier having a new maximum score in said list to cut off that next meter until a zero-voltage measurement is obtained in the network. 17. The method according to claim 14, wherein the detection model is obtained by a “tree boosting” technique comprising:
a setting of variables in a learning sample composed of dwellings for which the consumption is analyzed,
a defining of explanatory variables of the correlation, taken from the variables set, and
an application of the tree boosting method to the explanatory variables in order to determine a model for calculating a probability score for the presence of autonomous means in a dwelling equipped with a meter measuring and communicating consumption in the dwelling by regular time intervals, said score corresponding to said presence information stored in the database. 18. The method according to claim 17, wherein a cross-validation is further applied in order to consolidate the determination of the calculation model. 19. The method according to claim 14, wherein the detection model is obtained by implementing a convolutional neural network, this implementation comprising:
a setting of variables in a learning sample composed of dwellings for which the consumption is analyzed, a learning of the sample by the neural network, and the determination by the neural network of a model for calculating a probability score for the presence of autonomous means in a dwelling equipped with a meter measuring and communicating consumption in the dwelling by regular time intervals, said score corresponding to said presence information stored in the database. 20. The method according to claim 17, wherein the variables to be set in the learning sample comprise, for each dwelling:
a predictive variable indicating whether or not the dwelling is equipped with autonomous production means; the dwelling consumption, by regular intervals; the weather conditions in the area of the dwelling, including an instantaneous temperature in this area. 21. The method according to claim 14, wherein the autonomous production means whose presence is to be detected is a photovoltaic panel, and wherein the weather conditions include at least the level of sunlight at a given moment. 22. The method according to claim 20, wherein the autonomous production means whose presence is to be detected is a photovoltaic panel, and the weather conditions include at least the level of sunlight at a given moment, and wherein each meter is arranged to measure and communicate consumption at an hourly or sub-hourly interval, said explanatory variables comprising at least one variable among:
an average ratio per meter between a consumption between 10 a.m. and 4 p.m. in a day and a total consumption over the same full day, a slope of the regression line of the consumption between 12 p.m. and 2 p.m. versus level of sunlight, a ratio between an average consumption over all meters between 10 a.m. and 4 p.m. for the X days with higher levels of sunlight and that between 10 a.m. and 4 p.m. for the X days with lower levels of sunlight, an average ratio per meter between the consumption between 10 a.m. and 4 p.m. and the degrees of temperature for the same day between 10 a.m. and 4 p.m., an average ratio of consumption per degree of temperature between 10 a.m. and 4 p.m. for the X days with higher levels of sunlight and consumption per degree of temperature between 10 a.m. and 4 p.m. for the X days with lower levels of sunlight. 23. The method according to claim 20, wherein the autonomous production means whose presence is to be detected is a photovoltaic panel, and wherein the weather conditions include at least the level of sunlight at a given moment, and wherein each meter is arranged to measure and communicate consumption at a daily interval, the explanatory variables comprising at least one variable among:
a ratio of consumption over one day to level of sunlight over the same day; a ratio of consumption over one day to average level of sunlight between 10 a.m. and 4 p.m. of that day; a ratio of consumption over one day to average level of sunlight between 12 p.m. and 2 p.m. of that day; a ratio of consumption over one day to maximum level of sunlight between 10 a.m. and 4 p.m. of that day; a ratio of consumption over one day to maximum level of sunlight between 12 p.m. and 2 p.m. of that day; a ratio between the average daily consumption for the X days with higher levels of sunlight and the average consumption for the X days with lower levels of sunlight; an average ratio of consumption per day per degree of temperature; an average ratio between consumption per day per degree of temperature for the X days with higher levels of sunlight and consumption per day per degree of temperature for the X days with lower levels of sunlight. 24. A non-transitory computer storage medium, storing instructions of a computer program causing the method according to claim 14 to be implemented, when said instructions are executed by a processor of a processing circuit. 25. A server for implementing the method according to claim 14, comprising a memory for storing the database and a processing circuit configured for applying the model for detecting autonomous production means in a dwelling, based on at least the first data on consumption from the meter of that dwelling and on the second data on weather conditions for the area of the dwelling. 26. 
A processing circuit of a server for implementing the method according to claim 14, comprising a memory for storing the database and a processing circuit configured for applying the model for detecting autonomous production means in a dwelling, based on at least the first data on consumption from the meter of that dwelling and on the second data on weather conditions for the area of the dwelling, the processing circuit being configured, for the purposes of implementing the method according to claim 14, to apply the model for detecting autonomous production means in a dwelling, based on at least the first data on consumption from the meter of the dwelling and the second data on weather conditions for the area of that dwelling. | 2,100 |
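The detection and cut-off scheme of claims 14 through 16 above can be sketched in plain Python: derive a midday-dip feature from a meter's consumption readings and the day's sunlight, score the likelihood that the dwelling hosts a photovoltaic unit, and run the greedy remote cut-off loop (highest score first) until the network reads zero volts. The function names, the score formula, and the flat-profile baseline are all illustrative assumptions; the patent does not fix a particular scoring function at this level.

```python
# Illustrative sketch of claims 14-16: feature extraction, PV-presence
# scoring, and the greedy remote cut-off loop. Names and the score formula
# are hypothetical, not taken from the patent.

def pv_score(hourly_kwh, sunlight):
    """Score in [0, 1]: high when midday consumption dips on sunny days.

    hourly_kwh: list of 24 consumption readings for one day (kWh).
    sunlight:   sunlight level for that day, normalized to [0, 1].
    """
    midday = sum(hourly_kwh[10:16])       # 10 a.m. - 4 p.m. window
    total = sum(hourly_kwh) or 1e-9
    midday_share = midday / total         # PV homes show a low midday share
    expected_share = 6 / 24               # flat-profile baseline (assumption)
    dip = max(0.0, expected_share - midday_share) / expected_share
    # Weight the dip by sunlight so a cloudy-day dip counts for less.
    return min(1.0, dip * min(1.0, sunlight))

def cut_off_until_dead(scores, read_network_voltage, cut_off_meter):
    """Greedy loop of claim 16: cut off the highest-scoring meter, re-measure
    the network voltage, and repeat until it reads zero. Returns the meter
    identifiers that were cut off, in order."""
    remaining = dict(scores)              # meter_id -> presence score
    cut = []
    while read_network_voltage() != 0 and remaining:
        meter_id = max(remaining, key=remaining.get)
        cut_off_meter(meter_id)           # remote cut-off command to the meter
        cut.append(meter_id)
        del remaining[meter_id]
    return cut
```

In use, `read_network_voltage` would wrap the instantaneous voltage measurement of claim 15 and `cut_off_meter` the remote cut-off command of claim 16; here they are injected as callbacks so the loop itself stays testable.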
6,607 | 6,607 | 14,090,101 | 2,193 | The visual display of the timing of execution of a marker. During a time frame, a first application program interface, which is configured to represent a first marker, is executed on a first thread of execution of an application. The first application program interface generates a first event for visualization on the display, when executed. During the time frame, a second application program interface, which is configured to represent a second marker, is also executed on the first thread of execution of the application. The second application program interface generates a second event for visualization on the display, when executed. A visualization of the first marker and the second marker is displayed on a timeline visualization of activity of the first thread of execution of the application in the context of the time frame. | 1. A method performed by a computing system having a display for visually indicating on the display a timing of execution of a marker, the method comprising:
during a time frame, an act of one or more processors of the computing system executing a first application program interface on a first thread of execution of an application, the first application program interface configured to represent a first marker, the first application program interface generating a first event for visualization on the display, when executed; during the time frame, an act of executing a second application program interface on the first thread of execution of the application, the second application program interface configured to represent a second marker, the second application program interface generating a second event for visualization on the display, when executed; and an act of displaying a visualization of the first marker and the second marker on a timeline visualization of activity of the first thread of execution of the application in the context of the time frame. 2. The method in accordance with claim 1, wherein the first marker includes a mechanism for turning the visualization of the first marker on and off through an event generation infrastructure. 3. The method in accordance with claim 1, wherein the first application program interface includes a text description that is displayed in the timeline visualization of activity. 4. The method in accordance with claim 1, wherein the first marker is displayed as a first vertical bar and the second marker is displayed as a second vertical bar. 5. The method in accordance with claim 1, wherein the visualization of the first marker also includes an association with the first thread of execution. 6. The method in accordance with claim 1, wherein the timeline visualization of the execution of the application includes a plurality of threads. 7. The method in accordance with claim 1, further comprising an act of capturing a first time that the first event executed. 8. 
The method in accordance with claim 1, wherein the timeline is represented in a horizontal direction proceeding from left to right. 9. The method in accordance with claim 1, wherein the timeline visualization can include any one of: the plurality of threads, disk activity, kernel activity, or processor activity. 10. The method in accordance with claim 1, wherein the timeline visualization provides an option to expand portions of the timeline for a detailed view. 11. The method in accordance with claim 1, wherein the executing thread represented on the timeline is assigned an identifier. 12. A computer program product comprising one or more hardware storage devices having stored thereon computer-executable instructions that, when executed by one or more processors of a computing system having a display for visually indicating on the display a timing of execution of a marker, cause the computer system to perform at least the following:
during a time frame, executing a first application program interface on a first thread of execution of an application, the first application program interface configured to represent a first marker, the first application program interface generating a first event for visualization on the display, when executed; during the time frame, executing a second application program interface on the first thread of execution of the application, the second application program interface configured to represent a second marker, the second application program interface generating a second event for visualization on the display, when executed; and displaying a visualization of the first marker and the second marker on a timeline visualization of activity of the first thread of execution of the application in the context of the time frame. 13. The computer program product in accordance with claim 12, wherein the first marker includes a mechanism for turning the visualization of the first marker on and off through an event generation infrastructure. 14. The computer program product in accordance with claim 12, wherein the first application program interface includes a text description that is displayed in the timeline visualization of activity. 15. The computer program product in accordance with claim 12, wherein the first marker is displayed as a first vertical bar and the second marker is displayed as a second vertical bar. 16. The computer program product in accordance with claim 12, wherein the visualization of the first marker also includes an association with the first thread of execution. 17. The computer program product in accordance with claim 12, further comprising an act of capturing a first time that the first event executed. 18. The computer program product in accordance with claim 12, wherein the timeline is represented in a horizontal direction proceeding from left to right. 19. A computer system, comprising:
one or more processors; a display; and one or more computer-readable media having stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computer system to perform at least the following: during a time frame, executing a first application program interface on a first thread of execution of an application, the first application program interface configured to represent a first marker, the first application program interface generating a first event for visualization on the display, when executed; during the time frame, executing a second application program interface on the first thread of execution of the application, the second application program interface configured to represent a second marker, the second application program interface generating a second event for visualization on the display, when executed; and displaying a visualization of the first marker and the second marker on a timeline visualization of activity of the first thread of execution of the application in the context of the time frame. 20. The computer system in accordance with claim 19, wherein the first application program interface includes a text description that is displayed in the timeline visualization of activity. | The visual display of the timing of execution of a marker. During a time frame, a first application program interface, which is configured to represent a first marker, is executed on a first thread of execution of an application. The first application program interface generates a first event for visualization on the display, when executed. During the time frame, a second application program interface, which is configured to represent a second marker, is also executed on the first thread of execution of the application. The second application program interface generates a second event for visualization on the display, when executed. 
A visualization of the first marker and the second marker is displayed on a timeline visualization of activity of the first thread of execution of the application in the context of the time frame.1. A method performed by a computing system having a display for visually indicating on the display a timing of execution of a marker, the method comprising:
during a time frame, an act of one or more processors of the computing system executing a first application program interface on a first thread of execution of an application, the first application program interface configured to represent a first marker, the first application program interface generating a first event for visualization on the display, when executed; during the time frame, an act of executing a second application program interface on the first thread of execution of the application, the second application program interface configured to represent a second marker, the second application program interface generating a second event for visualization on the display, when executed; and an act of displaying a visualization of the first marker and the second marker on a timeline visualization of activity of the first thread of execution of the application in the context of the time frame. 2. The method in accordance with claim 1, wherein the first marker includes a mechanism for turning the visualization of the first marker on and off through an event generation infrastructure. 3. The method in accordance with claim 1, wherein the first application program interface includes a text description that is displayed in the timeline visualization of activity. 4. The method in accordance with claim 1, wherein the first marker is displayed as a first vertical bar and the second marker is displayed as a second vertical bar. 5. The method in accordance with claim 1, wherein the visualization of the first marker also includes an association with the first thread of execution. 6. The method in accordance with claim 1, wherein the timeline visualization of the execution of the application includes a plurality of threads. 7. The method in accordance with claim 1, further comprising an act of capturing a first time that the first event executed. 8. 
The method in accordance with claim 1, wherein the timeline is represented in a horizontal direction proceeding from left to right. 9. The method in accordance with claim 1, wherein the timeline visualization can include any one of: the plurality of threads, disk activity, kernel activity, or processor activity. 10. The method in accordance with claim 1, wherein the timeline visualization provides an option to expand portions of the timeline for a detailed view. 11. The method in accordance with claim 1, wherein the executing thread represented on the timeline is assigned an identifier. 12. A computer program product comprising one or more hardware storage devices having stored thereon computer-executable instructions that, when executed by one or more processors of a computing system having a display for visually indicating on the display a timing of execution of a marker, cause the computer system to perform at least the following:
during a time frame, executing a first application program interface on a first thread of execution of an application, the first application program interface configured to represent a first marker, the first application program interface generating a first event for visualization on the display, when executed; during the time frame, executing a second application program interface on the first thread of execution of the application, the second application program interface configured to represent a second marker, the second application program interface generating a second event for visualization on the display, when executed; and displaying a visualization of the first marker and the second marker on a timeline visualization of activity of the first thread of execution of the application in the context of the time frame. 13. The computer program product in accordance with claim 12, wherein the first marker includes a mechanism for turning the visualization of the first marker on and off through an event generation infrastructure. 14. The computer program product in accordance with claim 12, wherein the first application program interface includes a text description that is displayed in the timeline visualization of activity. 15. The computer program product in accordance with claim 12, wherein the first marker is displayed as a first vertical bar and the second marker is displayed as a second vertical bar. 16. The computer program product in accordance with claim 12, wherein the visualization of the first marker also includes an association with the first thread of execution. 17. The computer program product in accordance with claim 12, further comprising an act of capturing a first time that the first event executed. 18. The computer program product in accordance with claim 12, wherein the timeline is represented in a horizontal direction proceeding from left to right. 19. A computer system, comprising:
one or more processors; a display; and one or more computer-readable media having stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computer system to perform at least the following: during a time frame, executing a first application program interface on a first thread of execution of an application, the first application program interface configured to represent a first marker, the first application program interface generating a first event for visualization on the display, when executed; during the time frame, executing a second application program interface on the first thread of execution of the application, the second application program interface configured to represent a second marker, the second application program interface generating a second event for visualization on the display, when executed; and displaying a visualization of the first marker and the second marker on a timeline visualization of activity of the first thread of execution of the application in the context of the time frame. 20. The computer system in accordance with claim 19, wherein the first application program interface includes a text description that is displayed in the timeline visualization of activity. | 2,100 |
6,608 | 6,608 | 15,608,493 | 2,165 | A system and method for determining analytics based on multimedia content elements. The method includes causing generation of at least one signature for at least one input multimedia content element, wherein each signature represents a concept, wherein each concept is a collection of signatures and metadata representing the concept; comparing the generated at least one signature to a plurality of signatures of reference multimedia content elements to determine at least one matching reference multimedia content element, wherein each reference multimedia content element is associated with at least one predetermined analytic; and determining, based on the comparison, at least one analytic, wherein the determined at least one analytic includes the at least one predetermined analytic associated with each matching reference multimedia content element. | 1. A method for determining analytics based on multimedia content elements, comprising:
causing generation of at least one signature for at least one input multimedia content element, wherein each signature represents a concept, wherein each concept is a collection of signatures and metadata representing the concept; comparing the generated at least one signature to a plurality of signatures of reference multimedia content elements to determine at least one matching reference multimedia content element, wherein each reference multimedia content element is associated with at least one predetermined analytic; and determining, based on the comparison, at least one analytic, wherein the determined at least one analytic includes the at least one predetermined analytic associated with each matching reference multimedia content element. 2. The method of claim 1, further comprising:
creating a profile, wherein the profile includes the determined at least one analytic. 3. The method of claim 2, wherein the created profile indicates at least one consumer preference, wherein the at least one preference includes at least one of: preferences of a particular consumer, and general preferences of a plurality of consumers. 4. The method of claim 1, wherein the signatures of each matching reference multimedia content element match the at least one generated signature above a predetermined threshold. 5. The method of claim 1, wherein determining the at least one analytic further comprises:
sending, to a deep content classification system, at least one of: the at least one input multimedia content element, and the at least one signature generated for the at least one input multimedia content element; receiving, from the deep content classification system, at least one concept matching the input multimedia content element; and creating at least one analytic based on the metadata representing the matching at least one concept, wherein the determined at least one analytic further includes the created at least one analytic. 6. The method of claim 1, wherein the at least one analytic includes at least one of: at least one movement, at least one interaction with an object, and at least one indication of a person. 7. The method of claim 5, wherein the at least one analytic includes: a path taken by an individual within a target area, at least one body movement, at least one movement within a target area, at least one facial movement, picking up an object, placing an object, and an indication of at least one criminal identified in a criminal database. 8. The method of claim 1, wherein each input multimedia content element is at least one of: an image, graphics, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, images of signals, and a portion thereof. 9. The method of claim 1, wherein each signature is generated by a signature generator system, wherein the signature generator system includes a plurality of at least partially statistically independent computational cores, wherein the properties of each core are set independently of the properties of each other core. 10. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to execute a process, the process comprising:
causing generation of at least one signature for at least one input multimedia content element, wherein each signature represents a concept, wherein each concept is a collection of signatures and metadata representing the concept; comparing the generated at least one signature to a plurality of signatures of reference multimedia content elements to determine at least one matching reference multimedia content element, wherein each reference multimedia content element is associated with at least one predetermined analytic; and determining, based on the comparison, at least one analytic, wherein the determined at least one analytic includes the at least one predetermined analytic associated with each matching reference multimedia content element. 11. A system for determining analytics based on multimedia content elements, comprising:
a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configures the system to: cause generation of at least one signature for at least one input multimedia content element, wherein each signature represents a concept, wherein each concept is a collection of signatures and metadata representing the concept; compare the generated at least one signature to a plurality of signatures of reference multimedia content elements to determine at least one matching reference multimedia content element, wherein each reference multimedia content element is associated with at least one predetermined analytic; and determine, based on the comparison, at least one analytic, wherein the determined at least one analytic includes the at least one predetermined analytic associated with each matching reference multimedia content element. 12. The system of claim 11, wherein the system is further configured to:
create a profile, wherein the profile includes the determined at least one analytic. 13. The system of claim 12, wherein the created profile indicates at least one consumer preference, wherein the at least one preference includes at least one of: preferences of a particular consumer, and general preferences of a plurality of consumers. 14. The system of claim 11, wherein the signatures of each matching reference multimedia content element match the at least one generated signature above a predetermined threshold. 15. The system of claim 11, wherein the system is further configured to:
send, to a deep content classification system, at least one of: the at least one input multimedia content element, and the at least one signature generated for the at least one input multimedia content element; receive, from the deep content classification system, at least one concept matching the input multimedia content element; and create at least one analytic based on the metadata representing the matching at least one concept, wherein the determined at least one analytic further includes the created at least one analytic. 16. The system of claim 11, wherein the at least one analytic includes at least one of: at least one movement, at least one interaction with an object, and at least one indication of a person. 17. The system of claim 15, wherein the at least one analytic includes: a path taken by an individual within a target area, at least one body movement, at least one movement within a target area, at least one facial movement, picking up an object, placing an object, and an indication of at least one criminal identified in a criminal database. 18. The system of claim 11, wherein each input multimedia content element is at least one of: an image, graphics, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, images of signals, and a portion thereof. 19. The system of claim 11, wherein each signature is generated by a signature generator system, wherein the signature generator system includes a plurality of at least partially statistically independent computational cores, wherein the properties of each core are set independently of the properties of each other core.
The method includes causing generation of at least one signature for at least one input multimedia content element, wherein each signature represents a concept, wherein each concept is a collection of signatures and metadata representing the concept; comparing the generated at least one signature to a plurality of signatures of reference multimedia content elements to determine at least one matching reference multimedia content element, wherein each reference multimedia content element is associated with at least one predetermined analytic; and determining, based on the comparison, at least one analytic, wherein the determined at least one analytic includes the at least one predetermined analytic associated with each matching reference multimedia content element.1. A method for determining analytics based on multimedia content elements, comprising:
causing generation of at least one signature for at least one input multimedia content element, wherein each signature represents a concept, wherein each concept is a collection of signatures and metadata representing the concept; comparing the generated at least one signature to a plurality of signatures of reference multimedia content elements to determine at least one matching reference multimedia content element, wherein each reference multimedia content element is associated with at least one predetermined analytic; and determining, based on the comparison, at least one analytic, wherein the determined at least one analytic includes the at least one predetermined analytic associated with each matching reference multimedia content element. 2. The method of claim 1, further comprising:
creating a profile, wherein the profile includes the determined at least one analytic. 3. The method of claim 2, wherein the created profile indicates at least one consumer preference, wherein the at least one preference includes at least one of: preferences of a particular consumer, and general preferences of a plurality of consumers. 4. The method of claim 1, wherein the signatures of each matching reference multimedia content element match the at least one generated signature above a predetermined threshold. 5. The method of claim 1, wherein determining the at least one analytic further comprises:
sending, to a deep content classification system, at least one of: the at least one input multimedia content element, and the at least one signature generated for the at least one input multimedia content element; receiving, from the deep content classification system, at least one concept matching the input multimedia content element; and creating at least one analytic based on the metadata representing the matching at least one concept, wherein the determined at least one analytic further includes the created at least one analytic. 6. The method of claim 1, wherein the at least one analytic includes at least one of: at least one movement, at least one interaction with an object, and at least one indication of a person. 7. The method of claim 5, wherein the at least one analytic includes: a path taken by an individual within a target area, at least one body movement, at least one movement within a target area, at least one facial movement, picking up an object, placing an object, and an indication of at least one criminal identified in a criminal database. 8. The method of claim 1, wherein each input multimedia content element is at least one of: an image, graphics, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, images of signals, and a portion thereof. 9. The method of claim 1, wherein each signature is generated by a signature generator system, wherein the signature generator system includes a plurality of at least partially statistically independent computational cores, wherein the properties of each core are set independently of the properties of each other core. 10. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to execute a process, the process comprising:
causing generation of at least one signature for at least one input multimedia content element, wherein each signature represents a concept, wherein each concept is a collection of signatures and metadata representing the concept; comparing the generated at least one signature to a plurality of signatures of reference multimedia content elements to determine at least one matching reference multimedia content element, wherein each reference multimedia content element is associated with at least one predetermined analytic; and determining, based on the comparison, at least one analytic, wherein the determined at least one analytic includes the at least one predetermined analytic associated with each matching reference multimedia content element. 11. A system for determining analytics based on multimedia content elements, comprising:
a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configures the system to: cause generation of at least one signature for at least one input multimedia content element, wherein each signature represents a concept, wherein each concept is a collection of signatures and metadata representing the concept; compare the generated at least one signature to a plurality of signatures of reference multimedia content elements to determine at least one matching reference multimedia content element, wherein each reference multimedia content element is associated with at least one predetermined analytic; and determine, based on the comparison, at least one analytic, wherein the determined at least one analytic includes the at least one predetermined analytic associated with each matching reference multimedia content element. 12. The system of claim 11, wherein the system is further configured to:
create a profile, wherein the profile includes the determined at least one analytic. 13. The system of claim 12, wherein the created profile indicates at least one consumer preference, wherein the at least one preference includes at least one of: preferences of a particular consumer, and general preferences of a plurality of consumers. 14. The system of claim 11, wherein the signatures of each matching reference multimedia content element match the at least one generated signature above a predetermined threshold. 15. The system of claim 11, wherein the system is further configured to:
send, to a deep content classification system, at least one of: the at least one input multimedia content element, and the at least one signature generated for the at least one input multimedia content element; receive, from the deep content classification system, at least one concept matching the input multimedia content element; and create at least one analytic based on the metadata representing the matching at least one concept, wherein the determined at least one analytic further includes the created at least one analytic. 16. The system of claim 11, wherein the at least one analytic includes at least one of: at least one movement, at least one interaction with an object, and at least one indication of a person. 17. The system of claim 15, wherein the at least one analytic includes: a path taken by an individual within a target area, at least one body movement, at least one movement within a target area, at least one facial movement, picking up an object, placing an object, and an indication of at least one criminal identified in a criminal database. 18. The system of claim 11, wherein each input multimedia content element is at least one of: an image, graphics, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, images of signals, and a portion thereof. 19. The system of claim 11, wherein each signature is generated by a signature generator system, wherein the signature generator system includes a plurality of at least partially statistically independent computational cores, wherein the properties of each core are set independently of the properties of each other core. | 2,100 
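Claims 4 and 14 of the multimedia analytics record above hinge on reference signatures matching the generated signature "above a predetermined threshold," and claim 1 derives the analytics from the matching references. The patent does not specify the signature representation or the comparison metric, so the following is only an illustrative sketch, assuming signatures are fixed-length numeric vectors compared by cosine similarity; the names (`cosine`, `determine_analytics`) and the threshold value are hypothetical:

```python
def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def determine_analytics(input_sig, references, threshold=0.8):
    """Collect the predetermined analytics of every reference whose
    signature matches the input signature above the threshold."""
    analytics = []
    for ref in references:
        if cosine(input_sig, ref["signature"]) >= threshold:
            analytics.extend(ref["analytics"])
    return analytics

refs = [
    {"signature": [1.0, 0.0, 1.0], "analytics": ["body movement"]},
    {"signature": [0.0, 1.0, 0.0], "analytics": ["object interaction"]},
]
print(determine_analytics([0.9, 0.1, 1.1], refs))  # ['body movement']
```

Under these assumptions, the determined analytics are simply the union of the predetermined analytics of every reference element whose similarity clears the threshold, mirroring the "determining" step of claim 1.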
6,609 | 6,609 | 16,516,061 | 2,175 | Music compilation methods disclosed herein include providing a database. Data is stored therein associating a user with access credentials for a plurality of music streaming services. A first server is communicatively coupled with the database and with multiple third party servers each of which includes a music library associated with the user. A list is stored in the database listing audio tracks of the libraries. A play selector is displayed on a user interface of a computing device communicatively coupled with the first server. User selection of the play selector initiates playback of a sample set, the sample set including portions of audio tracks in the list. The sample set is determined based on contextual information gathered by the computing device, the contextual information not including any user selection. Music compilation systems disclosed herein include systems configured to carry out the music compilation methods. | 1. A music compilation system, comprising:
one or more first databases; data stored in the one or more first databases associating a user with access credentials for a plurality of third party music streaming services; one or more first servers communicatively coupled with the one or more first databases, the one or more first servers also communicatively coupled with a plurality of third party servers through a telecommunication network, each of the third party servers providing access to a music library associated with the user, each of the third party servers associated with one of the third party music streaming services; a list stored in the one or more first databases listing audio tracks of the music libraries of the third party servers; a computing device communicatively coupled with the one or more first servers through the telecommunication network; and one or more user interfaces displayed on the computing device, the one or more user interfaces comprising:
a play selector configured to, in response to receiving a user selection, initiate play of a sample set using the computing device, wherein the sample set comprises portions of the audio tracks in the list;
wherein the system determines the portions of the audio tracks in the sample set based on contextual information gathered by the computing device, the contextual information not comprising any user selection. 2. The system of claim 1, wherein the sample set comprises multiple partial audio tracks each of which comprises only a portion of a full audio track. 3. The system of claim 1, wherein the one or more first servers are configured to determine, based on the contextual information, one or more of the following: a type of location in which the sample set is being played; a condition of the location; a purpose of an occasion for which the sample set is being played; a social dynamic at the location; a state of mind of the user; a regularity of the occasion; and a time frame. 4. The system of claim 1, wherein the list categorizes each audio track according to one or more of the following criteria: a tempo; an approachability; an engagement; and a sentiment. 5. The system of claim 4, wherein: the tempo is defined as beats per minute; the approachability is defined by one or more of chord progression, time signature, genre, motion of melody, complexity of texture, and instrument composition; the engagement is defined by one or more of dynamics, pan effect, harmony complexity, vocabulary range, and word count; and the sentiment is defined by one or more of chord type, chord progression, and lyric content. 6. The system of claim 1, wherein the system is configured to, in response to receiving the user selection of the play selector, automatically mix audio tracks by transitioning from a first audio track to a second audio track at portions of the two audio tracks that are similar in one or more of harmony, tempo and beat. 7. The system of claim 6, wherein the system is configured to automatically mix the audio tracks based on a determination made by the system of which portions of the two audio tracks are most similar in one or more of harmony, tempo and beat. 8. 
The system of claim 1, wherein the system determines, in response to the user selection of the play selector, a playback time for the sample set, wherein the playback time is not selected by the user but is based on the contextual information. 9. The system of claim 1, wherein the system is configured to repeatedly gather the contextual information using the computing device and, without receiving any user selection, modify the sample set based on a change in the contextual information. 10. A music compilation method, comprising:
providing one or more first databases; associating, in the one or more first databases, a user with a plurality of third party music streaming services; providing one or more first servers communicatively coupled with the one or more first databases, the one or more first servers also communicatively coupled with a plurality of third party servers through a telecommunication network, each of the third party servers providing access to a music library associated with the user, each of the third party servers associated with one of the third party music streaming services; storing a list in the one or more first databases listing audio tracks of the music libraries of the third party servers; displaying a play selector on a user interface of a computing device communicatively coupled with the one or more first servers through the telecommunication network; in response to receiving a user selection of the play selector, initiating play of a sample set using the computing device, wherein the sample set comprises portions of the audio tracks in the list; wherein the sample set is determined based on contextual information gathered by the computing device, the contextual information not comprising any user selection. 11. The method of claim 10, wherein the sample set comprises only partial audio tracks each of which comprises only a portion of a full audio track. 12. The method of claim 10, further comprising determining, using the one or more first servers, based on the contextual information, one or more of the following: a type of location in which the sample set is being played; a condition of the location; a purpose of an occasion for which the sample set is being played; a social dynamic at the location; a state of mind of the user; a regularity of the occasion; and a time frame. 13. 
The method of claim 10, further comprising displaying, on a user interface of the computing device, contextual determinations based on the contextual information, and displaying one or more selectors thereon allowing a user to modify the contextual determinations. 14. The method of claim 10, further comprising, using the one or more first servers, categorizing each audio track according to one or more of the following criteria: a tempo; an approachability; an engagement; and a sentiment. 15. The method of claim 10, further comprising, in response to receiving the user selection of the play selector, automatically mixing audio tracks by transitioning from a first audio track to a second audio track at portions of the two audio tracks that are similar in one or more of harmony, tempo and beat. 16. The method of claim 10, further comprising displaying an off selector on a user interface of the computing device and, in response to receiving a user selection of the off selector, ceasing from gathering the contextual information by the computing device and ceasing from determining the sample set based on the contextual information. 17. The method of claim 10, further comprising displaying a do-not-play selector on a user interface of the computing device allowing the user to make one or more selections to prevent one or more audio tracks in the list from being included in the sample set. 18. The method of claim 10, further comprising analyzing the music libraries of the user using the one or more first servers and storing in the one or more first databases, based on the analysis, one or more music preferences of the user, wherein the sample set is determined at least in part based on the one or more music preferences. 19. 
The method of claim 10, further comprising displaying, on a user interface of the computing device: audio track details of the sample set; a save selector allowing the user to save the sample set; and a share selector allowing the user to share the sample set. 20. A music compilation method, comprising:
providing one or more databases; providing one or more servers communicatively coupled with the one or more databases; storing, in the one or more databases, a list of audio tracks; associating, through the one or more databases, a user with the list of audio tracks; in response to receiving a user selection of a play selector through a user interface of a computing device communicatively coupled with the one or more servers through a telecommunication network, initiating play of a sample set using the computing device, wherein the sample set comprises portions of the audio tracks in the list; wherein the sample set is determined based on contextual information gathered by the computing device, the contextual information not comprising any user selection; and wherein the sample set comprises multiple partial audio tracks each of which comprises only a portion of a full audio track. | Music compilation methods disclosed herein include providing a database. Data is stored therein associating a user with access credentials for a plurality of music streaming services. A first server is communicatively coupled with the database and with multiple third party servers each of which includes a music library associated with the user. A list is stored in the database listing audio tracks of the libraries. A play selector is displayed on a user interface of a computing device communicatively coupled with the first server. User selection of the play selector initiates playback of a sample set, the sample set including portions of audio tracks in the list. The sample set is determined based on contextual information gathered by the computing device, the contextual information not including any user selection. Music compilation systems disclosed herein include systems configured to carry out the music compilation methods.1. A music compilation system, comprising:
one or more first databases; data stored in the one or more first databases associating a user with access credentials for a plurality of third party music streaming services; one or more first servers communicatively coupled with the one or more first databases, the one or more first servers also communicatively coupled with a plurality of third party servers through a telecommunication network, each of the third party servers providing access to a music library associated with the user, each of the third party servers associated with one of the third party music streaming services; a list stored in the one or more first databases listing audio tracks of the music libraries of the third party servers; a computing device communicatively coupled with the one or more first servers through the telecommunication network; and one or more user interfaces displayed on the computing device, the one or more user interfaces comprising:
a play selector configured to, in response to receiving a user selection, initiate play of a sample set using the computing device, wherein the sample set comprises portions of the audio tracks in the list;
wherein the system determines the portions of the audio tracks in the sample set based on contextual information gathered by the computing device, the contextual information not comprising any user selection. 2. The system of claim 1, wherein the sample set comprises multiple partial audio tracks each of which comprises only a portion of a full audio track. 3. The system of claim 1, wherein the one or more first servers are configured to determine, based on the contextual information, one or more of the following: a type of location in which the sample set is being played; a condition of the location; a purpose of an occasion for which the sample set is being played; a social dynamic at the location; a state of mind of the user; a regularity of the occasion; and a time frame. 4. The system of claim 1, wherein the list categorizes each audio track according to one or more of the following criteria: a tempo; an approachability; an engagement; and a sentiment. 5. The system of claim 4, wherein: the tempo is defined as beats per minute; the approachability is defined by one or more of chord progression, time signature, genre, motion of melody, complexity of texture, and instrument composition; the engagement is defined by one or more of dynamics, pan effect, harmony complexity, vocabulary range, and word count; and the sentiment is defined by one or more of chord type, chord progression, and lyric content. 6. The system of claim 1, wherein the system is configured to, in response to receiving the user selection of the play selector, automatically mix audio tracks by transitioning from a first audio track to a second audio track at portions of the two audio tracks that are similar in one or more of harmony, tempo and beat. 7. The system of claim 6, wherein the system is configured to automatically mix the audio tracks based on a determination made by the system of which portions of the two audio tracks are most similar in one or more of harmony, tempo and beat. 8. 
The system of claim 1, wherein the system determines, in response to the user selection of the play selector, a playback time for the sample set, wherein the playback time is not selected by the user but is based on the contextual information. 9. The system of claim 1, wherein the system is configured to repeatedly gather the contextual information using the computing device and, without receiving any user selection, modify the sample set based on a change in the contextual information. 10. A music compilation method, comprising:
providing one or more first databases; associating, in the one or more first databases, a user with a plurality of third party music streaming services; providing one or more first servers communicatively coupled with the one or more first databases, the one or more first servers also communicatively coupled with a plurality of third party servers through a telecommunication network, each of the third party servers providing access to a music library associated with the user, each of the third party servers associated with one of the third party music streaming services; storing a list in the one or more first databases listing audio tracks of the music libraries of the third party servers; displaying a play selector on a user interface of a computing device communicatively coupled with the one or more first servers through the telecommunication network; in response to receiving a user selection of the play selector, initiating play of a sample set using the computing device, wherein the sample set comprises portions of the audio tracks in the list; wherein the sample set is determined based on contextual information gathered by the computing device, the contextual information not comprising any user selection. 11. The method of claim 10, wherein the sample set comprises only partial audio tracks each of which comprises only a portion of a full audio track. 12. The method of claim 10, further comprising determining, using the one or more first servers, based on the contextual information, one or more of the following: a type of location in which the sample set is being played; a condition of the location; a purpose of an occasion for which the sample set is being played; a social dynamic at the location; a state of mind of the user; a regularity of the occasion; and a time frame. 13. 
The method of claim 10, further comprising displaying, on a user interface of the computing device, contextual determinations based on the contextual information, and displaying one or more selectors thereon allowing a user to modify the contextual determinations. 14. The method of claim 10, further comprising, using the one or more first servers, categorizing each audio track according to one or more of the following criteria: a tempo; an approachability; an engagement; and a sentiment. 15. The method of claim 10, further comprising, in response to receiving the user selection of the play selector, automatically mixing audio tracks by transitioning from a first audio track to a second audio track at portions of the two audio tracks that are similar in one or more of harmony, tempo and beat. 16. The method of claim 10, further comprising displaying an off selector on a user interface of the computing device and, in response to receiving a user selection of the off selector, ceasing from gathering the contextual information by the computing device and ceasing from determining the sample set based on the contextual information. 17. The method of claim 10, further comprising displaying a do-not-play selector on a user interface of the computing device allowing the user to make one or more selections to prevent one or more audio tracks in the list from being included in the sample set. 18. The method of claim 10, further comprising analyzing the music libraries of the user using the one or more first servers and storing in the one or more first databases, based on the analysis, one or more music preferences of the user, wherein the sample set is determined at least in part based on the one or more music preferences. 19. 
The method of claim 10, further comprising displaying, on a user interface of the computing device: audio track details of the sample set; a save selector allowing the user to save the sample set; and a share selector allowing the user to share the sample set. 20. A music compilation method, comprising:
providing one or more databases; providing one or more servers communicatively coupled with the one or more databases; storing, in the one or more databases, a list of audio tracks; associating, through the one or more databases, a user with the list of audio tracks; in response to receiving a user selection of a play selector through a user interface of a computing device communicatively coupled with the one or more servers through a telecommunication network, initiating play of a sample set using the computing device, wherein the sample set comprises portions of the audio tracks in the list; wherein the sample set is determined based on contextual information gathered by the computing device, the contextual information not comprising any user selection; and wherein the sample set comprises multiple partial audio tracks each of which comprises only a portion of a full audio track. | 2,100 |
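Claims 1 and 10 of the music compilation record above turn on assembling a "sample set" of track portions from contextual information rather than user selection, and claims 4-5 categorize tracks by tempo in beats per minute. As a rough illustration of that idea only (the patent's contextual determinations also cover location type, occasion, social dynamic, and more), here is a sketch in which a hypothetical context-to-BPM mapping stands in for the full contextual model; all names and values are invented for the example:

```python
# Hypothetical mapping from an inferred context to an acceptable BPM range.
CONTEXT_BPM = {"workout": (120, 180), "dinner": (60, 100)}

def build_sample_set(tracks, context, clip_seconds=30):
    """Select excerpts (portions of full audio tracks) whose tempo
    falls inside the BPM range implied by the inferred context."""
    lo, hi = CONTEXT_BPM[context]
    return [
        {"title": t["title"], "start": 0, "end": min(clip_seconds, t["duration"])}
        for t in tracks
        if lo <= t["bpm"] <= hi
    ]

library = [
    {"title": "Track A", "bpm": 150, "duration": 210},
    {"title": "Track B", "bpm": 80, "duration": 25},
]
print(build_sample_set(library, "workout"))
# [{'title': 'Track A', 'start': 0, 'end': 30}]
```

Each returned entry covers only a portion of a full audio track, matching claim 20's requirement that the sample set comprise partial tracks; a real system would derive the context from sensor data gathered by the computing device rather than a literal string.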
6,610 | 6,610 | 13,853,718 | 2,163 | A new approach is proposed that contemplates systems and methods to discover one or more terms related to one or more query terms submitted by a user for search over a social media network, wherein the related terms discovered are trending and co-occurring with the submitted query terms over the social media network during a specific period of time. The terms related to the submitted keywords can be discovered based on various measurements that measure the trending characteristics of the terms in the social media content items collected over a period of time. Once the terms related to the submitted keywords have been discovered, they can be utilized to search or perform aggregated metrics and analytics on the social network together with the user-submitted query terms for content items containing all or most of the query terms and/or the related terms, wherein such content items obtained are presented as the search result to the user or subject to aggregate metrics and analytics presented to the user. | 1. A system, comprising:
a social media content analysis engine, which in operation, discovers one or more terms related to one or more query terms submitted by a user for analysis over a social media network, wherein the related terms discovered are trending and co-occurring with the submitted query terms over the social media network during a specific period of time; a social media content collection engine, which in operation,
accepts the one or more query terms submitted by the user;
utilizes both the query terms submitted and the related terms discovered to search over the social network or compute aggregate metrics and analytics in real time;
retrieves a plurality of content items matching all or at least a subset of the query terms and the related terms and presents the retrieved content items as a search result to the user, or computes aggregate metrics and analytics for the said plurality of matching content items and presents the computed metrics/analytics to the user. 2. The system of claim 1, wherein:
the social network is a publicly accessible web-based platform or community that enables its users/members to post, share, communicate, and interact with each other. 3. The system of claim 1, wherein:
the social network is any other web-based community. 4. The system of claim 1, wherein:
the content items on the social media network include one or more of citations, tweets, replies and/or re-tweets to the tweets, posts, comments to other users' posts, opinions, feeds, connections, references, links to other websites or applications, or any other activities on the social network. 5. The system of claim 1, wherein:
the social media content collection engine continuously retrieves social media content items from the social network in real time. 6. The system of claim 1, wherein:
the social media content analysis engine discovers the related terms by examining a historical archive of recent content items retrieved from the social network for top trending terms co-occurring with the submitted keywords before searching over the social network. 7. The system of claim 1, wherein:
the social media content analysis engine dynamically discovers the related terms by examining a social media content stream retrieved from the social network in real time and applies the related terms discovered to search for the content items together with the user-submitted keywords. 8. The system of claim 1, wherein:
the social media content analysis engine discovers the related terms via a significant post index, which includes content items that contain a link or a re-post to another content item. 9. The system of claim 1, wherein:
the social media content analysis engine discovers and/or sorts the related terms based on unexpectedness of the terms, where weight is given to the terms that are uncommon in the general search. 10. The system of claim 1, wherein:
the social media content analysis engine discovers and/or sorts the related terms based on contemporaneousness of the terms, where weight is given to the terms whose rate of co-occurrence with the keywords submitted has increased significantly in a short period of time. 11. The system of claim 1, wherein:
the social media content analysis engine discovers and/or sorts the related terms based on meaningfulness of the terms, where weight is given to the terms whose absolute rate of co-occurrence with the query is larger than others. 12. The system of claim 1, wherein:
the social media content analysis engine discovers and/or sorts the related terms based on intentions of the terms, where weight is given to hashtags because they suggest an intent to query. 13. The system of claim 1, wherein:
the social media content analysis engine discovers and/or sorts the related terms based on momentum of the terms, which measures the combined popularity of the terms and the speed at which that popularity is increasing. 14. The system of claim 1, wherein:
the social media content analysis engine discovers and/or sorts the related terms based on velocity of the terms, which solely measures the speed at which the terms' popularity is increasing, independent of the terms' overall popularity. 15. The system of claim 1, wherein:
the social media content analysis engine discovers and/or sorts the related terms based on peak of the terms, which indicates the time period that has the highest number of content items containing the terms over the time period selected. 16. The system of claim 1, wherein:
the social media content analysis engine discovers and/or sorts the related terms based on influence of the terms, which measures the total number of influential mentions/retweets of a content item containing the terms over the lifetime of the content item. 17. A method, comprising:
accepting one or more keywords submitted by a user for search or analysis of content items over a social media network; discovering one or more terms related to the keywords submitted, wherein such related terms are trending and co-occurring with the submitted keywords over the social media network during a specific period of time; utilizing both the keywords submitted and the related terms discovered to search or analyze for a plurality of content items over the social network in real time; and retrieving the content items or aggregate analytics and metrics for items containing all or at least a subset of the keywords and the related terms and presenting the retrieved content items as a search result to the user. 18. The method of claim 17, further comprising:
retrieving social media content items or aggregate analytics from the social network continuously in real time. 19. The method of claim 17, further comprising:
discovering the related terms by examining a historical archive of recent content items retrieved from the social network for top trending terms co-occurring with the submitted keywords before searching over the social network. 20. The method of claim 17, further comprising:
dynamically discovering the related terms by examining a social media content stream retrieved from the social network in real time and applying the related terms discovered to search or compute aggregate analytics for the content items together with the user-submitted keywords. 21. The method of claim 17, further comprising:
discovering the related terms via a significant post index, which includes content items that contain a link or a re-post to another content item. 22. The method of claim 17, further comprising:
discovering and/or sorting the related terms based on unexpectedness of the terms, where weight is given to the terms that are uncommon in the general search or analysis. 23. The method of claim 17, further comprising:
discovering and/or sorting the related terms based on contemporaneousness of the terms, where weight is given to the terms whose rate of co-occurrence with the keywords submitted has increased significantly in a short period of time. 24. The method of claim 17, further comprising:
discovering and/or sorting the related terms based on meaningfulness of the terms, where weight is given to the terms whose absolute rate of co-occurrence with the query is larger than others. 25. The method of claim 17, further comprising:
discovering and/or sorting the related terms based on intentions of the terms, where weight is given to hashtags because they suggest an intent to query. 26. The method of claim 17, further comprising:
discovering and/or sorting the related terms based on momentum of the terms, which measures the combined popularity of the terms and the speed at which that popularity is increasing. 27. The method of claim 17, further comprising:
discovering and/or sorting the related terms based on velocity of the terms, which solely measures the speed at which the terms' popularity is increasing, independent of the terms' overall popularity. 28. The method of claim 17, further comprising:
discovering and/or sorting the related terms based on peak of the terms, which indicates the time period that has the highest number of content items containing the terms over the time period selected. 29. The method of claim 17, further comprising:
discovering and/or sorting the related terms based on influence of the terms, which measures the total number of influential mentions/retweets of a content item containing the terms over the lifetime of the content item. | 2,100 |
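The claims above enumerate several trending measurements for ranking co-occurring terms — unexpectedness, momentum, velocity, peak, and influence. A minimal sketch of how such scores might be computed over per-time-bucket co-occurrence counts; all function names and the scoring formulas are illustrative assumptions, not taken from the patent:

```python
def velocity(counts):
    """Speed at which popularity is increasing, independent of overall popularity."""
    if len(counts) < 2 or counts[-2] == 0:
        return float(counts[-1] > 0)
    return (counts[-1] - counts[-2]) / counts[-2]

def momentum(counts):
    """Combined overall popularity and the speed at which it is increasing."""
    return sum(counts) * (1.0 + max(velocity(counts), 0.0))

def peak(counts):
    """Index of the time bucket with the highest number of matching content items."""
    return max(range(len(counts)), key=counts.__getitem__)

def unexpectedness(counts, background_freq):
    """Weight terms that are uncommon in general search/analysis more heavily."""
    return sum(counts) * (1.0 - background_freq)

def rank_terms(term_counts, score=momentum):
    """Sort candidate related terms (term -> per-bucket counts) by a chosen measurement."""
    return sorted(term_counts, key=lambda t: score(term_counts[t]), reverse=True)
```

Any of the measurements can be swapped in as the `score` argument, mirroring how the claims present them as interchangeable discovery/sorting criteria.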
6,611 | 6,611 | 15,690,246 | 2,132 | A system and method for backing up workloads for multiple tenants of a cloud computing system are disclosed. A method of backing up workloads for multiple tenants of a computing system includes triggering an archival process according to an archival policy set by a tenant, and executing the archival process by reading backup data of the tenant stored in a backup storage device of the computer system and transmitting the backup data to an archival store designated in the archival policy, and then deleting or invalidating the backup data stored in the backup storage device. | 1. A method of backing up workloads for multiple tenants of a computing system, comprising:
triggering an archival process according to an archival policy set by a tenant; and executing the archival process by reading backup data of the tenant stored in a backup storage device of the computer system and transmitting the backup data to an archival store designated in the archival policy, and then deleting or invalidating the backup data stored in the backup storage device. 2. The method of claim 1, further comprising:
placing archival tasks in a scheduling queue and prioritizing the archival tasks in the scheduling queue, wherein the archival process is executed according to an order of the archival tasks in the scheduling queue. 3. The method of claim 2, wherein
if the archival policy for the tenant assigns a first priority to first backup data stored in the backup storage device for the first tenant and assigns a second priority, which is a higher priority than the first priority, to second backup data stored in the backup storage device for the first tenant, a first archival task for archiving the first backup data is performed prior to a second archival task for archiving the second backup data. 4. The method of claim 1, further comprising:
responsive to a request for an expedited archival task, placing the expedited archival task in the scheduling queue behind other expedited archival tasks and ahead of all other archival tasks. 5. The method of claim 4, wherein the expedited archival task is requested when storage space in the backup storage device for a tenant falls below a minimum threshold. 6. The method of claim 1, wherein the backup storage device has a first tier of storage and a second tier of storage that is slower than the first tier of storage, and first backup data for the tenant is stored in the first tier of storage and second backup data for the tenant, which has been retained for a longer period of time in the backup storage device than the first backup data, is stored in the second tier of storage. 7. The method of claim 6, wherein third backup data for the tenant, which has been retained for a longer period of time in the backup storage device than the second backup data, is scheduled for archival. 8. The method of claim 1, wherein the archival store is an object store. 9. A non-transitory computer readable medium comprising instructions to be executed in a computer for managing backups of workloads for multiple tenants of a computing system, wherein the instructions when executed in the computer cause the computer to carry out the steps of:
triggering an archival process according to an archival policy set by a tenant; and executing the archival process by reading backup data of the tenant stored in a backup storage device of the computer system and transmitting the backup data to an archival store designated in the archival policy, and then deleting or invalidating the backup data stored in the backup storage device. 10. The non-transitory computer readable medium of claim 9, wherein the steps carried out by the computer further include:
placing archival tasks in a scheduling queue and prioritizing the archival tasks in the scheduling queue, wherein the archival process is executed according to an order of the archival tasks in the scheduling queue. 11. The non-transitory computer readable medium of claim 10, wherein
if the archival policy for the tenant assigns a first priority to first backup data stored in the backup storage device for the first tenant and assigns a second priority, which is a higher priority than the first priority, to second backup data stored in the backup storage device for the first tenant, a first archival task for archiving the first backup data is performed prior to a second archival task for archiving the second backup data. 12. The non-transitory computer readable medium of claim 9, wherein the steps carried out by the computer further include:
responsive to a request for an expedited archival task, placing the expedited archival task in the scheduling queue behind other expedited archival tasks and ahead of all other archival tasks. 13. The non-transitory computer readable medium of claim 12, wherein the expedited archival task is requested when storage space in the backup storage device for a tenant falls below a minimum threshold. 14. The non-transitory computer readable medium of claim 9, wherein the backup storage device has a first tier of storage and a second tier of storage that is slower than the first tier of storage, and first backup data for the tenant is stored in the first tier of storage and second backup data for the tenant, which has been retained for a longer period of time in the backup storage device than the first backup data, is stored in the second tier of storage. 15. The non-transitory computer readable medium of claim 14, wherein third backup data for the tenant, which has been retained for a longer period of time in the backup storage device than the second backup data, is scheduled for archival. 16. A computing system comprising:
a plurality of computers in each of which virtual machines are running, the virtual machines including virtual machines for a first tenant and virtual machines for a second tenant; and a backup storage device configured to store backup images of the virtual machines for both the first tenant and the second tenant, wherein one of the computers has running therein a data protection service that performs the steps of: triggering an archival process according to an archival policy set by a tenant; and executing the archival process by reading backup data of the tenant stored in the backup storage device of the computer system and transmitting the backup data to an archival store designated in the archival policy, and then deleting or invalidating the backup data stored in the backup storage device. 17. The computing system of claim 16, wherein the steps performed by the data protection service further include:
placing archival tasks in a scheduling queue and prioritizing the archival tasks in the scheduling queue, wherein the archival process is executed according to an order of the archival tasks in the scheduling queue. 18. The computing system of claim 17, wherein
if the archival policy for the tenant assigns a first priority to first backup data stored in the backup storage device for the first tenant and assigns a second priority, which is a higher priority than the first priority, to second backup data stored in the backup storage device for the first tenant, a first archival task for archiving the first backup data is performed prior to a second archival task for archiving the second backup data. 19. The computing system of claim 17, wherein the backup storage device has a first tier of storage and a second tier of storage that is slower than the first tier of storage, and first backup data for the tenant is stored in the first tier of storage and second backup data for the tenant, which has been retained for a longer period of time in the backup storage device than the first backup data, is stored in the second tier of storage. 20. The computing system of claim 19, wherein third backup data for the tenant, which has been retained for a longer period of time in the backup storage device than the second backup data, is scheduled for archival. | 2,100 |
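Claims 2–5 describe a scheduling queue in which an expedited archival task (e.g. requested when a tenant's backup storage falls below a minimum threshold) is placed behind other expedited tasks but ahead of all other archival tasks. One way to sketch that ordering is a two-class priority queue with FIFO tie-breaking; the class and method names are illustrative, not the patent's:

```python
import heapq
import itertools

EXPEDITED, NORMAL = 0, 1  # lower class value is served first

class ArchivalQueue:
    """Two-class FIFO queue: expedited tasks go behind earlier expedited
    tasks but ahead of every normal archival task (illustrative sketch)."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # monotonic counter preserves FIFO order within a class

    def add(self, task, expedited=False):
        cls = EXPEDITED if expedited else NORMAL
        heapq.heappush(self._heap, (cls, next(self._seq), task))

    def pop(self):
        # heap orders by (class, arrival sequence), giving the claimed ordering
        return heapq.heappop(self._heap)[2]
```

For example, enqueuing a normal task and then two expedited tasks yields the two expedited tasks first, in arrival order, before the normal task.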
6,612 | 6,612 | 15,852,488 | 2,156 | A query with a UNION ALL (UA) view is detected by a query optimizer. A query execution plan and cost for the query are obtained. The query is rewritten to push aggregates of the original query into the view. A query execution plan is generated for the rewritten query and a cost for executing the rewritten query is obtained. The lowest cost execution plan is selected for execution by a database engine of a database. | 1. A method, comprising:
receiving a query and generating a query execution plan for the query with a first cost of executing the query execution plan; rewriting the query as a rewritten query by pushing aggregates of the query into a UNION ALL (UA) view; generating a second query execution plan with a second cost of executing the rewritten query; and selecting one of: the query and the rewritten query as a selected query for execution based on a lower cost associated with the first cost and the second cost. 2. The method of claim 1, wherein rewriting further includes grouping branches of the UA view into a first group for single tables and a second group for multi-tables within the rewritten query. 3. The method of claim 1, wherein rewriting further includes pushing the aggregates into a derived table within the rewritten query. 4. The method of claim 1, wherein generating further includes terminating the generating when a running total cost for the second cost exceeds the first cost. 5. The method of claim 1, wherein generating further includes calculating a first portion of the second cost as aggregate costs for pushing the aggregates on single tables and an indices cost for using indices that become eligible because of the aggregates pushed. 6. The method of claim 5, wherein calculating further includes calculating a second portion of the second cost as second aggregate costs for pushing the aggregates on multiple tables or spools. 7. The method of claim 6, wherein calculating further includes grouping the second aggregate cost into local aggregate costs for any local aggregate operations that can or cannot be processed. 8. The method of claim 7, wherein grouping further includes calculating a third portion of the second cost as nested costs of nested aggregate operations representing some of the aggregates that are being pushed on existing aggregate operations present in the UA view. 9.
The method of claim 8, wherein calculating further includes calculating a fourth portion of the second cost as a final cost of aggregating over the aggregates pushed representing a final aggregation of the rewritten query. 10. The method of claim 9, wherein calculating further includes representing the second cost as a sum of the first portion, the second portion, the third portion, and the fourth portion. 11. The method of claim 1 further comprising, providing the selected query for execution to multiple database engines for parallel execution against a database. 12. A method, comprising:
rewriting a query into a rewritten query that pushes aggregates into a UNION ALL (UA) view within the rewritten query; generating a first query execution plan for the query by hiding or masking the aggregates being pushed in the rewritten query during the generation; obtaining a first plan cost for the first query execution plan; revealing the aggregates being pushed from the rewritten query for generating a second query execution plan of the rewritten query; accumulating costs during the generating of the second query execution plan and terminating the generating and selecting the query as a selected query for execution as soon as the costs exceed the first plan cost; and selecting the rewritten query as the selected query for execution with the second query execution plan when the second query execution plan is finished being generated and a final cost representing the accumulated costs is lower than the first plan cost. 13. The method of claim 12, wherein rewriting further includes pushing the aggregates into a derived table in the rewritten query. 14. The method of claim 12, wherein rewriting further includes grouping branches of the UA view into a first group representing single tables and a second group representing multiple tables or spools. 15. The method of claim 12, wherein accumulating further includes accumulating the costs based on the aggregates pushed on single tables and using aggregate join indices that become eligible because of the aggregates pushed. 16. The method of claim 12, wherein accumulating further includes accumulating the costs based on the aggregates pushed on multiple tables or spools. 17. The method of claim 12, wherein accumulating further includes accumulating the costs based on nested aggregate operations identified with the rewritten query during generation of the second query execution plan. 18. 
The method of claim 12 further comprising, providing the selected query to a database engine for execution against a database. 19. A system, comprising:
a data warehouse including:
a query optimizer;
wherein the query optimizer is configured to: i) execute on at least one network node of the data warehouse, ii) rewrite an original query into a rewritten query that pushes aggregates of the original query into a UNION ALL view, iii) calculate a first cost to execute the original query, iv) calculate a second cost to execute the rewritten query, v) select a selected query for execution within the data warehouse as a lower cost of: the first cost and the second cost. 20. The system of claim 19, wherein the query optimizer is further configured to iteratively calculate the second cost while a query execution plan for the rewritten query is being generated, and the query optimizer terminates generation of the query execution plan and identifies the selected query as the original query when the second cost exceeds the first cost. | A query with a UNION ALL (UA) view is detected by a query optimizer. A query execution plan and cost for the query is obtained. The query is rewritten to push aggregates of the original query into the view. A query execution plan is generated for the rewritten query and a cost for executing the rewritten query is obtained. The lowest cost execution plan is selected for execution by a database engine of a database. | 2,100
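The cost-based selection with early termination recited in claims 1, 4, and 20 above can be sketched as follows. This is an illustrative sketch only: plan generation and per-portion costing are stubbed out as a plain list of hypothetical step costs, since the claims do not specify those details.

```python
def select_plan(original_cost, rewritten_step_costs):
    """Return ("original"|"rewritten", accumulated cost) by summing the
    rewritten plan's per-step costs and terminating early, as in claims
    4 and 20, as soon as the running total exceeds the original cost."""
    running = 0.0
    for step_cost in rewritten_step_costs:
        running += step_cost
        if running > original_cost:
            # running total for the second cost exceeds the first cost:
            # stop generating the rewritten plan and keep the original query
            return "original", running
    return "rewritten", running

# Hypothetical step costs; the rewrite is abandoned once 30+40+50 > 100.
choice, cost = select_plan(100.0, [30.0, 40.0, 50.0])
```

When the accumulated second cost stays below the first cost through the end of plan generation, the rewritten query is selected instead.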
6,613 | 6,613 | 13,153,572 | 2,169 | A patent Examiner information accessing system is disclosed for accessing patent Examiner information from a Patent and Trademark Office, or other, database. A search system is provided so that a user can search information aggregated by the Examiner information accessing system. | 1. A computer-implemented method, performed by a computer with a computer processor, the method comprising:
receiving a patent examiner name, identifying a single named patent examiner, input through a user interface display generated by the computer processor; searching, with the computer processor, a patent examiner data store for examiner statistics, representing examination performance and calculated from a plurality of different patent applications worked on by the single named patent examiner; and automatically displaying the examiner statistics, using the computer processor, for the single named patent examiner. 2. The computer-implemented method of claim 1 and further comprising:
displaying a user-actuable compare button proximate at least one of the examiner statistics. 3. The computer-implemented method of claim 2 and further comprising:
receiving user actuation of the user-actuable compare button; and
in response to the user actuation, automatically generating a comparison display showing a relative ranking of the single named patent examiner, relative to a group of additional patent examiners, based on the at least one of the examiner statistics. 4. The computer-implemented method of claim 3 wherein automatically generating the comparison display comprises:
automatically generating the comparison display showing a relative ranking of the single named patent examiner, relative to patent examiners in a same group art unit as the single named patent examiner, based on the at least one of the examiner statistics. 5. The computer-implemented method of claim 3 wherein automatically generating the comparison display comprises:
automatically generating the comparison display showing a relative ranking of the single named patent examiner, relative to patent examiners in a remainder of a patent office, based on the at least one of the examiner statistics. 6. The computer-implemented method of claim 1 wherein receiving a patent examiner name, comprises:
if there are more than one patent examiners with the patent examiner name in the patent examiner data store, then generating a disambiguation display to receive a user input disambiguating among the more than one patent examiners. 7. The computer-implemented method of claim 6 wherein generating the disambiguation display comprises:
generating a list of selectable examiner names corresponding to the more than one patent examiners. 8. The computer-implemented method of claim 1 wherein the statistics include a given statistic that identifies a single patent application. 9. The computer-implemented method of claim 8 wherein automatically displaying the examiner statistics comprises:
displaying a user-actuable button corresponding to the single patent application. 10. The computer-implemented method of claim 9 and further comprising:
receiving a user actuation of the user-actuable button; and
displaying user-selectable links corresponding to a file history of the single patent application. 11. The computer-implemented method of claim 10 wherein the user-actuable links link to documents in the file history. 12. The computer-implemented method of claim 11 and further comprising:
receiving user actuation of one of the user-actuable links; and
displaying a document in the file history to which the one of the user-actuable links is linked. | 2,100
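The comparison display of claims 3-5 above ranks a named examiner against a peer group (the same art unit, or the remainder of the office) on one of the examiner statistics. A minimal sketch, assuming a hypothetical in-memory data store and an invented `allowance_rate` statistic, neither of which is specified by the claims:

```python
def rank_in_group(examiners, name, stat):
    """Rank the named examiner among `examiners` (list of dicts) on the
    statistic `stat`, with 1 being the highest value of the statistic."""
    ordered = sorted(examiners, key=lambda e: e[stat], reverse=True)
    return 1 + [e["name"] for e in ordered].index(name)

# Hypothetical art-unit peer group for the comparison display.
art_unit = [
    {"name": "A. Smith", "allowance_rate": 0.62},
    {"name": "B. Jones", "allowance_rate": 0.48},
    {"name": "C. Lee",   "allowance_rate": 0.71},
]
rank = rank_in_group(art_unit, "A. Smith", "allowance_rate")
```

The same function applied to an office-wide examiner list would produce the claim 5 comparison.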
6,614 | 6,614 | 14,816,999 | 2,125 | A learning computer system may update parameters and states of an uncertain system. The system may receive data from a user or other source; process the received data through layers of processing units, thereby generating processed data; process the processed data to produce one or more intermediate or output signals; compare the one or more intermediate or output signals with one or more reference signals to generate information indicative of a performance measure of one or more of the layers of processing units; send information indicative of the performance measure back through the layers of processing units; process the information indicative of the performance measure in the processing units and in interconnections between the processing units; generate random, chaotic, fuzzy, or other numerical perturbations of the received data, the processed data, or the one or more intermediate or output signals; update the parameters and states of the uncertain system using the received data, the numerical perturbations, and previous parameters and states of the uncertain system; determine whether the generated numerical perturbations satisfy a condition; and if the numerical perturbations satisfy the condition, inject the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. | 1. A learning computer system that updates parameters and states of an uncertain system comprising a data processing system that includes a hardware processor that has a configuration that:
receives data from a user or other source; processes the received data through layers of processing units, thereby generating processed data; processes the processed data to produce one or more intermediate or output signals; compares the one or more intermediate or output signals with one or more reference signals to generate information indicative of a performance measure of one or more of the layers of processing units; sends information indicative of the performance measure back through the layers of processing units; processes the information indicative of the performance measure in the processing units and in interconnections between the processing units; generates random, chaotic, fuzzy, or other numerical perturbations of the received data, the processed data, or the one or more intermediate or output signals; updates the parameters and states of the uncertain system using the received data, the numerical perturbations, and previous parameters and states of the uncertain system; determines whether the generated numerical perturbations satisfy a condition; and if the numerical perturbations satisfy the condition, injects the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. 2. The learning computer system of claim 1 wherein the learning computer system unconditionally injects noise or chaotic or other perturbations into one or more of the estimated parameters or states, the received data, the processed data, or one or more of the processing units. 3. The learning computer system of claim 2 wherein the unconditional injection speeds up learning by the learning computer system. 4. The learning computer system of claim 2 wherein the unconditional injection improves the accuracy of the learning computer system. 5. 
The learning computer system of claim 1 wherein, if the numerical perturbations do not satisfy the condition, the system does not inject the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. 6. The learning computer system of claim 1 wherein the received data represents an image, a speech signal, or other signal. 7. The learning computer system of claim 1 wherein the injection speeds up learning by the learning computer system. 8. The learning computer system of claim 1 wherein the injection improves the accuracy of the learning computer system. 9. A learning computer system that updates parameters and states of an uncertain system comprising a data processing system that includes a hardware processor that has a configuration that:
receives data from a user or other source; processes the received data bi-directionally through two layers of processing units, thereby generating processed data; generates random, chaotic, fuzzy, or other numerical perturbations of the received data, the processed data, or one or more signals within the two layers of processing units; updates the parameters and states of the uncertain system using the received data, the numerical perturbations, and previous parameters and states of the uncertain system; determines whether the generated numerical perturbations satisfy a condition; and if the numerical perturbations satisfy the condition, injects the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. 10. The learning computer system of claim 9 wherein the learning computer system repeats all of the steps of claim 9, except that the processing step during the repeat processes one or both of the two layers of processing units along with a third layer of a processing unit. 11. The learning computer system of claim 10 wherein the learning computer system repeats all of the steps of claim 10 until the received data has been processed bi-directionally through all of the layers of the processing units. 12. The learning computer system of claim 9 wherein the processing units in the two layers of processing units process bi-polar signals. 13. The learning computer system of claim 9 wherein the learning computer system unconditionally injects noise or chaotic or other perturbations into one or more of the estimated parameters or states, the received data, the processed data, or the processing units. 14. 
A non-transitory, tangible, computer-readable storage medium containing a program of instructions that causes a learning computer system running the program of instructions that has a data processing system that includes a hardware processor to update parameters and states of an uncertain system by:
receiving data from a user or other source; processing the received data through layers of processing units, thereby generating processed data; processing the processed data to produce one or more intermediate or output signals; comparing the one or more intermediate or output signals with one or more reference signals to generate information indicative of a performance measure of one or more of the layers of processing units; sending information indicative of the performance measure back through the layers of processing units; processing the information indicative of the performance measure in the processing units and in interconnections between the processing units; generating random, chaotic, fuzzy, or other numerical perturbations of the received data, the processed data, or the one or more intermediate or output signals; updating the parameters and states of the uncertain system using the received data, the numerical perturbations, and previous parameters and states of the uncertain system; determining whether the generated numerical perturbations satisfy a condition; and if the numerical perturbations satisfy the condition, injecting the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. 15. The storage medium of claim 14 wherein the program of instructions causes the learning computer system to unconditionally inject noise or chaotic or other perturbations into one or more of the estimated parameters or states, the received data, the processed data, or the one or more processing units. 16. The storage medium of claim 15 wherein the unconditional injection speeds up learning by the learning computer system. 17. The storage medium of claim 15 wherein the unconditional injection improves the accuracy of the learning computer system. 18. 
The storage medium of claim 14 wherein, if the numerical perturbations do not satisfy the condition, the program of instructions causes the learning computer system not to inject the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. 19. The storage medium of claim 14 wherein the received data represents an image, a speech signal, or other signal. 20. The storage medium of claim 14 wherein the injection speeds up learning by the learning computer system. 21. The storage medium of claim 14 wherein the injection improves the accuracy of the learning computer system. 22. A non-transitory, tangible, computer-readable storage medium containing a program of instructions that causes a learning computer system running the program of instructions that has a data processing system that includes a hardware processor to update parameters and states of an uncertain system by:
receiving data from a user or other source; processing the received data bi-directionally through two layers of processing units, thereby generating processed data; generating random, chaotic, fuzzy, or other numerical perturbations of the received data, the processed data, or one or more signals within the two layers of processing units; updating the parameters and states of the uncertain system using the received data, the numerical perturbations, and previous parameters and states of the uncertain system; determining whether the generated numerical perturbations satisfy a condition; and if the numerical perturbations satisfy the condition, injecting the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. 23. The storage medium of claim 22, wherein the program of instructions causes the learning computer system to repeat all of the steps of claim 22, except that the processing step during the repeat processes one or both of the two layers of processing units along with a third layer of a processing unit. 24. The storage medium of claim 23 wherein the program of instructions causes the learning computer system to repeat all of the steps of claim 23 until the received data has been processed bi-directionally through all of the layers of the processing units. 25. The storage medium of claim 22 wherein processing units in the two layers of processing units process bi-polar signals. 26. The storage medium of claim 22 wherein the program of instructions causes the learning computer system to unconditionally inject noise or chaotic or other perturbations into one or more of the estimated parameters or states, the received data, the processed data, or the processing units.
The storage medium of claim 14 wherein, if the numerical perturbations do not satisfy the condition, the program of instructions causes the learning computer system not to inject the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. 19. The storage medium of claim 14 wherein the received data represents an image, a speech signal, or other signal. 20. The storage medium of claim 14 wherein the injection speeds up learning by the learning computer system. 21. The storage medium of claim 14 wherein the injection improves the accuracy of the learning computer system. 22. A non-transitory, tangible, computer-readable storage medium containing a program of instructions that causes a learning computer system running the program of instructions that has a data processing system that includes a hardware processor to update parameters and states of an uncertain system by:
receiving data from a user or other source; processing the received data bi-directionally through two layers of processing units, thereby generating processed data; generating random, chaotic, fuzzy, or other numerical perturbations of the received data, the processed data, or one or more signals within the two layers of processing units; updating the parameters and states of the uncertain system using the received data, the numerical perturbations, and previous parameters and states of the uncertain system; determining whether the generated numerical perturbations satisfy a condition; and if the numerical perturbations satisfy the condition, injecting the numerical perturbations into one or more of the parameters or states, the received data, the processed data, or one or more of the processing units. 23. The storage medium of claim 22 wherein the program of instructions causes the learning computer system to repeat all of the steps of claim 22, except that the processing step during the repeat processes one or both of the two layers of processing units along with a third layer of a processing unit. 24. The storage medium of claim 23 wherein the program of instructions causes the learning computer system to repeat all of the steps of claim 23 until the received data has been processed bi-directionally through all of the layers of the processing units. 25. The storage medium of claim 22 wherein processing units in the two layers of processing units process bi-polar signals. 26. The storage medium of claim 22 wherein the program of instructions causes the learning computer system to unconditionally inject noise or chaotic or other perturbations into one or more of the estimated parameters or states, the received data, the processed data, or the processing units. | 2,100 |
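Claims 9-13 of the row above describe a learner that generates numerical perturbations, tests them against a condition, and injects them into the parameter update only when the condition holds. A minimal Python sketch of that control flow follows; the toy gradient-style update, the default magnitude condition, and all function names are my own illustrative assumptions, not taken from the patent.

```python
import random

def update_parameters(params, data, noise_scale=0.1, condition=None):
    """One update step that conditionally injects a numerical perturbation.

    Hypothetical illustration of the claimed flow: generate perturbations,
    determine whether they satisfy a condition, and inject them into the
    parameter update only if the condition is satisfied.
    """
    # Generate a random perturbation for each parameter.
    perturbations = [random.gauss(0.0, noise_scale) for _ in params]
    # Default (illustrative) condition: every perturbation stays bounded.
    if condition is None:
        condition = lambda ns: all(abs(n) < 3 * noise_scale for n in ns)
    lr = 0.01
    new_params = []
    for p, x, n in zip(params, data, perturbations):
        grad = p - x                  # toy error signal pulling p toward x
        step = -lr * grad
        if condition(perturbations):  # inject only when the condition holds
            step += lr * n
        new_params.append(p + step)
    return new_params
```

With `condition=lambda ns: False` the update degenerates to the plain (noise-free) step, mirroring the dependent claims in which no injection occurs when the condition fails.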
6,615 | 6,615 | 14,563,469 | 2,199 | According to an example implementation, a computer-readable storage medium, computer-implemented method and a system are provided to receive a first class, the first class indirectly implementing a first interface, wherein the first class extends a second class that directly implements the first interface, identify one or more directly implemented methods within the first class, determine a method signature for one or more of the directly implemented methods, estimate that the first class indirectly implements the first interface based on the method signatures for the one or more directly implemented methods, and instrument the first class based on the estimating that the first class indirectly implements the first interface. | 1-20. (canceled) 21. A non-transitory computer-readable storage medium comprising computer-readable instructions stored thereon that, when executed, are configured to cause a processor to at least:
receive a first class, the first class indirectly implementing a first interface, wherein the first class extends a second class that directly implements the first interface; identify one or more directly implemented methods within the first class; determine a method signature for one or more of the directly implemented methods, each method signature including a method name and one or more method parameters for a method; and estimate that the first class indirectly implements the first interface based on the method signatures for the one or more directly implemented methods. 22. The computer-readable storage medium of claim 21 wherein the instructions comprise instructions, when executed, are configured to further cause the processor to:
instrument the first class based on the estimating that the first class indirectly implements the first interface. 23. The computer-readable storage medium of claim 21 wherein the instructions configured to cause the processor to estimate comprise instructions, when executed, are configured to cause the processor to at least:
compare the method signature for the one or more directly implemented methods to the method signatures of the interface methods; and
identify one or more of the directly implemented methods having a method signature that matches one of the interface method signatures. 24. The computer-readable storage medium of claim 21 wherein the instructions configured to cause the processor to estimate comprise instructions, when executed, are configured to cause the processor to at least:
determine a method signature for each of a plurality of interface methods;
assign a weight to each of the plurality of interface method signatures of the first interface based on a number of occurrences of the associated interface method signature within a group of classes and interfaces;
compare the method signature for the one or more directly implemented methods to the method signatures of the interface methods;
identify one or more of the directly implemented methods having a method signature that matches one of the interface method signatures;
determine a class score for the first class with respect to the first interface as a sum of the weights of each of the interface method signatures that match one of the method signatures of the directly implemented methods;
compare the class score to a threshold; and
determine that the class score is greater than the threshold. 25. The computer-readable storage medium of claim 21 wherein the method signature includes a method name and a number and type of the method's parameters. 26. The computer-readable storage medium of claim 24 wherein the instructions configured to cause the processor to assign a weight comprise instructions, when executed, are configured to cause the processor to:
determine a number of occurrences of the interface method signature within a group that includes at least one class or interface other than the first class, wherein one or more of the classes implements one or more interfaces; and
assign a weight to the interface method signature, wherein the weight assigned to the interface method signature is inversely related to the number of occurrences of the interface method signature within the group. 27. The computer-readable storage medium of claim 22 wherein the first class includes byte-code, wherein the instructions configured to cause the processor to instrument the first class comprise instructions, when executed, are configured to cause the processor to inject additional byte-code into the first class in order to record or monitor an execution of the first class. 28. A computer implemented method performed by a processor comprising:
receiving a first class, the first class indirectly implementing a first interface, wherein the first class extends a second class that directly implements the first interface; identifying one or more directly implemented methods within the first class; determining a method signature for one or more of the directly implemented methods, each method signature including a method name and one or more method parameters for a method; and estimating that the first class indirectly implements the first interface based on the method signatures for the one or more directly implemented methods. 29. The computer implemented method of claim 28 and further comprising:
instrumenting the first class based on the estimating that the first class indirectly implements the first interface. 30. The computer implemented method of claim 28 wherein the estimating comprises:
comparing the method signature for the one or more directly implemented methods to the method signatures of the interface methods; and
identifying one or more of the directly implemented methods having a method signature that matches one of the interface method signatures. 31. The computer implemented method of claim 28 wherein the estimating comprises:
determining a method signature for each of a plurality of interface methods;
assigning a weight to each of the plurality of interface method signatures of the first interface based on a number of occurrences of the associated interface method signature within a group of classes and interfaces;
comparing the method signature for the one or more directly implemented methods to the method signatures of the interface methods;
identifying one or more of the directly implemented methods having a method signature that matches one of the interface method signatures;
determining a class score for the first class with respect to the first interface as a sum of the weights of each of the interface method signatures that match one of the method signatures of the directly implemented methods;
comparing the class score to a threshold; and
determining that the class score is greater than the threshold. 32. The computer implemented method of claim 28 wherein the method signature includes a method name and a number and type of the method's parameters. 33. The computer implemented method of claim 31 wherein the assigning a weight comprises:
determining a number of occurrences of the interface method signature within a group that includes at least one class or interface other than the first class, wherein one or more of the classes implements one or more interfaces; and
assigning a weight to the interface method signature, wherein the weight assigned to the interface method signature is inversely related to the number of occurrences of the interface method signature within the group. 34. The computer implemented method of claim 29 wherein the wherein the first class includes byte-code, wherein the instrumenting the first class comprises injecting additional byte-code into the first class in order to record or monitor an execution of the first class. | According to an example implementation, a computer-readable storage medium, computer-implemented method and a system are provided to receive a first class, the first class indirectly implementing a first interface, wherein the first class extends a second class that directly implements the first interface, identify one or more directly implemented methods within the first class, determine a method signature for one or more of the directly implemented methods, estimate that the first class indirectly implements the first interface based on the method signatures for the one or more directly implemented methods, and instrument the first class based on the estimating that the first class indirectly implements the first interface.1-20. (canceled) 21. A non-transitory computer-readable storage medium comprising computer-readable instructions stored thereon that, when executed, are configured to cause a processor to at least:
receive a first class, the first class indirectly implementing a first interface, wherein the first class extends a second class that directly implements the first interface; identify one or more directly implemented methods within the first class; determine a method signature for one or more of the directly implemented methods, each method signature including a method name and one or more method parameters for a method; and estimate that the first class indirectly implements the first interface based on the method signatures for the one or more directly implemented methods. 22. The computer-readable storage medium of claim 21 wherein the instructions comprise instructions, when executed, are configured to further cause the processor to:
instrument the first class based on the estimating that the first class indirectly implements the first interface. 23. The computer-readable storage medium of claim 21 wherein the instructions configured to cause the processor to estimate comprise instructions, when executed, are configured to cause the processor to at least:
compare the method signature for the one or more directly implemented methods to the method signatures of the interface methods; and
identify one or more of the directly implemented methods having a method signature that matches one of the interface method signatures. 24. The computer-readable storage medium of claim 21 wherein the instructions configured to cause the processor to estimate comprise instructions, when executed, are configured to cause the processor to at least:
determine a method signature for each of a plurality of interface methods;
assign a weight to each of the plurality of interface method signatures of the first interface based on a number of occurrences of the associated interface method signature within a group of classes and interfaces;
compare the method signature for the one or more directly implemented methods to the method signatures of the interface methods;
identify one or more of the directly implemented methods having a method signature that matches one of the interface method signatures;
determine a class score for the first class with respect to the first interface as a sum of the weights of each of the interface method signatures that match one of the method signatures of the directly implemented methods;
compare the class score to a threshold; and
determine that the class score is greater than the threshold. 25. The computer-readable storage medium of claim 21 wherein the method signature includes a method name and a number and type of the method's parameters. 26. The computer-readable storage medium of claim 24 wherein the instructions configured to cause the processor to assign a weight comprise instructions, when executed, are configured to cause the processor to:
determine a number of occurrences of the interface method signature within a group that includes at least one class or interface other than the first class, wherein one or more of the classes implements one or more interfaces; and
assign a weight to the interface method signature, wherein the weight assigned to the interface method signature is inversely related to the number of occurrences of the interface method signature within the group. 27. The computer-readable storage medium of claim 22 wherein the first class includes byte-code, wherein the instructions configured to cause the processor to instrument the first class comprise instructions, when executed, are configured to cause the processor to inject additional byte-code into the first class in order to record or monitor an execution of the first class. 28. A computer implemented method performed by a processor comprising:
receiving a first class, the first class indirectly implementing a first interface, wherein the first class extends a second class that directly implements the first interface; identifying one or more directly implemented methods within the first class; determining a method signature for one or more of the directly implemented methods, each method signature including a method name and one or more method parameters for a method; and estimating that the first class indirectly implements the first interface based on the method signatures for the one or more directly implemented methods. 29. The computer implemented method of claim 28 and further comprising:
instrumenting the first class based on the estimating that the first class indirectly implements the first interface. 30. The computer implemented method of claim 28 wherein the estimating comprises:
comparing the method signature for the one or more directly implemented methods to the method signatures of the interface methods; and
identifying one or more of the directly implemented methods having a method signature that matches one of the interface method signatures. 31. The computer implemented method of claim 28 wherein the estimating comprises:
determining a method signature for each of a plurality of interface methods;
assigning a weight to each of the plurality of interface method signatures of the first interface based on a number of occurrences of the associated interface method signature within a group of classes and interfaces;
comparing the method signature for the one or more directly implemented methods to the method signatures of the interface methods;
identifying one or more of the directly implemented methods having a method signature that matches one of the interface method signatures;
determining a class score for the first class with respect to the first interface as a sum of the weights of each of the interface method signatures that match one of the method signatures of the directly implemented methods;
comparing the class score to a threshold; and
determining that the class score is greater than the threshold. 32. The computer implemented method of claim 28 wherein the method signature includes a method name and a number and type of the method's parameters. 33. The computer implemented method of claim 31 wherein the assigning a weight comprises:
determining a number of occurrences of the interface method signature within a group that includes at least one class or interface other than the first class, wherein one or more of the classes implements one or more interfaces; and
assigning a weight to the interface method signature, wherein the weight assigned to the interface method signature is inversely related to the number of occurrences of the interface method signature within the group. 34. The computer implemented method of claim 29 wherein the first class includes byte-code, wherein the instrumenting the first class comprises injecting additional byte-code into the first class in order to record or monitor an execution of the first class. | 2,100 |
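Claims 24 and 31 of the row above weight each interface method signature inversely to how often it occurs across a group of classes and interfaces, then sum the weights of matched signatures into a class score compared against a threshold. A hedged Python sketch of that scoring scheme follows; the tuple signature representation and the exact 1/count weighting are my assumptions (the claims only require a weight inversely related to the occurrence count).

```python
from collections import Counter

def signature_weights(interface_sigs, corpus_sigs):
    # Rarer signatures across the group carry more weight: the weight is
    # inversely related to the signature's occurrence count, per the claims.
    counts = Counter(corpus_sigs)
    return {sig: 1.0 / max(counts[sig], 1) for sig in interface_sigs}

def estimate_implements(class_sigs, interface_sigs, corpus_sigs, threshold):
    # Class score = sum of weights of interface signatures the class
    # declares directly; the estimate is positive when score > threshold.
    weights = signature_weights(interface_sigs, corpus_sigs)
    declared = set(class_sigs)
    score = sum(w for sig, w in weights.items() if sig in declared)
    return score > threshold, score
```

Here a signature is modeled as a `(name, parameter_types)` tuple, so `("accept", ("Visitor",))` matches only a directly implemented method with the same name and parameter types, echoing claim 25's "method name and a number and type of the method's parameters."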
6,616 | 6,616 | 15,998,486 | 2,177 | An operating method of an electronic device is provided. The method includes selecting an area corresponding to at least one field of a page displayed through a display of the electronic device on the basis of an input; confirming an attribute corresponding to the at least one field among a plurality of attributes including a first attribute and a second attribute; and selectively providing a content corresponding to the attribute among at least one content including a first content and a second content according to the confirmed attribute. | 1. An electronic device comprising:
a display; at least one memory storing instructions; and at least one processor configured to execute the instructions to: display a text input area via the display; receive, from an input tool spaced apart from the display, a hovering input on the text input area; in response to the reception, display a visual affordance, for indicating that a handwritten input is receivable for input to the text input area, wherein the visual affordance is at least partially superimposed on the text input area; receive a touch input on the visual affordance; and in response to receiving the touch input, display another text input area capable of receiving the handwritten input. 2. The electronic device of claim 1, wherein the other text input area is disposed below the text input area. 3. The electronic device of claim 1, wherein the at least one processor is further configured to execute the instructions to:
receive the handwritten input through the other text input area; and display, in the text input area, a text which is a result of recognition of the handwritten input. 4. The electronic device of claim 3, wherein the at least one processor is further configured to execute the instructions to:
display, in the text input area, the text with a designated font based on the recognition of the handwritten input. 5. The electronic device of claim 3, wherein the at least one processor is further configured to execute the instructions to:
in response to receiving the handwritten input through the other text input area, display a path of the handwritten input in the other text input area. 6. The electronic device of claim 5, wherein the at least one processor is further configured to execute the instructions to:
auto-scroll the displayed path of the handwritten input in a horizontal direction. 7. The electronic device of claim 1, wherein the visual affordance is maintained while the hovering input is maintained over the display. 8. An electronic device comprising:
a display; at least one memory storing instructions; and at least one processor configured to execute the instructions to: display, with at least one message received from another electronic device, an input area for writing a message to be transmitted to the other electronic device; receive, while displaying the input area with the at least one message, an input for enclosing an image in the message; in response to receiving the input, display, with the at least one message and the input area, at least portion of a plurality of thumbnail images for respectively representing a plurality of images; receive, while displaying the at least portion of the plurality of thumbnail images, another input for selecting a thumbnail image among the plurality of thumbnail images; and in response to receiving the other input, display the selected thumbnail image within the input area that is enlarged. 9. The electronic device of claim 8, wherein the at least portion of the plurality of thumbnail images, which are displayed in response to receiving the input, correspond to most recently captured images among the plurality of images. 10. The electronic device of claim 8, wherein the at least one processor is further configured to execute the instructions to:
in response to receiving a flick input on the at least portion of the plurality of thumbnail images, auto-scroll the plurality of thumbnail images. 11. An electronic device comprising:
a display; at least one memory storing instructions; and at least one processor configured to execute the instructions to: display at least portion of a plurality of thumbnail images for respectively representing a plurality of images; receive a hovering input over a thumbnail image among the plurality of thumbnail images; and in response to receiving the hovering input, display the thumbnail image that is enlarged as superimposed on at least portion of the plurality of thumbnail images, with at least one executable object for processing the thumbnail image. 12. The electronic device of claim 11, wherein the thumbnail image that is enlarged is maintained while the hovering input is maintained over the display. 13. The electronic device of claim 11, wherein the at least portion of the plurality of thumbnail images is maintained on the display independently from receiving the hovering input. 14. A method of an electronic device comprising:
displaying a text input area via a display of the electronic device; receiving, from an input tool spaced apart from the display, a hovering input on the text input area; in response to the reception, displaying a visual affordance, for indicating that a handwritten input is receivable for input to the text input area, wherein the visual affordance is at least partially superimposed on the text input area; receiving a touch input on the visual affordance; and in response to receiving the touch input, displaying another text input area capable of receiving the handwritten input. 15. The method of claim 14, wherein the other text input area is disposed below the text input area. 16. The method of claim 14, further comprising:
receiving the handwritten input through the other text input area; and displaying, in the text input area, a text which is a result of recognition of the handwritten input. 17. The method of claim 16, wherein the displaying the text comprises displaying, in the text input area, the text with a designated font based on the recognition of the handwritten input. 18. The method of claim 16, further comprising:
in response to receiving the handwritten input through the other text input area, displaying a path of the handwritten input in the other text input area. 19. The method of claim 18, further comprising:
auto-scrolling the displayed path of the handwritten input in a horizontal direction. 20. The method of claim 14, wherein the visual affordance is maintained while the hovering input is maintained over the display. | An operating method of an electronic device is provided. The method includes selecting an area corresponding to at least one field of a page displayed through a display of the electronic device on the basis of an input; confirming an attribute corresponding to the at least one field among a plurality of attributes including a first attribute and a second attribute; and selectively providing a content corresponding to the attribute among at least one content including a first content and a second content according to the confirmed attribute. 1. An electronic device comprising:
a display; at least one memory storing instructions; and at least one processor configured to execute the instructions to: display a text input area via the display; receive, from an input tool spaced apart from the display, a hovering input on the text input area; in response to the reception, display a visual affordance, for indicating that a handwritten input is receivable for input to the text input area, wherein the visual affordance is at least partially superimposed on the text input area; receive a touch input on the visual affordance; and in response to receiving the touch input, display another text input area capable of receiving the handwritten input. 2. The electronic device of claim 1, wherein the other text input area is disposed below the text input area. 3. The electronic device of claim 1, wherein the at least one processor is further configured to execute the instructions to:
receive the handwritten input through the other text input area; and display, in the text input area, a text which is a result of recognition of the handwritten input. 4. The electronic device of claim 3, wherein the at least one processor is further configured to execute the instructions to:
display, in the text input area, the text with a designated font based on the recognition of the handwritten input. 5. The electronic device of claim 3, wherein the at least one processor is further configured to execute the instructions to:
in response to receiving the handwritten input through the other text input area, display a path of the handwritten input in the other text input area. 6. The electronic device of claim 5, wherein the at least one processor is further configured to execute the instructions to:
auto-scroll the displayed path of the handwritten input in a horizontal direction. 7. The electronic device of claim 1, wherein the visual affordance is maintained while the hovering input is maintained over the display. 8. An electronic device comprising:
a display; at least one memory storing instructions; and at least one processor configured to execute the instructions to: display, with at least one message received from another electronic device, an input area for writing a message to be transmitted to the other electronic device; receive, while displaying the input area with the at least one message, an input for enclosing an image in the message; in response to receiving the input, display, with the at least one message and the input area, at least portion of a plurality of thumbnail images for respectively representing a plurality of images; receive, while displaying the at least portion of the plurality of thumbnail images, another input for selecting a thumbnail image among the plurality of thumbnail images; and in response to receiving the other input, display the selected thumbnail image within the input area that is enlarged. 9. The electronic device of claim 8, wherein the at least portion of the plurality of thumbnail images, which are displayed in response to receiving the input, correspond to most recently captured images among the plurality of images. 10. The electronic device of claim 8, wherein the at least one processor is further configured to execute the instructions to:
in response to receiving a flick input on the at least portion of the plurality of thumbnail images, auto-scroll the plurality of thumbnail images. 11. An electronic device comprising:
a display; at least one memory storing instructions; and at least one processor configured to execute the instructions to: display at least portion of a plurality of thumbnail images for respectively representing a plurality of images; receive a hovering input over a thumbnail image among the plurality of thumbnail images; and in response to receiving the hovering input, display the thumbnail image that is enlarged as superimposed on at least portion of the plurality of thumbnail images, with at least one executable object for processing the thumbnail image. 12. The electronic device of claim 11, wherein the thumbnail image that is enlarged is maintained while the hovering input is maintained over the display. 13. The electronic device of claim 11, wherein the at least portion of the plurality of thumbnail images is maintained on the display independently from receiving the hovering input. 14. A method of an electronic device comprising:
displaying a text input area via a display of the electronic device; receiving, from an input tool spaced apart from the display, a hovering input on the text input area; in response to the reception, displaying a visual affordance, for indicating that a handwritten input is receivable for input to the text input area, wherein the visual affordance is at least partially superimposed on the text input area; receiving a touch input on the visual affordance; and in response to receiving the touch input, displaying another text input area capable of receiving the handwritten input. 15. The method of claim 14, wherein the other text input area is disposed below the text input area. 16. The method of claim 14, further comprising:
receiving the handwritten input through the other text input area; and displaying, in the text input area, a text which is a result of recognition of the handwritten input. 17. The method of claim 16, wherein the displaying the text comprises displaying, in the text input area, the text with a designated font based on the recognition of the handwritten input. 18. The method of claim 16, further comprising:
in response to receiving the handwritten input through the other text input area, displaying a path of the handwritten input in the other text input area. 19. The method of claim 18, further comprising:
auto-scrolling the displayed path of the handwritten input in a horizontal direction. 20. The method of claim 14, wherein the visual affordance is maintained while the hovering input is maintained over the display. | 2,100
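Claims 11 to 13 of the thumbnail patent above describe a small piece of interaction logic: a hovering input superimposes an enlarged thumbnail together with at least one executable object, the overlay persists only while the hover persists, and the underlying thumbnail grid is left unchanged. The sketch below models that state logic; the class, method, and action names are invented for illustration and are not from the claims.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ThumbnailGrid:
    """Displayed portion of a plurality of thumbnail images (claim 11)."""
    thumbnails: List[str]
    enlarged: Optional[str] = None                      # thumbnail currently superimposed enlarged
    actions: List[str] = field(default_factory=list)    # executable objects shown with it

    def hover_enter(self, thumbnail: str) -> None:
        # In response to the hovering input, display the enlarged thumbnail
        # superimposed on the grid, with at least one executable object.
        if thumbnail in self.thumbnails:
            self.enlarged = thumbnail
            self.actions = ["share", "delete"]  # illustrative executable objects

    def hover_exit(self) -> None:
        # The enlarged view is maintained only while the hovering input is
        # maintained (claim 12); ending the hover dismisses the overlay.
        self.enlarged = None
        self.actions = []

grid = ThumbnailGrid(thumbnails=["t1", "t2", "t3"])
grid.hover_enter("t2")
assert grid.enlarged == "t2" and grid.actions
assert grid.thumbnails == ["t1", "t2", "t3"]  # underlying grid unchanged (claim 13)
grid.hover_exit()
assert grid.enlarged is None
```

The overlay state is kept separate from the thumbnail list, which is what lets claim 13's requirement (the displayed portion is maintained independently of the hover) fall out of the design for free.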
6,617 | 6,617 | 16,644,243 | 2,128 | Decision engines are deployed in a variety of fields, from medical diagnostics to financial applications such as lending. Typically, solutions involve rule engines or artificial intelligence (AI) to assist in making a decision based on transactional data. However, rules can be complicated to maintain and may conflict with one another, and AI does not offer the transparency that is required, for example, in many applications of the financial industry. The proposed solution discussed herein includes a rule engine which processes transactions based on weighted rules. The system is trained from a training transaction set. In some embodiments, the system may mine the training transaction set for rules, while in other embodiments the rules may be predefined and assigned weights by the system. | 1. A method for probabilistic data classification in a rule engine, the method comprising:
receiving a training data set, the data set comprising a plurality of training transactions, each transaction comprising: a plurality of attribute elements, and a real output element; assigning a weight value to each of a plurality of rules of the rule engine, each rule comprising an attribute element, and one or more rules further comprise: another element, and a relation between the attribute element and the another element; receiving from the rule engine a predicted output for each case of at least a first portion of the plurality of training transactions, in response to providing the at least a first portion of the plurality of training transactions to the rule engine; determining an objective function based on a predicted output and a corresponding real output; adjusting the weight value of at least a rule of the plurality of rules to minimize or maximize the objective function; receiving from the rule engine a predicted output for each case of a second portion of the plurality of training transactions; determining an objective function based on a predicted output and a corresponding real output; and sending a notification to indicate that the rule engine is operative, in response to the objective function reaching a threshold value. 2. (canceled) 3. The method of claim 1, further comprising:
sending a notification to indicate that the rule engine is inoperative, in response to the objective function being outside of the threshold value. 4. The method of claim 1, wherein the objective function is outside of the threshold value, further comprising:
removing one or more transactions from the second portion; associating the removed one or more transactions with the first portion of training transactions; and providing the updated first portion of training transactions to the rule engine. 5. The method of claim 1, wherein the another element is: an attribute, or an output. 6. The method of claim 1, further comprising:
removing a rule from the plurality of rules, in response to the weight of the rule being within a threshold. 7. The method of claim 1, further comprising:
determining the impact of a rule; and removing the rule from the plurality of rules, in response to the impact being below a threshold. 8. The method of claim 7, wherein determining the impact further comprises:
adjusting the weight of the rule; determining a rate of change of the output of a transaction based on a plurality of weights of the rule; and generating an impact value, based on the rate of change. 9. The method of claim 1, wherein the weight value is any of: static, dynamic, or adaptive. 10. The method of claim 1, further comprising:
generating a rule based on the plurality of training transactions. 11. The method of claim 10, wherein generating a rule further comprises:
determining a first frequency of a first attribute in the plurality of training transactions; determining a second frequency of a second attribute, in response to the first frequency exceeding a first threshold; and generating a rule based on the first attribute and the second attribute, in response to the second frequency exceeding a second threshold. 12. The method of claim 10, further comprising:
receiving a rule as an input from a user. 13. The method of claim 1, wherein one or more weights are adjusted until the error value is below a first threshold. 14. The method of claim 1, further comprising:
receiving a new transaction; applying one or more rules of the plurality of rules to the transaction; and generating an outcome based on a portion of the rules of the one or more rules. 15. A probabilistic data classification rule engine system, the system comprising:
a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: receive a training data set, the data set comprising a plurality of training transactions, each transaction comprising: a plurality of attribute elements, and a real output element; assign a weight value to each of a plurality of rules of the rule engine, each rule comprising an attribute element, and one or more rules further comprise: another element, and a relation between the attribute element and the another element; generate a predicted output for each case of at least a first portion of the plurality of training transactions, in response to providing the at least a first portion of the plurality of training transactions to the rule engine; determine an objective function based on a first predicted output and a corresponding first real output; adjust the weight value of at least a rule of the plurality of rules to minimize or maximize the objective function; generate a predicted output for each case of a second portion of the plurality of training transactions; determine an objective function based on a predicted output and a corresponding real output; and send a notification to indicate that the rule engine is operative, in response to the objective function reaching a threshold value. 16. (canceled) 17. The system of claim 15, wherein the system is further configured to:
send a notification to indicate that the rule engine is inoperative, in response to the objective function being outside of the threshold value. 18. The system of claim 15, wherein the objective function is outside of the threshold value, and the system is further configured to:
remove one or more transactions from the second portion; associate the removed one or more transactions with the first portion of training transactions; and provide the updated first portion of training transactions to the rule engine. 19. The system of claim 15, wherein the another element is: an attribute, or an output. 20. The system of claim 15, wherein the system is further configured to:
remove a rule from the plurality of rules, in response to the weight of the rule being within a threshold. 21. The system of claim 15, wherein the system is further configured to:
determine the impact of a rule; and remove the rule from the plurality of rules, in response to the impact being below a threshold. 22. The system of claim 21, wherein the system is further configured to determine the impact by:
adjusting the weight of the rule; determining a rate of change of the output of a transaction based on a plurality of weights of the rule; and generating an impact value, based on the rate of change. 23. The system of claim 15, wherein the weight value is any of: static, dynamic, or adaptive. 24. The system of claim 15, wherein the system is further configured to:
generate a rule based on the plurality of training transactions. 25. The system of claim 24, wherein the system is further configured to generate a rule by:
determining a first frequency of a first attribute in the plurality of training transactions; determining a second frequency of a second attribute, in response to the first frequency exceeding a first threshold; and generating a rule based on the first attribute and the second attribute, in response to the second frequency exceeding a second threshold. 26. The system of claim 24, wherein the system is further configured to:
receive a rule as an input from a user. 27. The system of claim 15, wherein one or more weights are adjusted until the error value is below a first threshold. 28. The system of claim 15, wherein the system is further configured to:
receive a new transaction; apply one or more rules of the plurality of rules to the transaction; and generate an outcome based on a portion of the rules of the one or more rules. | 2,100
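The claims of application 16,644,243 above (notably claims 1, 14, and 15) outline a trainable weighted-rule engine: each rule fires on an attribute condition and carries a weight, an objective function compares predicted outputs with real outputs over a first portion of the training transactions, weights are adjusted to minimize that objective, and the engine is declared operative when the objective over a second portion reaches a threshold. The following is a hedged sketch of that flow; the weighted-sum predictor, the coordinate-wise weight adjustment, the thresholds, and the sample data are all illustrative choices, not the patent's prescribed procedure.

```python
# Transactions are (attributes, real_output) pairs; each rule tests one attribute.

def predicted_output(rules, weights, transaction):
    # The rule engine's predicted output: weighted sum of the rules that fire.
    return sum(w for rule, w in zip(rules, weights) if rule(transaction))

def objective(rules, weights, portion):
    # Objective function based on predicted outputs and corresponding real outputs.
    return sum((predicted_output(rules, weights, t) - y) ** 2 for t, y in portion)

def train(rules, weights, first, second, threshold=0.75, passes=20, step=0.25):
    for _ in range(passes):
        for i in range(len(weights)):
            base = objective(rules, weights, first)
            for delta in (step, -step):
                weights[i] += delta
                if objective(rules, weights, first) < base:
                    break  # keep the adjustment that lowers (minimizes) the objective
                weights[i] -= delta  # otherwise revert it
    # The engine is reported operative when the objective on the second
    # portion of the training transactions reaches the threshold value.
    return objective(rules, weights, second) <= threshold

rules = [lambda t: t["amount"] > 100, lambda t: t["country"] == "XY"]
weights = [0.0, 0.0]
data = [({"amount": 250, "country": "XY"}, 1),
        ({"amount": 50,  "country": "ZZ"}, 0),
        ({"amount": 300, "country": "ZZ"}, 1),
        ({"amount": 20,  "country": "XY"}, 1)]

operative = train(rules, weights, first=data[:2], second=data[2:])

# Applying the trained rules to a new transaction (claim 14), with an
# illustrative 0.5 decision threshold on the predicted output.
outcome = 1 if predicted_output(rules, weights, {"amount": 500, "country": "XY"}) >= 0.5 else 0
```

The greedy per-weight perturbation stands in for whatever optimizer an implementation would use; the point it illustrates is the claim's split between a first portion used to adjust weights and a second portion used only to decide whether the engine is operative.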
6,618 | 6,618 | 15,331,239 | 2,166 | System and method for constructing a hierarchical index table usable for matching a search sequence to reference data. The index table may be constructed to contain entries associated with an exhaustive list of all subsequences of a given length, wherein each entry contains the number and locations of matches of each subsequence in the reference data. The hierarchical index table may be constructed in an iterative manner, wherein entries for each lengthened subsequence are selectively and iteratively constructed based on the number of matches being greater than each of a set of respective thresholds. The hierarchical index table may be used to search for matches between a search sequence and reference data, and to perform misfit identification and characterization upon each respective candidate match. | 1. A method for matching a search sequence to reference data, the method comprising:
performing, by a computing device: a) storing reference data in a memory; b) creating a hierarchical index table based on the reference data, wherein said creating comprises creating a plurality of entries at a plurality of levels in the hierarchical index table, wherein for entries at each respective level n, where n is a non-zero positive integer, said creating comprises creating additional n+1 level entries for respective level n entries in the hierarchical index table in response to matching criteria of the respective level n entry being greater than a threshold; c) receiving input specifying a search sequence; and d) searching the reference data for matches of a subsection of the search sequence using the hierarchical index table. 2. The method of claim 1,
wherein said creating first level entries in the hierarchical index table is performed for each possible subsequence of the reference data having a first length; wherein said creating n+1 level entries in the hierarchical index table is performed for each nth level entry where the matching criteria of the respective level n entry is greater than a threshold, for each possible subsequence of the reference data having a respective length corresponding to the n+1 level. 3. The method of claim 2,
wherein said creating a respective entry in any respective level of the hierarchical index table is performed by:
searching for matches of the respective subsequence of the respective length in the reference data; and
storing information in the respective entry in a respective level of the hierarchical index table, wherein the information specifies a number of matches of the respective subsequence of a respective length in the reference data, wherein the information further specifies the location of each of the matches. 4. The method of claim 3,
wherein, for n+1 levels, said searching for matches of the respective subsequence of the n+1 level is performed at locations associated with the corresponding entry in the n level. 5. The method of claim 3, wherein the data indicating the number of matches associated with each entry is stored in a first data structure, and the data indicating the location of each of the matches associated with each entry is stored in a second data structure, wherein the first and second data structures are each comprised within the hierarchical index table. 6. The method of claim 1, further comprising:
for each respective n level entry, storing a pointer in memory that references n+1 level entries that correspond to the respective n level entry. 7. The method of claim 1, wherein the reference data comprises a reference genome and searching the reference data comprises aligning a short read (SR) to the reference genome. 8. A computer readable memory medium comprising program instructions for aligning a short read to a reference genome, wherein the program instructions are executable to:
a) store the reference genome in a memory; b) create a hierarchical index table based on the reference genome, wherein said creating comprises creating a plurality of entries at a plurality of levels in the hierarchical index table, wherein each respective entry contains information related to the locations in the reference genome of a sequence of base pairs associated with the respective entry, wherein for nonzero positive integers n, for entries at each respective level n, said creating comprises creating additional n+1 level entries for respective level n entries in the hierarchical index table in response to matching criteria of the respective level n entry being greater than a threshold; c) receiving input specifying a short read; and d) searching the reference genome for matches of a subsection of the short read using the hierarchical index table. 9. The memory medium of claim 8,
wherein said creating first level entries in the hierarchical index table is performed for each possible subsequence of the reference genome having a first length; wherein said creating n+1 level entries in the hierarchical index table is performed for each nth level entry where the matching criteria of the respective level n entry is greater than a threshold, for each possible subsequence of the reference genome having a respective length corresponding to the n+1 level. 10. The memory medium of claim 8,
wherein said creating a respective entry in any respective level of the hierarchical index table is performed by:
searching for matches of the respective subsequence of the respective length in the reference genome; and
storing information in the respective entry in a respective level of the hierarchical index table, wherein the information specifies a number of matches of the respective subsequence of a respective length in the reference genome, wherein the information further specifies the location of each of the matches. 11. The memory medium of claim 10,
wherein, for n+1 levels, said searching for matches of the respective subsequence of the n+1 level is performed at locations associated with the corresponding entry in the n level. 12. The memory medium of claim 10, wherein the data indicating the number of matches associated with each entry is stored in a first data structure, and the data indicating the location of each of the matches associated with each entry is stored in a second data structure, wherein the first and second data structures are each comprised within the hierarchical index table. 13. The memory medium of claim 8, further comprising:
for each respective n level entry, storing a pointer in memory that references n+1 level entries that correspond to the respective n level entry. 14. A method for matching a search sequence to reference data, the method comprising:
performing, by a computing device: a) storing the reference data in a memory; b) creating a hierarchical index table based on the reference data, comprising:
for each possible respective subsequence of the reference data having a first length,
i) creating a respective entry in a first level of the hierarchical index table by:
searching for matches of the respective subsequence of the first length in the reference data; and
storing information in the respective entry in a first level of the hierarchical index table, wherein the information specifies a number of matches of the respective subsequence of the first length in the reference data, wherein the information further specifies the location of each of the matches;
ii) comparing the number of matches in the respective first entry to a first threshold; and
for each respective entry in the first level having a number of matches greater than the first threshold, performing said creating a respective entry in a second level of the hierarchical index table for a second set of subsequences having a second length, wherein each respective entry in the first level is associated with its corresponding entry in the second level, wherein the second set of subsequences comprise each possible subsequence of the reference data having the second length, and wherein said searching for matches of the respective subsequence having the second length is performed at locations associated with the corresponding entry in the first level; and
c) receiving input specifying a search sequence; and d) searching the reference data for matches of a subsection of the search sequence using the hierarchical index table. 15. The method of claim 14, wherein the reference data and each subsequence are encoded numerically. 16. The method of claim 14, wherein said comparing and said creating a respective entry in a second level of the hierarchical index table comprises a second iteration, and wherein the method further comprises:
performing one or more additional iterations using one or more respective additional thresholds, lengths, sets of subsequences, and levels of the hierarchical index table. 17. The method of claim 14, wherein the reference data comprises a reference genome and searching the reference data comprises aligning a short read (SR) to the reference genome. 18. The method of claim 14, wherein the data indicating the number of locations associated with each entry is stored in a first data structure, and the data indicating the locations in the reference data associated with each entry is stored in a second data structure, wherein the first and second data structures are each comprised within the hierarchical index table. 19. The method of claim 14, wherein the association of the corresponding entries in the second level with the respective entry in the first level comprises information comprised within the respective entry in the first level, the information comprising:
link information that points to the corresponding entries in the second level; and the second length. 20. The method of claim 16, wherein each of the second and subsequent iterations further comprises:
for each respective subsequence found to have a number of matches not greater than the respective threshold, storing a STOP instruction in the entry associated with the respective subsequence, wherein the STOP instruction is usable by the method to prevent the creation of further entries based on the subsequence associated with the STOP instruction. 21. The method of claim 14, further comprising:
increasing the length of the subsection of the search sequence based on determining that a first entry in the index table associated with the search sequence is not a terminal entry of the index table; and looking up a second entry in the index table that matches the subsection of the search sequence of an increased length based on determining that the first entry is not a terminal entry of the index table. 22. The method of claim 14, further comprising:
evaluating each match found while searching the reference data, comprising:
for each match, determining misfits between the search sequence and the reference data based on identifying base errors between the search sequence and the reference data. 23. The method of claim 22, wherein said evaluating further comprises:
determining at least one indel in the search sequence, comprising:
determining an anchor position in the search sequence based on a number of lo-end and hi-end misfits of the search sequence;
determining length and type of the at least one indel using the anchor position;
determining a starting position of the at least one indel, comprising:
computing a first running error sum between the search sequence and the reference data at the match location;
computing a second running error sum between the search sequence and the reference data at an offset from the match location, wherein the offset is based on the type and length of the at least one indel; and
determining a starting location of the at least one indel based on a minimum of the first and second running error sums. | System and method for constructing a hierarchical index table usable for matching a search sequence to reference data. The index table may be constructed to contain entries associated with an exhaustive list of all subsequences of a given length, wherein each entry contains the number and locations of matches of each subsequence in the reference data. The hierarchical index table may be constructed in an iterative manner, wherein entries for each lengthened subsequence are selectively and iteratively constructed based on the number of matches being greater than each of a set of respective thresholds. The hierarchical index table may be used to search for matches between a search sequence and reference data, and to perform misfit identification and characterization upon each respective candidate match.1. A method for matching a search sequence to reference data, the method comprising:
performing, by a computing device: a) storing reference data in a memory; b) creating a hierarchical index table based on the reference data, wherein said creating comprises creating a plurality of entries at a plurality of levels in the hierarchical index table, wherein for entries at each respective level n, where n is a non-zero positive integer, said creating comprises creating additional n+1 level entries for respective level n entries in the hierarchical index table in response to matching criteria of the respective level n entry being greater than a threshold; c) receiving input specifying a search sequence; and d) searching the reference data for matches of a subsection of the search sequence using the hierarchical index table. 2. The method of claim 1,
wherein said creating first level entries in the hierarchical index table is performed for each possible subsequence of the reference data having a first length; wherein said creating n+1 level entries in the hierarchical index table is performed for each nth level entry where the matching criteria of the respective level n entry is greater than a threshold, for each possible subsequence of the reference data having a respective length corresponding to the n+1 level. 3. The method of claim 2,
wherein said creating a respective entry in in any respective level of the hierarchical index table is performed by:
searching for matches of the respective subsequence of the respective length in the reference data; and
storing information in the respective entry in a respective level of the hierarchical index table, wherein the information specifies a number of matches of the respective subsequence of a respective length in the reference data, wherein the information further specifies the location of each of the matches. 4. The method of claim 3,
wherein, for n+1 levels, said searching for matches of the respective subsequence of the n+1 level is performed at locations associated with the corresponding entry in the n level. 5. The method of claim 3, wherein the data indicating the number of matches associated with each entry is stored in a first data structure, and the data location of each of the matches associated with each entry is stored in a second data structure, wherein the first and second data structures are each comprised within the hierarchical index table. 6. The method of claim 1, further comprising:
for each respective n level entry, storing a pointer in memory that references n+1 level entries that correspond to the respective n level entry. 7. The method of claim 1, wherein the reference data comprises a reference genome and searching the reference data comprises aligning a short read (SR) to the reference genome. 8. A computer readable memory medium comprising program instructions for aligning a short read to a reference genome, wherein the program instructions are executable to:
a) store the reference genome in a memory; b) create a hierarchical index table based on the reference genome, wherein said creating comprises creating a plurality of entries at a plurality of levels in the hierarchical index table, wherein each respective entry contains information related to the locations in the reference genome of a sequence of base pairs associated with the respective entry, wherein for nonzero positive integers n, for entries at each respective level n, said creating comprises creating additional n+1 level entries for respective level n entries in the hierarchical index table in response to matching criteria of the respective level n entry being greater than a threshold; c) receive input specifying a short read; and d) search the reference genome for matches of a subsection of the short read using the hierarchical index table. 9. The memory medium of claim 8,
wherein said creating first level entries in the hierarchical index table is performed for each possible subsequence of the reference genome having a first length; wherein said creating n+1 level entries in the hierarchical index table is performed for each nth level entry where the matching criteria of the respective level n entry is greater than a threshold, for each possible subsequence of the reference genome having a respective length corresponding to the n+1 level. 10. The memory medium of claim 8,
wherein said creating a respective entry in any respective level of the hierarchical index table is performed by:
searching for matches of the respective subsequence of the respective length in the reference data; and
storing information in the respective entry in a respective level of the hierarchical index table, wherein the information specifies a number of matches of the respective subsequence of a respective length in the reference genome, wherein the information further specifies the location of each of the matches. 11. The memory medium of claim 10,
wherein, for n+1 levels, said searching for matches of the respective subsequence of the n+1 level is performed at locations associated with the corresponding entry in the n level. 12. The memory medium of claim 10, wherein the data indicating the number of matches associated with each entry is stored in a first data structure, and the data location of each of the matches associated with each entry is stored in a second data structure, wherein the first and second data structures are each comprised within the hierarchical index table. 13. The memory medium of claim 8, further comprising:
for each respective n level entry, storing a pointer in memory that references n+1 level entries that correspond to the respective n level entry. 14. A method for matching a search sequence to reference data, the method comprising:
performing, by a computing device: a) storing the reference data in a memory; b) creating a hierarchical index table based on the reference data, comprising:
for each possible respective subsequence of the reference data having a first length,
i) creating a respective entry in a first level of the hierarchical index table by:
searching for matches of the respective subsequence of the first length in the reference data; and
storing information in the respective entry in a first level of the hierarchical index table, wherein the information specifies a number of matches of the respective subsequence of the first length in the reference data, wherein the information further specifies the location of each of the matches;
ii) comparing the number of matches in the respective first entry to a first threshold; and
for each respective entry in the first level having a number of matches greater than the first threshold, performing said creating a respective entry in a second level of the hierarchical index table for a second set of subsequences having a second length, wherein each respective entry in the first level is associated with its corresponding entry in the second level, wherein the second set of subsequences comprise each possible subsequence of the reference data having the second length, and wherein said searching for matches of the respective subsequence having the second length is performed at locations associated with the corresponding entry in the first level; and
c) receiving input specifying a search sequence; and d) searching the reference data for matches of a subsection of the search sequence using the hierarchical index table. 15. The method of claim 14, wherein the reference data and each subsequence are encoded numerically. 16. The method of claim 14, wherein said comparing and said creating a respective entry in a second level of the hierarchical index table comprises a second iteration, and wherein the method further comprises:
performing one or more additional iterations using one or more respective additional thresholds, lengths, sets of subsequences, and levels of the hierarchical index table. 17. The method of claim 14, wherein the reference data comprises a reference genome and searching the reference data comprises aligning a short read (SR) to the reference genome. 18. The method of claim 14, wherein the data indicating the number of locations associated with each entry is stored in a first data structure, and the data indicating the locations in the reference data associated with each entry is stored in a second data structure, wherein the first and second data structures are each comprised within the hierarchical index table. 19. The method of claim 14, wherein the association of the corresponding entries in the second level with the respective entry in the first level comprises information comprised within the respective entry in the first level, the information comprising:
link information that points to the corresponding entries in the second level; and the second length. 20. The method of claim 16, wherein each of the second and subsequent iterations further comprises:
for each respective subsequence found to have a number of matches not greater than the respective threshold, storing a STOP instruction in the entry associated with the respective subsequence, wherein the STOP instruction is usable by the method to prevent the creation of further entries based on the subsequence associated with the STOP instruction. 21. The method of claim 14, further comprising:
increasing the length of the subsection of the search sequence based on determining that a first entry in the index table associated with the search sequence is not a terminal entry of the index table, looking up a second entry in the index table that matches the subsection of the search sequence of an increased length based on determining that the first entry is not a terminal entry of the index table. 22. The method of claim 14, further comprising:
evaluating each match found while searching the reference data, comprising:
for each match, determining misfits between the search sequence and the reference data based on identifying base errors between the search sequence and the reference data. 23. The method of claim 22, wherein said evaluating further comprises:
determining at least one indel in the search sequence, comprising:
determining an anchor position in the search sequence based on a number of lo-end and hi-end misfits of the search sequence;
determining length and type of the at least one indel using the anchor position;
determining a starting position of the at least one indel, comprising:
computing a first running error sum between the search sequence and the reference data at the match location;
computing a second running error sum between the search sequence and the reference data at an offset from the match location, wherein the offset is based on the type and length of the at least one indel; and
determining a starting location of the at least one indel based on a minimum of the first and second running error sums. | 2,100 |
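The multi-level indexing scheme of claims 1-4 and 14 can be illustrated with a short sketch. This is not the patent's implementation; the function names, the dictionary-based table, and the toy parameters are assumptions made for illustration. First-level entries cover every subsequence of a base length; an entry whose match count exceeds the threshold spawns longer n+1 level entries, and the longer search is confined to the parent entry's locations, as in claim 4. The lookup follows claim 21's idea of lengthening the probed subsection while a longer entry exists.

```python
# Illustrative sketch of the hierarchical index table of claims 1-4 and 14.
# The dict-of-lists layout and all names are assumptions, not the patent's code.

def build_hierarchical_index(reference, base_len, threshold, max_level):
    """Index every subsequence of base_len; extend an entry to the next
    level only when its match count exceeds the threshold (claim 1)."""
    level = {}
    for i in range(len(reference) - base_len + 1):
        level.setdefault(reference[i:i + base_len], []).append(i)
    index = dict(level)
    length = base_len
    for _ in range(max_level - 1):
        longer = {}
        for sub, locations in level.items():
            if len(locations) > threshold:        # matching criteria exceeded
                for loc in locations:             # claim 4: search only at the
                    ext = reference[loc:loc + length + 1]  # parent's locations
                    if len(ext) == length + 1:
                        longer.setdefault(ext, []).append(loc)
        index.update(longer)
        level, length = longer, length + 1
    return index

def lookup(index, search_sequence, base_len):
    """Per claim 21: lengthen the probed subsection while a longer
    entry exists, then return the match locations of the last hit."""
    length = base_len
    matches = index.get(search_sequence[:length])
    while search_sequence[:length + 1] in index:
        length += 1
        matches = index[search_sequence[:length]]
    return matches
```

With reference "ACGTACGTAC", base length 2 and threshold 2, only the entry for "AC" (three matches) is extended, so a lookup of "ACGT" resolves through the level-2 entry "ACG" and returns its two locations.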
6,619 | 6,619 | 15,828,054 | 2,117 | A system and approach for developing a periodic water usage profile and demand for controlling a water heater. A mode may be selected for demand for a certain amount of water of a particular temperature range to be available for use from the water heater. Data on hot water usage may be collected and the usage profile and demand may be calculated from the data. The water heater may be programmed to operate in a certain fashion based on the usage profile and demand. A control knob may be on the water heater control to select a particular demand. Control of the water heater may be operated from a remote device connected in a wireless or wired fashion. An optimization program may be implemented in the control of the water heater for achieving one or more beneficial goals related to water heater performance and hot water production. | 1. A communication mechanism comprising:
a smart device; and a control device connected to an appliance having a set of water demand levels; and wherein: control of the appliance is effected with signals between the smart device and the control device; the control device comprises optimization software for the appliance; and a basis for power for the appliance is selected from a group consisting of electricity, natural gas, propane, oil, kerosene, coal, and wood; the optimization software comprises usage pattern based optimization configured to: obtain a water usage profile; obtain one or more items selected from a group consisting of reduced operating costs of the appliance, prognostics for performance over time, maintenance alarms, performance optimization alerts, and demand response management for load shedding; determine a set of temperature set points for the set of water demand levels based on the water usage profile and the one or more items, each water demand level having at least one temperature set point; obtain a selected water demand level from the set of water demand levels from the appliance; and adjust a temperature of water to the temperature set point for the selected water demand level; the appliance is a water heater; and the control device is configured to enable the optimization software to update the set of temperature set points for the set of water demand levels based on user input to the water usage profile, monitored water usage pattern, and changes in the one or more items. 2. The mechanism of claim 1, wherein:
the control device comprises a communication module that is powered by a source selected from a group consisting of a battery, a capacitor, a line power outlet, an appliance control power outlet, a solar cell, and a flame/heat/thermo cell; and the appliance is powered by one or more sources selected from a group consisting of a line power outlet, thermopiles, solar panels, wind generators, rechargeable batteries, and energy harvesting systems. 3. The mechanism of claim 1, wherein a smart device is selected from a group consisting of an e-reader, a tablet computer, personal computer (PC), laptop, notebook, tablet, personal digital assistant (PDA), wireless router, and smart phone. 4. The mechanism of claim 1, wherein the control device has a wireless connection with the appliance. 5. The mechanism of claim 1, wherein the control device has a wire connection with the appliance. 6. The mechanism of claim 1, wherein the control device is embedded in a control unit of the appliance. 7. The mechanism of claim 1, wherein the smart device can control two or more appliances with two or more control devices connected to the two or more appliances, respectively. 8. The mechanism of claim 1, wherein set points of the appliance are changeable with the smart device via the control device. 9. The mechanism of claim 1, wherein the control device can interface with a thermostat to perform a function with the smart device, control the set of temperature set points of the appliance, to read a home heating and cooling schedule on another smart device and apply the home heating and cooling schedule to an appliance usage profile. 10. The mechanism of claim 1, wherein the smart device can read settings of a thermostat, and settings of the appliance that impact hot water demand, and apply the settings to a schedule and the water usage profile of the appliance. 11. 
The mechanism of claim 1, further comprising a control knob for selecting a level amount of hot water demand or temperature of hot water. 12. The mechanism of claim 1, further comprising:
one or more accessories connected to the appliance; and wherein the one or more accessories have communications for one or more items selected from a group consisting of water shutoff valves, fuel valves, stand alone man-machine interface (MMI), and power switches; and the communications are effected by one or more items selected from a group consisting of relay outputs, transistor outputs, RF outputs and light outputs. 13. A method for controlling a water heater comprising:
creating a periodic water usage profile from water usage and temperature data from a water heater with a profiling program; loading the periodic water usage profile to a control for a water heater; selecting a mode of demand, at the control for the water heater, for a certain amount of water within a particular temperature range to be available for use from the water heater; creating a learning program having an enablement option for an update of the periodic water usage profile, water temperature and mode of demand for water from the water heater; and loading the update of the periodic water usage profile, water temperature and mode of demand for water to the control for the water heater device; and wherein a basis for power for the water heater is selected from a group consisting of electricity, natural gas, propane, oil, kerosene, coal, and wood. 14. The method of claim 13, wherein:
if the enablement option of the learning program is engaged, then a monitoring of water usage, temperature and demand for water from the water heater occurs for X days; and an update of the periodic water usage profile, water temperature and mode of demand for water based on the monitoring for X days is loaded to the control for the water heater device. 15. The method of claim 13, wherein if an enablement option of learning program is not engaged, then the water heater operates according to a predetermined program for one or more items selected from a group consisting of water usage and water temperature. 16. The method of claim 14, further comprising:
collecting data related to water usage, temperature and demand; and calculating statistics for usage, demand and adjustment over time; and wherein: a daily usage profile and margin of error are determined and updated; a weekly usage routine for day by day usage pattern is determined and updated; and more usage increases a confidence level in the daily usage profile and weekly usage routine. 17. The method of claim 13, wherein if the basis for power for the water heater is electricity, the water heater benefits from a flexibility of having one, two or more heating elements being selected to be energized. 18. A communication system comprising:
a control device connected to an appliance having a set of water demand levels; and a control knob; and wherein: control of the appliance is effected with signals between the control knob and the control device; the control device comprises optimization software for the appliance; a basis for power for the appliance is selected from a group consisting of electricity, natural gas, propane, oil, kerosene, coal, and wood; and the optimization software comprises usage pattern based optimization configured to:
obtain a water usage profile;
obtain one or more items selected from a group consisting of reduced operating costs of the appliance, prognostics for performance over time, maintenance alarms, performance optimization alerts, and demand response management for load shedding;
determine a set of temperature set points for the set of water demand levels based on the water usage profile and the one or more items, each water demand level having at least one temperature set point;
obtain a selected water demand level from the set of water demand levels from the control knob;
adjust a temperature of water to the temperature set point for the selected water demand level;
the appliance is a water heater; and
the control device is configured to enable the optimization software to update the set of temperature set points for the set of water demand levels based on user input to the water usage profile, monitored water usage pattern, and changes in the one or more items. 19. The system of claim 18, wherein the optimization software has a learning option that allows it to update the set of temperature set points for the set of water demand levels based on the monitored water usage pattern. 20. The system of claim 18, further comprising:
one or more accessories connected to the appliance; and wherein: the one or more accessories have communications for one or more items selected from a group consisting of water shutoff valves, fuel valves, stand alone man-machine interface (MMI), and power switches; the control device comprises a communication module that is powered by a source selected from a group consisting of a battery, a capacitor, a line power outlet, an appliance control power outlet, a solar cell, and a flame/heat/thermo cell; and the appliance is powered by one or more sources selected from a group consisting of a line power outlet, thermopiles, solar panels, wind generators, rechargeable batteries, and energy harvesting systems.
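Claims 14 and 16 describe monitoring water usage for X days, deriving a daily usage profile with a margin of error, and raising confidence as more usage accumulates. A minimal sketch of that profiling step follows; the sample format, the plain-statistics approach, and all names are assumptions made for illustration, since the patent does not specify how the statistics are computed.

```python
# Hypothetical sketch of the daily usage profile and margin of error of
# claims 14 and 16; the patent's profiling program is not specified, so the
# standard-error calculation here is an illustrative assumption.
from collections import defaultdict

def daily_usage_profile(samples):
    """samples: (hour_of_day, liters_drawn) pairs monitored over X days.
    Returns {hour: (mean_draw, margin_of_error)}; the margin shrinks as
    more usage is observed, raising confidence as in claim 16."""
    by_hour = defaultdict(list)
    for hour, liters in samples:
        by_hour[hour].append(liters)
    profile = {}
    for hour, draws in by_hour.items():
        mean = sum(draws) / len(draws)
        variance = sum((d - mean) ** 2 for d in draws) / len(draws)
        margin = (variance ** 0.5) / (len(draws) ** 0.5)  # std. error of mean
        profile[hour] = (mean, margin)
    return profile
```

A control could then map each hour's expected draw to one of the water demand levels, energizing heating elements ahead of the morning peak, for example, while letting the tank idle during hours with a near-zero profile.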
6,620 | 6,620 | 15,399,173 | 2,194 | The present disclosure relates to managing variability in an application programming interface (API). According to one embodiment, a method generally includes receiving, from a user, a definition of a variability schema and context information associated with the variability schema. The variability schema generally represents a variation of one or more properties defined in an application programming interface (API). A computing system links the variation and context information to the one or more properties defined in the API. The computing system receives a query to perform one or more actions using the one or more properties, matches context information associated with the query to the context information associated with the variability schema, and processes the query using the variation of the one or more properties. | 1. A method for managing variability in an application programming interface (API), comprising:
receiving, from a user, a definition of a variability schema and context information associated with the variability schema, the variability schema representing a variation of one or more properties defined in an application programming interface (API); linking the variation and context information to the one or more properties defined in the API; receiving a query to perform one or more actions using the one or more properties; matching context information associated with the query to the context information associated with the variability schema; and processing the query using the variation of the one or more properties. 2. The method of claim 1, wherein the one or more properties comprises a data object model defined in the API. 3. The method of claim 2, wherein the variation comprises one or more additional data elements not present in the data object model defined in the API. 4. The method of claim 1, wherein the one or more properties comprises a function defined in the API. 5. The method of claim 4, wherein the variation comprises one or more data processing rules not present in the function defined in the API. 6. The method of claim 1, wherein the context information comprises a geographical location associated with a user that generated the query. 7. The method of claim 1, wherein the context information comprises user membership in an access group and wherein membership in the access group identifies whether the user can invoke one or more functions in the API for modifying data stored in a user data store. 8. The method of claim 1, wherein the definition of the variability schema is included in a markup language file. 9. A system, comprising:
a processor; and memory storing instructions which, when executed on one or more processors, performs an operation for managing variability in an application programming interface (API), the operation comprising:
receiving, from a user, a definition of a variability schema and context information associated with the variability schema, the variability schema representing a variation of one or more properties defined in an application programming interface (API);
linking the variation and context information to the one or more properties defined in the API;
receiving a query to perform one or more actions using the one or more properties;
matching context information associated with the query to the context information associated with the variability schema; and
processing the query using the variation of the one or more properties. 10. The system of claim 9, wherein the one or more properties comprises a data object model defined in the API. 11. The system of claim 10, wherein the variation comprises one or more additional data elements not present in the data object model defined in the API. 12. The system of claim 9, wherein the one or more properties comprises a function defined in the API. 13. The system of claim 12, wherein the variation comprises one or more data processing rules not present in the function defined in the API. 14. The system of claim 9, wherein the context information comprises one or more of:
a geographical location associated with a user that generated the query; and user membership in an access group, wherein membership in the access group identifies whether the user can invoke one or more functions in the API for modifying data stored in a user data store. 15. A computer-readable medium comprising instructions which, when executed on one or more processors, perform an operation for managing variability in an application programming interface (API), the operation comprising:
receiving, from a user, a definition of a variability schema and context information associated with the variability schema, the variability schema representing a variation of one or more properties defined in an application programming interface (API); linking the variation and context information to the one or more properties defined in the API; receiving a query to perform one or more actions using the one or more properties; matching context information associated with the query to the context information associated with the variability schema; and processing the query using the variation of the one or more properties. 16. The computer-readable medium of claim 15, wherein the one or more properties comprises a data object model defined in the API. 17. The computer-readable medium of claim 16, wherein the variation comprises one or more additional data elements not present in the data object model defined in the API. 18. The computer-readable medium of claim 15, wherein the one or more properties comprises a function defined in the API. 19. The computer-readable medium of claim 18, wherein the variation comprises one or more data processing rules not present in the function defined in the API. 20. The computer-readable medium of claim 15, wherein the context information comprises one or more of:
a geographical location associated with a user that generated the query; and user membership in an access group, wherein membership in the access group identifies whether the user can invoke one or more functions in the API for modifying data stored in a user data store. | The present disclosure relates to managing variability in an application programming interface (API). According to one embodiment, a method generally includes receiving, from a user, a definition of a variability schema and context information associated with the variability schema. The variability schema generally represents a variation of one or more properties defined in an application programming interface (API). A computing system links the variation and context information to the one or more properties defined in the API. The computing system receives a query to perform one or more actions using the one or more properties, matches context information associated with the query to the context information associated with the variability schema, and processes the query using the variation of the one or more properties. | 2,100 |
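The claim set above traces a concrete flow: register a variability schema that links a variation and context information to an API property, then match an incoming query's context against the registered schemas and apply the matching variation. A minimal illustrative sketch of that flow, assuming a hypothetical in-memory `Api` class with invented property, field, and context names (`customer`, `vat_number`, `region`), not the patented implementation:

```python
# Sketch of the claimed flow: a variability schema ties a variation of an
# API property to context information; at query time the query's context is
# matched against registered schemas and a matching variation is applied.
# All names here are hypothetical.

class VariabilitySchema:
    def __init__(self, prop, variation, context):
        self.prop = prop            # property defined in the API (e.g. a data object model)
        self.variation = variation  # e.g. additional data elements not in the base model
        self.context = context      # e.g. {"region": "EU"}

class Api:
    def __init__(self):
        self.base_properties = {"customer": ["id", "name"]}
        self.schemas = []

    def register(self, schema):
        # link the variation and context information to the API property
        self.schemas.append(schema)

    def process_query(self, prop, query_context):
        fields = list(self.base_properties[prop])
        for s in self.schemas:
            # match the query's context against the schema's context
            if s.prop == prop and all(query_context.get(k) == v
                                      for k, v in s.context.items()):
                fields.extend(s.variation)  # process using the variation
        return fields

api = Api()
api.register(VariabilitySchema("customer", ["vat_number"], {"region": "EU"}))
print(api.process_query("customer", {"region": "EU"}))  # ['id', 'name', 'vat_number']
print(api.process_query("customer", {"region": "US"}))  # ['id', 'name']
```

In a real system the schema definition would arrive as a markup language file (claim 8) and the context could also carry access-group membership, but the match-then-vary shape stays the same.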
6,621 | 6,621 | 15,311,449 | 2,179 | In one implementation, a system for user interface components load time visualization includes a load engine to monitor a load time of a number of elements of a user interface, a color engine to assign a color to each of the number of elements of the user interface based on the load time, and a compile engine to display a component color map of the user interface utilizing the color assigned to each of the number of elements. | 1. A system for user interface components load time visualization, comprising:
a load engine to monitor a load time of a number of elements of a user interface; a color engine to assign a color to each of the number of elements of the user interface based on the load time; and a compile engine to display a component color map of the user interface utilizing the color assigned to each of the number of elements. 2. The system of claim 1, wherein the number of elements of the user interface include displayed elements of a particular window of the user interface. 3. The system of claim 1, wherein each of the number of elements is assigned a load time. 4. The system of claim 1, wherein the load time is a quantity of time a particular element takes to be displayed on the user interface. 5. The system of claim 1, wherein the compile engine takes a snap shot of a particular window of the user interface that includes the number of elements. 6. The system of claim 1, wherein the component color map is placed on a visual representation of the particular window. 7. The system of claim 1, wherein each of the number of elements is displayed in the color assigned by the color engine to generate the component color map. 8. A non-transitory computer readable medium storing instructions executable by a processing resource to cause a controller to:
monitor a load time of each of a number of elements displayed on a window of a user interface; assign a color to each of the number of elements based on the load time; and generate a component color map of the window based on the color assigned to each of the number of elements. 9. The medium of claim 8, comprising instructions to identify a plurality of different windows of the user interface and capture a snap shot of the plurality of different windows of the user interface. 10. The medium of claim 9, wherein the component color map is generated by taking a snap shot of the window and applying the assigned color to each of the number of elements displayed on the snap shot. 11. The medium of claim 9, wherein a first color is assigned to an element from the number of elements with a greatest load time and a second color is assigned to an element from the number of elements with a lowest load time. 12. A method for user interface components load time visualization, comprising:
selecting a window of a user interface; monitoring a load time of each of a number of elements displayed on the window of the user interface; assigning a color to each of the number of elements based on the load time; capturing a snap shot of the window and the number of elements; and generating a component color map on the snap shot of the window based on the color assigned to each of the number of elements. 13. The method of claim 12, wherein selecting the window includes selecting an element from a different window of the user interface that executes instructions to display the window. 14. The method of claim 12, wherein assigning the color includes generating a color scheme based on the monitored load times of each of the number of elements. 15. The method of claim 14, wherein monitoring the load time includes monitoring a quantity of time between selecting the window and a corresponding element from
the number of elements being displayed on the user interface. | 2,100 |
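The claims above describe monitoring a per-element load time, generating a color scheme from the monitored times (claim 14), and assigning each element a color, with the slowest and fastest elements receiving distinct colors (claim 11). A hedged sketch of the color-assignment step: the green-to-red scheme, hex encoding, and element names below are invented, since the claims do not specify a particular scheme:

```python
# Sketch of the claimed color engine: map each element's load time to a
# color, interpolating green -> red by its share of the slowest load time,
# and collect the result as a "component color map" for one window.

def assign_color(load_ms, slowest_ms):
    # share of the slowest time decides how red the element is drawn
    share = load_ms / slowest_ms if slowest_ms else 0.0
    red = int(255 * share)
    return f"#{red:02x}{255 - red:02x}00"

def component_color_map(load_times):
    """load_times: {element_name: load time in ms} for one window."""
    slowest = max(load_times.values())
    return {name: assign_color(ms, slowest) for name, ms in load_times.items()}

cmap = component_color_map({"header": 40, "grid": 400, "footer": 10})
print(cmap["grid"])  # slowest element is drawn pure red: #ff0000
```

A compile engine in the claimed system would then overlay these colors onto a snap shot of the window; that rendering step is omitted here.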
6,622 | 6,622 | 16,389,096 | 2,193 | A method and system. Application programming interface (API) call data is analyzed for a user to identify a relationship between API input data and API output data of two or more API calls. API usage information is generated by utilizing a dependency between the two or more API calls. The API usage information includes information pertaining to how data flows between the two or more API calls. API provision is improved with respect to execution of a process, based on utilization of the API usage information. Improving API provision includes: receiving a specification of an improvement to be achieved for the process, selecting at least two APIs from the two or more APIs for achieving the improvement, generating a new API that combines the at least two APIs, and modifying the process by including the new API in the process and removing the at least two APIs from the process. | 1. A method, said method comprising:
analyzing, by one or more processors of a computer system, application programming interface (API) call data for a user to identify a relationship between API input data and API output data of two or more API calls; generating, by the one or more processors, API usage information, said generating the API usage information utilizing a dependency between the two or more API calls, wherein the API usage information comprises information pertaining to how data flows between the two or more API calls; and improving API provision with respect to execution of a process, said improving API provision being based on utilization of the API usage information, said improving API provision comprising: receiving a specification of an improvement to be achieved for the process, selecting a plurality of APIs from the two or more APIs for achieving the improvement, generating a new API that combines the plurality of APIs, and modifying the process by including the new API in the process and removing the plurality of APIs from the process, wherein the improvement is achieved due to the new API in the modified process. 2. The method of claim 1, said method further comprising:
prior to said generating the API usage information, determining, by the one or more processors, the dependency between the two or more API calls, based on the identified relationship. 3. The method of claim 1, wherein said analyzing comprises determining that input data of an API call is based on output data of a preceding API call and in response, identifying a relationship between the input data of the API call and the output data of the preceding API call. 4. The method of claim 3, wherein said determining that the input data of the API call is based on the output data of the preceding API call comprises: determining that the input data of the API call is selected from the group consisting of an aggregation, a subset, a concatenation, a conversion, a translation of the response data of the preceding API call, and combinations thereof. 5. The method of claim 1, wherein said obtaining API call data for the user comprises:
intercepting API traffic between an API consumer and an API provider, said API consumer being controlled by the user to invoke the two or more API calls. 6. The method of claim 5, wherein said obtaining API call data for the user comprises:
reading API call data from the intercepted API traffic; storing the read API call data in a data store; and forwarding the intercepted API traffic to the API traffic's intended destination. 7. The method of claim 5, wherein said intercepting API traffic is performed at either the API consumer or the API provider. 8. The method of claim 1, said method further comprising:
obtaining, by the one or more processors, API call data for a second user; and analyzing, by the one or more processors, the obtained API call data for the second user to determine a refined dependency between the two or more API calls and an indication of accuracy of the obtained API call data for the second user. 9. A computer program product, comprising one or more computer readable hardware storage devices having computer readable program code stored therein, said program code containing instructions executable by one or more processors of a computer system to implement a method, said method comprising:
analyzing, by one or more processors, application programming interface (API) call data for a user to identify a relationship between API input data and API output data of two or more API calls; generating, by the one or more processors, API usage information, said generating the API usage information utilizing a dependency between the two or more API calls, wherein the API usage information comprises information pertaining to how data flows between the two or more API calls; and improving API provision with respect to execution of a process, said improving API provision being based on utilization of the API usage information, said improving API provision comprising: receiving a specification of an improvement to be achieved for the process, selecting a plurality of APIs from the two or more APIs for achieving the improvement, generating a new API that combines the plurality of APIs, and modifying the process by including the new API in the process and removing the plurality of APIs from the process, wherein the improvement is achieved due to the new API in the modified process. 10. The computer program product of claim 9, said method further comprising:
prior to said generating the API usage information, determining, by the one or more processors, the dependency between the two or more API calls, based on the identified relationship. 11. The computer program product of claim 9, wherein said analyzing comprises determining that input data of an API call is based on output data of a preceding API call and in response, identifying a relationship between the input data of the API call and the output data of the preceding API call. 12. The computer program product of claim 11, wherein said determining that the input data of the API call is based on the output data of the preceding API call comprises: determining that the input data of the API call is selected from the group consisting of an aggregation, a subset, a concatenation, a conversion, a translation of the response data of the preceding API call, and combinations thereof. 13. The computer program product of claim 9, wherein said obtaining API call data for the user comprises:
intercepting API traffic between an API consumer and an API provider, said API consumer being controlled by the user to invoke the two or more API calls. 14. The computer program product of claim 9, said method further comprising:
obtaining, by the one or more processors, API call data for a second user; and analyzing, by the one or more processors, the obtained API call data for the second user to determine a refined dependency between the two or more API calls and an indication of accuracy of the obtained API call data for the second user. 15. A computer system, comprising one or more processors, one or more memories, and one or more computer readable hardware storage devices, said one or more hardware storage devices containing program code executable by the one or more processors via the one or more memories to implement a method, said method comprising:
analyzing, by one or more processors, application programming interface (API) call data for a user to identify a relationship between API input data and API output data of two or more API calls; generating, by the one or more processors, API usage information, said generating the API usage information utilizing a dependency between the two or more API calls, wherein the API usage information comprises information pertaining to how data flows between the two or more API calls; and improving API provision with respect to execution of a process, said improving API provision being based on utilization of the API usage information, said improving API provision comprising: receiving a specification of an improvement to be achieved for the process, selecting a plurality of APIs from the two or more APIs for achieving the improvement, generating a new API that combines the plurality of APIs, and modifying the process by including the new API in the process and removing the plurality of APIs from the process, wherein the improvement is achieved due to the new API in the modified process. 16. The computer system of claim 15, said method further comprising:
prior to said generating the API usage information, determining, by the one or more processors, the dependency between the two or more API calls, based on the identified relationship. 17. The computer system of claim 15, wherein said analyzing comprises determining that input data of an API call is based on output data of a preceding API call and in response, identifying a relationship between the input data of the API call and the output data of the preceding API call. 18. The computer system of claim 17, wherein said determining that the input data of the API call is based on the output data of the preceding API call comprises: determining that the input data of the API call is selected from the group consisting of an aggregation, a subset, a concatenation, a conversion, a translation of the response data of the preceding API call, and combinations thereof. 19. The computer system of claim 15, wherein said obtaining API call data for the user comprises:
intercepting API traffic between an API consumer and an API provider, said API consumer being controlled by the user to invoke the two or more API calls. 20. The computer system of claim 15, said method further comprising:
obtaining, by the one or more processors, API call data for a second user; and analyzing, by the one or more processors, the obtained API call data for the second user to determine a refined dependency between the two or more API calls indication of accuracy of the obtained API call data for the second user. | A method and system. Application programming interface (API) call data is analyzed for a user to identify a relationship between API input data and API output data of two or more API calls. API usage information is generated by utilizing a dependency between the two or more API calls. The API usage information includes information pertaining to how data flows between the two or more API calls. API provision is improved with respect to execution of a process, based on utilization of the API usage information. Improving API provision includes: receiving a specification of an improvement to be achieved for the process, selecting at least two APIs from the two or more APIs for achieving the improvement, generating a new API that combines the at least two APIs, and modifying the process by including the new API in the process and removing the at least two APIs from the process.1. A method, said method comprising:
analyzing, by one or more processors of a computer system, application programming interface (API) call data for a user to identify a relationship between API input data and API output data of two or more API calls; generating, by the one or more processors, API usage information, said generating the API usage information utilizing a dependency between the two or more API calls, wherein the API usage information comprises information pertaining to how data flows between the two or more API calls; and improving API provision with respect to execution of a process, said improving API provision being based on utilization of the API usage information, said improving API provision comprising: receiving a specification of an improvement to be achieved for the process, selecting a plurality of APIs from the two or more APIs for achieving the improvement, generating a new API that combines the plurality of APIs, and modifying the process by including the new API in the process and removing the plurality of APIs from the process, wherein the improvement is achieved due to the new API in the modified process. 2. The method of claim 1, said method further comprising:
prior to said generating the API usage information, determining, by the one or more processors, the dependency between the two or more API calls, based on the identified relationship. 3. The method of claim 1, wherein said analyzing comprises determining that input data of an API call is based on output data of a preceding API call and in response, identifying a relationship between the input data of the API call and the output data of the preceding API call. 4. The method of claim 3, wherein said determining that the input data of the API call is based on the output data of the preceding API call comprises: determining that the input data of the API call is selected from the group consisting of an aggregation, a subset, a concatenation, a conversion, a translation of the response data of the preceding API call, and combinations thereof. 5. The method of claim 1, wherein said obtaining API call data for the user comprises:
intercepting API traffic between an API consumer and an API provider, said API consumer being controlled by the user to invoke the two or more API calls. 6. The method of claim 5, wherein said obtaining API call data for the user comprises:
reading API call data from the intercepted API traffic; storing the read API call data in a data store; and forwarding the intercepted API traffic to the API traffic's intended destination. 7. The method of claim 5, wherein said intercepting API traffic is performed at either the API consumer or the API provider. 8. The method of claim 1, said method further comprising:
obtaining, by the one or more processors, API call data for a second user; and analyzing, by the one or more processors, the obtained API call data for the second user to determine a refined dependency between the two or more API calls indication of accuracy of the obtained API call data for the second user. 9. A computer program product, comprising one or more computer readable hardware storage devices having computer readable program code stored therein, said program code containing instructions executable by one or more processors of a computer system to implement a method, said method comprising:
analyzing, by one or more processors, application programming interface (API) call data for a user to identify a relationship between API input data and API output data of two or more API calls; generating, by the one or more processors, API usage information, said generating the API usage information utilizing a dependency between the two or more API calls, wherein the API usage information comprises information pertaining to how data flows between the two or more API calls; and improving API provision with respect to execution of a process, said improving API provision being based on utilization of the API usage information, said improving API provision comprising: receiving a specification of an improvement to be achieved for the process, selecting a plurality of APIs from the two or more APIs for achieving the improvement, generating a new API that combines the plurality of APIs, and modifying the process by including the new API in the process and removing the plurality of APIs from the process, wherein the improvement is achieved due to the new API in the modified process. 10. The computer program product of claim 9, said method further comprising:
prior to said generating the API usage information, determining, by the one or more processors, the dependency between the two or more API calls, based on the identified relationship. 11. The computer program product of claim 9, wherein said analyzing comprises determining that input data of an API call is based on output data of a preceding API call and in response, identifying a relationship between the input data of the API call and the output data of the preceding API call. 12. The computer program product of claim 11, wherein said determining that the input data of the API call is based on the output data of the preceding API call comprises: determining that the input data of the API call is selected from the group consisting of an aggregation, a subset, a concatenation, a conversion, a translation of the response data of the preceding API call, and combinations thereof. 13. The computer program product of claim 9, wherein said obtaining API call data for the user comprises:
intercepting API traffic between an API consumer and an API provider, said API consumer being controlled by the user to invoke the two or more API calls. 14. The computer program product of claim 9, said method further comprising:
obtaining, by the one or more processors, API call data for a second user; and analyzing, by the one or more processors, the obtained API call data for the second user to determine a refined dependency between the two or more API calls and an indication of accuracy of the obtained API call data for the second user. 15. A computer system, comprising one or more processors, one or more memories, and one or more computer readable hardware storage devices, said one or more hardware storage devices containing program code executable by the one or more processors via the one or more memories to implement a method, said method comprising:
analyzing, by one or more processors, application programming interface (API) call data for a user to identify a relationship between API input data and API output data of two or more API calls; generating, by the one or more processors, API usage information, said generating the API usage information utilizing a dependency between the two or more API calls, wherein the API usage information comprises information pertaining to how data flows between the two or more API calls; and improving API provision with respect to execution of a process, said improving API provision being based on utilization of the API usage information, said improving API provision comprising: receiving a specification of an improvement to be achieved for the process, selecting a plurality of APIs from the two or more APIs for achieving the improvement, generating a new API that combines the plurality of APIs, and modifying the process by including the new API in the process and removing the plurality of APIs from the process, wherein the improvement is achieved due to the new API in the modified process. 16. The computer system of claim 15, said method further comprising:
prior to said generating the API usage information, determining, by the one or more processors, the dependency between the two or more API calls, based on the identified relationship. 17. The computer system of claim 15, wherein said analyzing comprises determining that input data of an API call is based on output data of a preceding API call and in response, identifying a relationship between the input data of the API call and the output data of the preceding API call. 18. The computer system of claim 17, wherein said determining that the input data of the API call is based on the output data of the preceding API call comprises: determining that the input data of the API call is selected from the group consisting of an aggregation, a subset, a concatenation, a conversion, a translation of the response data of the preceding API call, and combinations thereof. 19. The computer system of claim 15, wherein said obtaining API call data for the user comprises:
intercepting API traffic between an API consumer and an API provider, said API consumer being controlled by the user to invoke the two or more API calls. 20. The computer system of claim 15, said method further comprising:
obtaining, by the one or more processors, API call data for a second user; and analyzing, by the one or more processors, the obtained API call data for the second user to determine a refined dependency between the two or more API calls and an indication of accuracy of the obtained API call data for the second user. | 2,100
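The claims above describe analyzing API call data to identify a relationship where one call's input data is derived from a preceding call's output (e.g., a subset or aggregation of it). A minimal sketch of that dependency analysis, assuming hypothetical `name`/`inputs`/`outputs` fields that are not drawn from the patent:

```python
def find_dependencies(calls):
    """Identify pairs of API calls where a later call's input data is
    contained in an earlier call's output data (one illustrative reading
    of the claimed relationship analysis; field names are assumptions)."""
    deps = []
    for i, later in enumerate(calls):
        for earlier in calls[:i]:
            out = set(earlier["outputs"])
            inp = set(later["inputs"])
            if inp and inp <= out:  # input is a subset of the prior output
                deps.append((earlier["name"], later["name"]))
                break
    return deps

calls = [
    {"name": "list_users",  "inputs": [],     "outputs": ["u1", "u2"]},
    {"name": "get_profile", "inputs": ["u1"], "outputs": ["profile1"]},
]
print(find_dependencies(calls))  # [('list_users', 'get_profile')]
```

The resulting pairs stand in for the claimed "dependency between the two or more API calls" that the usage information is built from; a real implementation would also cover aggregations, concatenations, and conversions rather than exact subsets.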
6,623 | 6,623 | 15,477,831 | 2,154 | An approach is provided for managing files in an online account. A file access platform causes, at least in part, retrieval of a file associated with a first communication stored in an online account. Next, the file access platform determines whether the file is modified after the retrieval and generates a second communication including a modified version of the file based, at least in part, on the determination. Then, the file access platform causes, at least in part, transmission of the second communication including the modified version to the online account. | 1. A method comprising:
retrieving by at least one authorized device a file associated with at least one communication, a metadata associated with the file, or a combination thereof stored in an online account; accessing and modifying the retrieved file by at least one application in the at least one authorized device; and synchronizing the modified retrieved file in the at least one authorized device with the corresponding file in the online account based, at least in part, on the metadata. 2. A method of claim 1, wherein the synchronizing further comprises:
saving the modified retrieved file as a new version of the file in the online account. 3. A method of claim 2, further comprising:
presenting the new version of the file in a user interface of the at least one other device, wherein older versions of the file are available in a hidden window of the user interface. 4. A method of claim 1, wherein the synchronizing of the modified retrieved file occurs automatically, periodically, on user requests, upon closing the modified retrieved file, upon saving the modified retrieved file, or a combination thereof. 5. A method of claim 4, wherein the synchronizing on the user requests further comprises:
presenting an option for the synchronizing of the modified retrieved file in a user interface of the at least one authorized device; determining a selection of the option for the synchronizing of the modified retrieved file; and synchronizing the modified retrieved file based, at least in part, on the selection. 6. A method of claim 1, wherein the at least one application in the at least one authorized device comprises a synchronization feature, and wherein a cloud-to-local application in the at least one authorized device is used for the accessing of the file and the synchronizing of the modified retrieved file. 7. A method of claim 1, further comprising:
tracking the one or more modifications to the retrieved file, a predetermined number of previous modifications to the retrieved file, or a combination thereof; and saving the modified retrieved file, the previously modified retrieved file, the corresponding file, or a combination thereof in a single communication thread. 8. A method of claim 1, further comprising:
providing access to the retrieved file based, at least in part, on pre-set conditions, wherein the pre-set conditions classify the retrieved file as an exclusive file or a non-exclusive file, and wherein the exclusive file is accessible by an authorized user of an account, and the non-exclusive file is accessible by one or more authorized users. 9. A method of claim 8, further comprising:
providing access to the non-exclusive file to at least one other user by the one or more authorized users, a proprietor of the file, or a combination thereof. 10. A method of claim 1, wherein the metadata includes online account information, authentication credentials, authentication tokens, or a combination thereof. 11. An apparatus comprising:
at least one processor; and at least one memory including computer program code for one or more programs, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following,
retrieve by at least one authorized device a file associated with at least one communication, a metadata associated with the file, or a combination thereof stored in an online account;
access and modify the retrieved file by at least one application in the at least one authorized device; and
synchronize the modified retrieved file in the at least one authorized device with the corresponding file in the online account based, at least in part, on the metadata. 12. An apparatus of claim 11, wherein, for the synchronizing, the apparatus is further caused to:
save the modified retrieved file as a new version of the file in the online account. 13. An apparatus of claim 12, wherein the apparatus is further caused to:
present the new version of the file in a user interface of the at least one other device, wherein older versions of the file are available in a hidden window of the user interface. 14. An apparatus of claim 11, wherein the synchronizing of the modified retrieved file occurs automatically, periodically, on user requests, upon closing the modified retrieved file, upon saving the modified retrieved file, or a combination thereof. 15. An apparatus of claim 14, wherein, for the synchronizing on the user requests, the apparatus is further caused to:
present an option for the synchronizing of the modified retrieved file in a user interface of the at least one authorized device; determine a selection of the option for the synchronizing of the modified retrieved file; and synchronize the modified retrieved file based, at least in part, on the selection. 16. An apparatus of claim 11, wherein the apparatus is further caused to:
track the one or more modifications to the retrieved file, a predetermined number of previous modifications to the retrieved file, or a combination thereof; and save the modified retrieved file, the previously modified retrieved file, the corresponding file, or a combination thereof in a single communication thread. 17. An apparatus of claim 11, wherein the apparatus is further caused to:
provide access to the retrieved file based, at least in part, on pre-set conditions, wherein the pre-set conditions classifies the retrieved file as an exclusive file or a non-exclusive file, and wherein the exclusive file is accessible by an authorized user of an account, and the non-exclusive file is accessible by one or more authorized users. 18. A computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to at least perform the following steps:
retrieving by at least one authorized device a file associated with at least one communication, a metadata associated with the file, or a combination thereof stored in an online account; accessing and modifying the retrieved file by at least one application in the at least one authorized device; and synchronizing the modified retrieved file in the at least one authorized device with the corresponding file in the online account based, at least in part, on the metadata. 19. A computer-readable storage medium of claim 18, wherein, for the synchronizing, the apparatus is further caused to perform:
saving the modified retrieved file as a new version of the file in the online account. 20. A computer-readable storage medium of claim 19, wherein the apparatus is further caused to perform:
presenting the new version of the file in a user interface of the at least one other device, wherein older versions of the file are available in a hidden window of the user interface. | An approach is provided for managing files in an online account. A file access platform causes, at least in part, retrieval of a file associated with a first communication stored in an online account. Next, the file access platform determines whether the file is modified after the retrieval and generates a second communication including a modified version of the file based, at least in part, on the determination. Then, the file access platform causes, at least in part, transmission of the second communication including the modified version to the online account.1. A method comprising:
retrieving by at least one authorized device a file associated with at least one communication, a metadata associated with the file, or a combination thereof stored in an online account; accessing and modifying the retrieved file by at least one application in the at least one authorized device; and synchronizing the modified retrieved file in the at least one authorized device with the corresponding file in the online account based, at least in part, on the metadata. 2. A method of claim 1, wherein the synchronizing further comprises:
saving the modified retrieved file as a new version of the file in the online account. 3. A method of claim 2, further comprising:
presenting the new version of the file in a user interface of the at least one other device, wherein older versions of the file are available in a hidden window of the user interface. 4. A method of claim 1, wherein the synchronizing of the modified retrieved file occurs automatically, periodically, on user requests, upon closing the modified retrieved file, upon saving the modified retrieved file, or a combination thereof. 5. A method of claim 4, wherein the synchronizing on the user requests further comprises:
presenting an option for the synchronizing of the modified retrieved file in a user interface of the at least one authorized device; determining a selection of the option for the synchronizing of the modified retrieved file; and synchronizing the modified retrieved file based, at least in part, on the selection. 6. A method of claim 1, wherein the at least one application in the at least one authorized device comprises a synchronization feature, and wherein a cloud-to-local application in the at least one authorized device is used for the accessing of the file and the synchronizing of the modified retrieved file. 7. A method of claim 1, further comprising:
tracking the one or more modifications to the retrieved file, a predetermined number of previous modifications to the retrieved file, or a combination thereof; and saving the modified retrieved file, the previously modified retrieved file, the corresponding file, or a combination thereof in a single communication thread. 8. A method of claim 1, further comprising:
providing access to the retrieved file based, at least in part, on pre-set conditions, wherein the pre-set conditions classify the retrieved file as an exclusive file or a non-exclusive file, and wherein the exclusive file is accessible by an authorized user of an account, and the non-exclusive file is accessible by one or more authorized users. 9. A method of claim 8, further comprising:
providing access to the non-exclusive file to at least one other user by the one or more authorized users, a proprietor of the file, or a combination thereof. 10. A method of claim 1, wherein the metadata includes online account information, authentication credentials, authentication tokens, or a combination thereof. 11. An apparatus comprising:
at least one processor; and at least one memory including computer program code for one or more programs, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following,
retrieve by at least one authorized device a file associated with at least one communication, a metadata associated with the file, or a combination thereof stored in an online account;
access and modify the retrieved file by at least one application in the at least one authorized device; and
synchronize the modified retrieved file in the at least one authorized device with the corresponding file in the online account based, at least in part, on the metadata. 12. An apparatus of claim 11, wherein, for the synchronizing, the apparatus is further caused to:
save the modified retrieved file as a new version of the file in the online account. 13. An apparatus of claim 12, wherein the apparatus is further caused to:
present the new version of the file in a user interface of the at least one other device, wherein older versions of the file are available in a hidden window of the user interface. 14. An apparatus of claim 11, wherein the synchronizing of the modified retrieved file occurs automatically, periodically, on user requests, upon closing the modified retrieved file, upon saving the modified retrieved file, or a combination thereof. 15. An apparatus of claim 14, wherein, for the synchronizing on the user requests, the apparatus is further caused to:
present an option for the synchronizing of the modified retrieved file in a user interface of the at least one authorized device; determine a selection of the option for the synchronizing of the modified retrieved file; and synchronize the modified retrieved file based, at least in part, on the selection. 16. An apparatus of claim 11, wherein the apparatus is further caused to:
track the one or more modifications to the retrieved file, a predetermined number of previous modifications to the retrieved file, or a combination thereof; and save the modified retrieved file, the previously modified retrieved file, the corresponding file, or a combination thereof in a single communication thread. 17. An apparatus of claim 11, wherein the apparatus is further caused to:
provide access to the retrieved file based, at least in part, on pre-set conditions, wherein the pre-set conditions classifies the retrieved file as an exclusive file or a non-exclusive file, and wherein the exclusive file is accessible by an authorized user of an account, and the non-exclusive file is accessible by one or more authorized users. 18. A computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to at least perform the following steps:
retrieving by at least one authorized device a file associated with at least one communication, a metadata associated with the file, or a combination thereof stored in an online account; accessing and modifying the retrieved file by at least one application in the at least one authorized device; and synchronizing the modified retrieved file in the at least one authorized device with the corresponding file in the online account based, at least in part, on the metadata. 19. A computer-readable storage medium of claim 18, wherein, for the synchronizing, the apparatus is further caused to perform:
saving the modified retrieved file as a new version of the file in the online account. 20. A computer-readable storage medium of claim 19, wherein the apparatus is further caused to perform:
presenting the new version of the file in a user interface of the at least one other device, wherein older versions of the file are available in a hidden window of the user interface. | 2,100 |
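The retrieve/modify/synchronize cycle claimed above, where a modified file is synchronized back and saved as a new version with older versions retained, can be sketched as follows. All class and method names here are illustrative, not taken from the patent:

```python
class OnlineAccount:
    """Toy model of the claimed cycle: retrieve a file, modify it on an
    authorized device, then synchronize the modified file back as a new
    version while keeping older versions available."""

    def __init__(self):
        self.versions = {}  # filename -> list of contents, oldest first

    def store(self, name, data):
        self.versions.setdefault(name, []).append(data)

    def retrieve(self, name):
        return self.versions[name][-1]  # latest version

    def synchronize(self, name, modified):
        if modified != self.retrieve(name):  # only sync real changes
            self.store(name, modified)       # saved as a new version

acct = OnlineAccount()
acct.store("notes.txt", "draft 1")
local = acct.retrieve("notes.txt")          # retrieved by the device
acct.synchronize("notes.txt", local + " + edits")
print(acct.versions["notes.txt"])  # ['draft 1', 'draft 1 + edits']
```

A real system would key the synchronization on the claimed metadata (account information, credentials, tokens) rather than on content equality alone.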
6,624 | 6,624 | 16,409,810 | 2,199 | The present invention relates to a method and system for installing software onto a client in the NIM environment and corresponding client. Said method includes: initializing said client, wherein a virtual mapping device associated with a memory driver of the client is created, the virtual mapping device for scheduling between the client's memory driver and the remote NIM server with respect to the I/O operation for running the software so as to direct the I/O operation for running said software to the client's memory driver or the remote NIM server; running said software on the client; acquiring the resources desired for running software; and conducting data migration operation from the NIM server to the client while running said software, wherein the migrated data is the resource data obtained from NIM server and desired for installing said software; and the software installation being completed when all the data desired for installing said software are migrated to the memory driver of the client. It is unnecessary for the present invention to copy all the installation images to the local client before installing software, therefore time delay of installing OSs or application programs can be shortened or even eliminated. | 1-14. (canceled) 15. A computer-implemented method, comprising:
receiving, by a client hardware device, software code for executing software; initializing installation of the software on the client hardware device; executing, by the client hardware device and responsive to the initializing, the software code; migrating, from a Network Installation Management (NIM) server and during the executing the software code, resource data separate from the software code to the client hardware device; completing, during the executing the software code and using the migrated resource data, the installation. 16. The method of claim 15, wherein
an I/O operation for running the software is redirected to a virtual mapping device. 17. The method of claim 16, wherein
the virtual mapping device is configured to:
identify a resource requested by the I/O operation,
determine whether the resource is available within the client hardware device or within the NIM server,
obtain, responsive to a determination that the resource is available within the NIM server, the resource from the NIM server, and
obtain, responsive to a determination that the resource is available within the client hardware device, the resource from the client hardware device. 18. The method of claim 16, wherein
the migrated resource data is requested by the virtual mapping device in response to determinations that
the intercepted I/O operation requested the resource data, and
the resource data is not found within the client hardware device. 19. The method of claim 16, wherein
the virtual mapping device is configured to identify data for the installation of the software as local or non-local. 20. The method of claim 15, wherein
the resource data is used to install the software. 21. A client hardware device, comprising:
a memory driver; and a hardware processor configured to execute the following operations:
receiving, by the client hardware device, software code for executing the software;
initializing installation of the software on the client hardware device;
executing, by the client hardware device and responsive to the initializing, the software code;
migrating, from a Network Installation Management (NIM) server and during the executing the software code, resource data separate from the software code to the client hardware device;
completing, during the executing the software code and using the migrated resource data, the installation. 22. The client hardware device of claim 21, wherein
an I/O operation for running the software is redirected to a virtual mapping device. 23. The client hardware device of claim 22, wherein
the virtual mapping device is configured to:
identify a resource requested by the I/O operation,
determine whether the resource is available within the client hardware device or within the NIM server,
obtain, responsive to a determination that the resource is available within the NIM server, the resource from the NIM server, and
obtain, responsive to a determination that the resource is available within the client hardware device, the resource from the client hardware device. 24. The client hardware device of claim 22, wherein
the migrated resource data is requested by the virtual mapping device in response to determinations that
the intercepted I/O operation requested the resource data, and
the resource data is not found within the client hardware device. 25. The client hardware device of claim 22, wherein
the virtual mapping device is configured to identify data for the installation of the software as local or non-local. 26. The client hardware device of claim 21, wherein
the resource data is used to install the software. 27. A computer program product, comprising:
a hardware storage device having stored therein computer usable program code, the computer usable program code, which when executed by a client hardware device, causes the client hardware device to perform:
receiving, by the client hardware device, software code for executing software;
initializing installation of the software on the client hardware device;
executing, by the client hardware device and responsive to the initializing, the software code;
migrating, from a Network Installation Management (NIM) server and during the executing the software code, resource data separate from the software code to the client hardware device;
completing, during the executing the software code and using the migrated resource data, the installation. 28. The computer program product of claim 27, wherein
an I/O operation for running the software is redirected to a virtual mapping device. 29. The computer program product of claim 28, wherein
the virtual mapping device is configured to:
identify a resource requested by the I/O operation,
determine whether the resource is available within the client hardware device or within the NIM server,
obtain, responsive to a determination that the resource is available within the NIM server, the resource from the NIM server, and
obtain, responsive to a determination that the resource is available within the client hardware device, the resource from the client hardware device. 30. The computer program product of claim 28, wherein
the migrated resource data is requested by the virtual mapping device in response to determinations that
the intercepted I/O operation requested the resource data, and
the resource data is not found within the client hardware device. 31. The computer program product of claim 28, wherein
the virtual mapping device is configured to identify data for the installation of the software as local or non-local. 32. The computer program product of claim 27, wherein
the resource data is used to install the software. | The present invention relates to a method and system for installing software onto a client in the NIM environment and corresponding client. Said method includes: initializing said client, wherein a virtual mapping device associated with a memory driver of the client is created, the virtual mapping device for scheduling between the client's memory driver and the remote NIM server with respect to the I/O operation for running the software so as to direct the I/O operation for running said software to the client's memory driver or the remote NIM server; running said software on the client; acquiring the resources desired for running software; and conducting data migration operation from the NIM server to the client while running said software, wherein the migrated data is the resource data obtained from NIM server and desired for installing said software; and the software installation being completed when all the data desired for installing said software are migrated to the memory driver of the client. It is unnecessary for the present invention to copy all the installation images to the local client before installing software, therefore time delay of installing OSs or application programs can be shortened or even eliminated.1-14. (canceled) 15. A computer-implemented method, comprising:
receiving, by a client hardware device, software code for executing software; initializing installation of the software on the client hardware device; executing, by the client hardware device and responsive to the initializing, the software code; migrating, from a Network Installation Management (NIM) server and during the executing the software code, resource data separate from the software code to the client hardware device; completing, during the executing the software code and using the migrated resource data, the installation. 16. The method of claim 15, wherein
an I/O operation for running the software is redirected to a virtual mapping device. 17. The method of claim 16, wherein
the virtual mapping device is configured to:
identify a resource requested by the I/O operation,
determine whether the resource is available within the client hardware device or within the NIM server,
obtain, responsive to a determination that the resource is available within the NIM server, the resource from the NIM server, and
obtain, responsive to a determination that the resource is available within the client hardware device, the resource from the client hardware device. 18. The method of claim 16, wherein
the migrated resource data is requested by the virtual mapping device in response to determinations that
the intercepted I/O operation requested the resource data, and
the resource data is not found within the client hardware device. 19. The method of claim 16, wherein
the virtual mapping device is configured to identify data for the installation of the software as local or non-local. 20. The method of claim 15, wherein
the resource data is used to install the software. 21. A client hardware device, comprising:
a memory driver; and a hardware processor configured to execute the following operations:
receiving, by the client hardware device, software code for executing the software;
initializing installation of the software on the client hardware device;
executing, by the client hardware device and responsive to the initializing, the software code;
migrating, from a Network Installation Management (NIM) server and during the executing the software code, resource data separate from the software code to the client hardware device;
completing, during the executing the software code and using the migrated resource data, the installation. 22. The client hardware device of claim 21, wherein
an I/O operation for running the software is redirected to a virtual mapping device. 23. The client hardware device of claim 22, wherein
the virtual mapping device is configured to:
identify a resource requested by the I/O operation,
determine whether the resource is available within the client hardware device or within the NIM server,
obtain, responsive to a determination that the resource is available within the NIM server, the resource from the NIM server, and
obtain, responsive to a determination that the resource is available within the client hardware device, the resource from the client hardware device. 24. The client hardware device of claim 22, wherein
the migrated resource data is requested by the virtual mapping device in response to determinations that
the intercepted I/O operation requested the resource data, and
the resource data is not found within the client hardware device. 25. The client hardware device of claim 22, wherein
the virtual mapping device is configured to identify data for the installation of the software as local or non-local. 26. The client hardware device of claim 21, wherein
the resource data is used to install the software. 27. A computer program product, comprising:
a hardware storage device having stored therein computer usable program code, the computer usable program code, which when executed by a client hardware device, causes the client hardware device to perform:
receiving, by the client hardware device, software code for executing software;
initializing installation of the software on the client hardware device;
executing, by the client hardware device and responsive to the initializing, the software code;
migrating, from a Network Installation Management (NIM) server and during the executing the software code, resource data separate from the software code to the client hardware device;
completing, during the executing the software code and using the migrated resource data, the installation. 28. The computer program product of claim 27, wherein
an I/O operation for running the software is redirected to a virtual mapping device. 29. The computer program product of claim 28, wherein
the virtual mapping device is configured to:
identify a resource requested by the I/O operation,
determine whether the resource is available within the client hardware device or within the NIM server,
obtain, responsive to a determination that the resource is available within the NIM server, the resource from the NIM server, and
obtain, responsive to a determination that the resource is available within the client hardware device, the resource from the client hardware device. 30. The computer program product of claim 28, wherein
the migrated resource data is requested by the virtual mapping device in response to determinations that
the intercepted I/O operation requested the resource data, and
the resource data is not found within the client hardware device. 31. The computer program product of claim 28, wherein
the virtual mapping device is configured to identify data for the installation of the software as local or non-local. 32. The computer program product of claim 27, wherein
the resource data is used to install the software. | 2,100 |
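The virtual-mapping-device behavior recited in claims 28 through 31 above (identify the resource requested by an intercepted I/O operation, serve it locally if present, otherwise migrate it from the NIM server) can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the class name, method names, and dictionary-backed stores are assumptions made for the example.

```python
# Hypothetical sketch of the virtual mapping device in claims 28-31:
# an intercepted I/O request is served from the client when the resource
# is available locally, and otherwise obtained from the NIM server and
# cached on the client. All names here are illustrative.

class VirtualMappingDevice:
    def __init__(self, local_store, nim_server):
        self.local_store = local_store   # resources already on the client
        self.nim_server = nim_server     # resources still on the NIM server

    def handle_io(self, resource_name):
        """Identify the requested resource and determine whether it is
        available within the client or within the NIM server."""
        if resource_name in self.local_store:
            # Resource available within the client hardware device.
            return self.local_store[resource_name]
        # Resource not found locally: migrate it from the NIM server
        # and cache it so later requests are served locally.
        data = self.nim_server[resource_name]
        self.local_store[resource_name] = data
        return data
```

A first request for a non-local resource triggers migration; a repeat request is then served from the client-side store.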
6,625 | 6,625 | 16,200,503 | 2,196 | A processing system includes a task queue, a laxity-aware task scheduler coupled to the task queue, and a workgroup dispatcher coupled to the laxity-aware task scheduler. Based on a laxity evaluation of laxity values associated with a plurality of tasks stored in the task queue, the workgroup dispatcher schedules the plurality of tasks. The laxity evaluation includes determining a priority of each task of the plurality of tasks. The laxity value is determined using laxity information, where the laxity information includes an arrival time, a task duration, a task deadline, and a number of workgroups. | 1. A method, comprising:
receiving laxity information associated with each task of a plurality of tasks; determining a laxity value for each task of said plurality of tasks based on said laxity information; performing a laxity evaluation of said laxity values; and scheduling said plurality of tasks based on said laxity evaluation. 2. The method of claim 1, wherein:
said laxity evaluation includes determining a priority of each task of said plurality of tasks. 3. The method of claim 2, wherein:
said laxity information is used to determine an amount of time for completion of each task and includes an arrival time, a task duration, a task deadline, and a number of workgroups. 4. The method of claim 3, wherein:
said priority of each task of said plurality of tasks is determined by comparing said laxity value of each task of said plurality of tasks. 5. The method of claim 4, further comprising:
determining said laxity value by subtracting said task duration from said task deadline. 6. The method of claim 4, wherein scheduling includes:
when a first laxity value associated with a first task of said plurality of tasks is less than a second laxity value associated with a second task of said plurality of tasks, said first task receives scheduling priority over said second task. 7. The method of claim 4, further comprising:
wherein scheduling said plurality of tasks includes providing a first task of said plurality of tasks with a higher priority level to a first compute unit prior to providing a second task of said plurality of tasks with a lower priority level to said first compute unit. 8. The method of claim 4, wherein:
when a first task duration of a first task with higher priority is less than or equal to a laxity value of a second task of lower priority than said first task, said first task is scheduled prior to said second task in a first compute unit. 9. The method of claim 4, further comprising:
assigning said plurality of tasks to at least a first compute unit and a second compute unit based on said priority of each task. 10. A processing system, comprising:
a task queue; a laxity-aware task scheduler coupled to said task queue; and a workgroup dispatcher coupled to said laxity-aware task scheduler, wherein based on a laxity evaluation of laxity values associated with a plurality of tasks stored in said task queue, said workgroup dispatcher schedules said plurality of tasks. 11. The processing system of claim 10, wherein:
said laxity evaluation includes determining a priority of each task of said plurality of tasks. 12. The processing system of claim 11, wherein:
said laxity value is determined using laxity information, said laxity information including an arrival time, a task duration, a task deadline, and a number of workgroups. 13. The processing system of claim 12, wherein:
said priority of each task of said plurality of tasks is determined by comparing the laxity values of each task of said plurality of tasks. 14. The processing system of claim 12, wherein:
said laxity value is determined by subtracting said task duration from said task deadline. 15. The processing system of claim 10, wherein:
when a first laxity value of said laxity values associated with a first task of said plurality of tasks is less than a second laxity value of said laxity values associated with a second task of said plurality of tasks, said first task receives scheduling priority over said second task. 16. The processing system of claim 15, wherein:
said workgroup dispatcher schedules said plurality of tasks by providing a first task of said plurality of tasks with a higher priority level to a first compute unit prior to providing a second task of said plurality of tasks with a lower priority level to said first compute unit. 17. The processing system of claim 16, wherein:
when a first task duration of a first task with higher priority is less than or equal to a laxity value of a second task of lower priority, said first task is scheduled prior to said second task in a first compute unit. 18. A method, comprising:
providing a plurality of jobs to a laxity-aware task scheduler, wherein said plurality of jobs includes a first job and a second job; determining a first laxity value of said first job and a second laxity value of said second job; and assigning a first priority to said first job and a second priority to said second job based on a laxity evaluation of said first laxity value and said second laxity value. 19. The method of claim 18, further comprising:
scheduling said first job and said second job based on said laxity evaluation. 20. The method of claim 18, further comprising:
adjusting said first priority of said first job and said second priority of said second job based on said laxity evaluation. | A processing system includes a task queue, a laxity-aware task scheduler coupled to the task queue, and a workgroup dispatcher coupled to the laxity-aware task scheduler. Based on a laxity evaluation of laxity values associated with a plurality of tasks stored in the task queue, the workgroup dispatcher schedules the plurality of tasks. The laxity evaluation includes determining a priority of each task of the plurality of tasks. The laxity value is determined using laxity information, where the laxity information includes an arrival time, a task duration, a task deadline, and a number of workgroups. 1. A method, comprising:
receiving laxity information associated with each task of a plurality of tasks; determining a laxity value for each task of said plurality of tasks based on said laxity information; performing a laxity evaluation of said laxity values; and scheduling said plurality of tasks based on said laxity evaluation. 2. The method of claim 1, wherein:
said laxity evaluation includes determining a priority of each task of said plurality of tasks. 3. The method of claim 2, wherein:
said laxity information is used to determine an amount of time for completion of each task and includes an arrival time, a task duration, a task deadline, and a number of workgroups. 4. The method of claim 3, wherein:
said priority of each task of said plurality of tasks is determined by comparing said laxity value of each task of said plurality of tasks. 5. The method of claim 4, further comprising:
determining said laxity value by subtracting said task duration from said task deadline. 6. The method of claim 4, wherein scheduling includes:
when a first laxity value associated with a first task of said plurality of tasks is less than a second laxity value associated with a second task of said plurality of tasks, said first task receives scheduling priority over said second task. 7. The method of claim 4, further comprising:
wherein scheduling said plurality of tasks includes providing a first task of said plurality of tasks with a higher priority level to a first compute unit prior to providing a second task of said plurality of tasks with a lower priority level to said first compute unit. 8. The method of claim 4, wherein:
when a first task duration of a first task with higher priority is less than or equal to a laxity value of a second task of lower priority than said first task, said first task is scheduled prior to said second task in a first compute unit. 9. The method of claim 4, further comprising:
assigning said plurality of tasks to at least a first compute unit and a second compute unit based on said priority of each task. 10. A processing system, comprising:
a task queue; a laxity-aware task scheduler coupled to said task queue; and a workgroup dispatcher coupled to said laxity-aware task scheduler, wherein based on a laxity evaluation of laxity values associated with a plurality of tasks stored in said task queue, said workgroup dispatcher schedules said plurality of tasks. 11. The processing system of claim 10, wherein:
said laxity evaluation includes determining a priority of each task of said plurality of tasks. 12. The processing system of claim 11, wherein:
said laxity value is determined using laxity information, said laxity information including an arrival time, a task duration, a task deadline, and a number of workgroups. 13. The processing system of claim 12, wherein:
said priority of each task of said plurality of tasks is determined by comparing the laxity values of each task of said plurality of tasks. 14. The processing system of claim 12, wherein:
said laxity value is determined by subtracting said task duration from said task deadline. 15. The processing system of claim 10, wherein:
when a first laxity value of said laxity values associated with a first task of said plurality of tasks is less than a second laxity value of said laxity values associated with a second task of said plurality of tasks, said first task receives scheduling priority over said second task. 16. The processing system of claim 15, wherein:
said workgroup dispatcher schedules said plurality of tasks by providing a first task of said plurality of tasks with a higher priority level to a first compute unit prior to providing a second task of said plurality of tasks with a lower priority level to said first compute unit. 17. The processing system of claim 16, wherein:
when a first task duration of a first task with higher priority is less than or equal to a laxity value of a second task of lower priority, said first task is scheduled prior to said second task in a first compute unit. 18. A method, comprising:
providing a plurality of jobs to a laxity-aware task scheduler, wherein said plurality of jobs includes a first job and a second job; determining a first laxity value of said first job and a second laxity value of said second job; and assigning a first priority to said first job and a second priority to said second job based on a laxity evaluation of said first laxity value and said second laxity value. 19. The method of claim 18, further comprising:
scheduling said first job and said second job based on said laxity evaluation. 20. The method of claim 18, further comprising:
adjusting said first priority of said first job and said second priority of said second job based on said laxity evaluation. | 2,100 |
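The laxity evaluation recited in claims 1 through 6 above (laxity = task deadline minus task duration; a task with a smaller laxity value receives scheduling priority) can be sketched as a simple priority queue. This is a minimal illustration under assumed field names, not the patented scheduler.

```python
import heapq

def schedule_by_laxity(tasks):
    """Order tasks by laxity value (deadline minus duration), lowest
    first, mirroring claims 5-6: a task with less slack is dispatched
    earlier. Each task is a dict with 'name', 'duration', and 'deadline'
    keys; these field names are illustrative, not from the patent."""
    # Build (laxity, tiebreak, task) entries; the index tiebreak keeps
    # heap comparisons away from the task dicts themselves.
    heap = [(t["deadline"] - t["duration"], i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, task = heapq.heappop(heap)
        order.append(task["name"])
    return order
```

For example, a task with duration 5 and deadline 6 (laxity 1) would be dispatched before a task with duration 2 and deadline 10 (laxity 8), even though the second task arrived first.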
6,626 | 6,626 | 15,872,663 | 2,196 | A computation graph is accessed. In the computation graph, operations to be performed are represented as interior nodes, inputs to the operations are represented as leaf nodes, and a result of the operations is represented as a root. Selected sets of the operations are combined to form respective kernels of operations. Code is generated to execute the kernels of operations. The code is executed to determine the result. | 1. A computer-implemented method, comprising:
accessing an input comprising a computation graph, wherein the computation graph comprises a plurality of nodes representing operations to be performed, inputs to the operations, and results of the operations; combining selected nodes of the computation graph to form respective kernels of operations; encoding the kernels of operations as an executable function comprising code to execute the kernels of operations; and executing the code to determine the result. 2. The method of claim 1, wherein each node of the plurality of nodes is represented as a respective data structure of a plurality of data structures, wherein each respective data structure comprises a first field that identifies a type of a node of the plurality of nodes, a second field that lists inputs to the node represented by the data structure, and a third field that comprises: a result of an operation if the node represents an operation to be performed and the result has been computed, a null value if the node represents an operation to be performed and a result of the operation has not yet been computed, and an input value if the node represents an input to an operation;
wherein the method further comprises traversing nodes of the computation graph to identify the selected nodes, wherein said traversing comprises identifying data structures that have other than the null value in their third field. 3. The method of claim 1, wherein the function is operable for execution on different processor architectures, wherein the processor architectures comprise graphics processing unit architectures and multi-core central processing unit architectures. 4. The method of claim 1, wherein said executing comprises just-in-time compiling the function. 5. The method of claim 1, wherein said encoding and said executing comprise:
generating an object file comprising the function and that is linked into an application; and calling into the function in the object file to execute the code. 6. The method of claim 1, wherein said executing comprises storing, in a register, a result of an operation of a kernel of operations that is an input to another operation of the kernel of operations. 7. The method of claim 1, wherein the computation graph is a directed acyclic graph. 8. The method of claim 1, further comprising, if the computation graph exceeds a threshold size, then:
executing a first portion of the computation graph; and using a result of said executing the first portion as an input to a second portion of the computation graph. 9. A computer system, comprising:
a processing unit; and memory coupled to the processing unit and storing a computation graph; wherein the memory also stores instructions that when executed by the processing unit perform a method comprising:
accessing an input comprising the computation graph, wherein the computation graph comprises a plurality of nodes representing operations to be performed, inputs to the operations, and results of the operations;
combining selected nodes of the computation graph to form respective kernels of operations;
encoding the kernels of operations as an executable function comprising code to execute the kernels of operations; and
executing the code to determine the result. 10. The computer system of claim 9, wherein the processing unit is selected from the group consisting of: a graphics processing unit; and a multi-core central processing unit. 11. The computer system of claim 9, wherein each node of the plurality of nodes is represented as a respective data structure of a plurality of data structures, wherein each respective data structure comprises a first field that identifies a type of a node of the plurality of nodes, a second field that lists inputs to the node represented by the data structure, and a third field that includes a value for the node, wherein the value comprises: a result of an operation if the node represents an operation to be performed and the result has been computed, a null value if the node represents an operation to be performed and a result of the operation has not yet been computed, and an input value if the node represents an input to an operation; and
wherein the method further comprises traversing nodes of the computation graph to identify the selected nodes, wherein said traversing comprises identifying data structures that have other than the null value in their third field. 12. The computer system of claim 9, wherein the method further comprises just-in-time compiling the function. 13. The computer system of claim 9, wherein the method further comprises:
generating an object file comprising the function and that is linked into an application; and calling into the function in the object file to execute the code. 14. The computer system of claim 9, wherein the method further comprises storing, in a register, a result of an operation of a kernel of operations that is an input to another operation of the kernel of operations. 15. The computer system of claim 9, wherein the method further comprises:
if the computation graph exceeds a threshold size, then:
executing a first portion of the computation graph; and
using a result of said executing the first portion as an input to a second portion of the computation graph. 16. A non-transitory computer-readable medium having computer-executable instructions for performing a method of executing a directed acyclic graph (DAG), the method comprising:
accessing an input comprising the DAG, wherein the DAG comprises a plurality of nodes representing operations to be performed, inputs to the operations, and results of the operations; combining selected nodes of the DAG to form respective kernels of operations; encoding the kernels of operations as an executable function comprising code to execute the kernels of operations; and executing the code to determine the result. 17. The non-transitory computer-readable medium of claim 16, wherein each node of the plurality of nodes is represented as a respective data structure of a plurality of data structures, wherein each respective data structure comprises a first field that identifies a type of a node of the plurality of nodes, a second field that lists inputs to the node represented by the data structure, and a third field that includes a value for the node, wherein the value comprises: a result of an operation if the node represents an operation to be performed and the result has been computed, a null value if the node represents an operation to be performed and a result of the operation has not yet been computed, and an input value if the node represents an input to an operation;
wherein the method further comprises traversing nodes of the DAG to identify the selected nodes, wherein said traversing comprises identifying data structures that have other than the null value in their third field. 18. The non-transitory computer-readable medium of claim 17, wherein the function is operable for execution on different processor architectures, wherein the processor architectures comprise graphics processing unit architectures and multi-core central processing unit architectures. 19. The non-transitory computer-readable medium of claim 17, wherein the method further comprises just-in-time compiling the function. 20. The non-transitory computer-readable medium of claim 17, wherein the method further comprises:
generating an object file comprising the function and that is linked into an application; and calling into the function in the object file to execute the code. | A computation graph is accessed. In the computation graph, operations to be performed are represented as interior nodes, inputs to the operations are represented as leaf nodes, and a result of the operations is represented as a root. Selected sets of the operations are combined to form respective kernels of operations. Code is generated to execute the kernels of operations. The code is executed to determine the result. 1. A computer-implemented method, comprising:
accessing an input comprising a computation graph, wherein the computation graph comprises a plurality of nodes representing operations to be performed, inputs to the operations, and results of the operations; combining selected nodes of the computation graph to form respective kernels of operations; encoding the kernels of operations as an executable function comprising code to execute the kernels of operations; and executing the code to determine the result. 2. The method of claim 1, wherein each node of the plurality of nodes is represented as a respective data structure of a plurality of data structures, wherein each respective data structure comprises a first field that identifies a type of a node of the plurality of nodes, a second field that lists inputs to the node represented by the data structure, and a third field that comprises: a result of an operation if the node represents an operation to be performed and the result has been computed, a null value if the node represents an operation to be performed and a result of the operation has not yet been computed, and an input value if the node represents an input to an operation;
wherein the method further comprises traversing nodes of the computation graph to identify the selected nodes, wherein said traversing comprises identifying data structures that have other than the null value in their third field. 3. The method of claim 1, wherein the function is operable for execution on different processor architectures, wherein the processor architectures comprise graphics processing unit architectures and multi-core central processing unit architectures. 4. The method of claim 1, wherein said executing comprises just-in-time compiling the function. 5. The method of claim 1, wherein said encoding and said executing comprise:
generating an object file comprising the function and that is linked into an application; and calling into the function in the object file to execute the code. 6. The method of claim 1, wherein said executing comprises storing, in a register, a result of an operation of a kernel of operations that is an input to another operation of the kernel of operations. 7. The method of claim 1, wherein the computation graph is a directed acyclic graph. 8. The method of claim 1, further comprising, if the computation graph exceeds a threshold size, then:
executing a first portion of the computation graph; and using a result of said executing the first portion as an input to a second portion of the computation graph. 9. A computer system, comprising:
a processing unit; and memory coupled to the processing unit and storing a computation graph; wherein the memory also stores instructions that when executed by the processing unit perform a method comprising:
accessing an input comprising the computation graph, wherein the computation graph comprises a plurality of nodes representing operations to be performed, inputs to the operations, and results of the operations;
combining selected nodes of the computation graph to form respective kernels of operations;
encoding the kernels of operations as an executable function comprising code to execute the kernels of operations; and
executing the code to determine the result. 10. The computer system of claim 9, wherein the processing unit is selected from the group consisting of: a graphics processing unit; and a multi-core central processing unit. 11. The computer system of claim 9, wherein each node of the plurality of nodes is represented as a respective data structure of a plurality of data structures, wherein each respective data structure comprises a first field that identifies a type of a node of the plurality of nodes, a second field that lists inputs to the node represented by the data structure, and a third field that includes a value for the node, wherein the value comprises: a result of an operation if the node represents an operation to be performed and the result has been computed, a null value if the node represents an operation to be performed and a result of the operation has not yet been computed, and an input value if the node represents an input to an operation; and
wherein the method further comprises traversing nodes of the computation graph to identify the selected nodes, wherein said traversing comprises identifying data structures that have other than the null value in their third field. 12. The computer system of claim 9, wherein the method further comprises just-in-time compiling the function. 13. The computer system of claim 9, wherein the method further comprises:
generating an object file comprising the function and that is linked into an application; and calling into the function in the object file to execute the code. 14. The computer system of claim 9, wherein the method further comprises storing, in a register, a result of an operation of a kernel of operations that is an input to another operation of the kernel of operations. 15. The computer system of claim 9, wherein the method further comprises:
if the computation graph exceeds a threshold size, then:
executing a first portion of the computation graph; and
using a result of said executing the first portion as an input to a second portion of the computation graph. 16. A non-transitory computer-readable medium having computer-executable instructions for performing a method of executing a directed acyclic graph (DAG), the method comprising:
accessing an input comprising the DAG, wherein the DAG comprises a plurality of nodes representing operations to be performed, inputs to the operations, and results of the operations; combining selected nodes of the DAG to form respective kernels of operations; encoding the kernels of operations as an executable function comprising code to execute the kernels of operations; and executing the code to determine the result. 17. The non-transitory computer-readable medium of claim 16, wherein each node of the plurality of nodes is represented as a respective data structure of a plurality of data structures, wherein each respective data structure comprises a first field that identifies a type of a node of the plurality of nodes, a second field that lists inputs to the node represented by the data structure, and a third field that includes a value for the node, wherein the value comprises: a result of an operation if the node represents an operation to be performed and the result has been computed, a null value if the node represents an operation to be performed and a result of the operation has not yet been computed, and an input value if the node represents an input to an operation;
wherein the method further comprises traversing nodes of the DAG to identify the selected nodes, wherein said traversing comprises identifying data structures that have other than the null value in their third field. 18. The non-transitory computer-readable medium of claim 17, wherein the function is operable for execution on different processor architectures, wherein the processor architectures comprise graphics processing unit architectures and multi-core central processing unit architectures. 19. The non-transitory computer-readable medium of claim 17, wherein the method further comprises just-in-time compiling the function. 20. The non-transitory computer-readable medium of claim 17, wherein the method further comprises:
generating an object file comprising the function and that is linked into an application; and calling into the function in the object file to execute the code. | 2,100 |
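The node data structure recited in claims 2, 11, and 17 above (a type field, an inputs field, and a value field that holds an input value, a computed result, or a null value until the result is computed) can be sketched as below. The operation names and the recursive traversal are illustrative assumptions; the patent's traversal identifies nodes by inspecting the third field rather than prescribing recursion.

```python
# Hedged sketch of the three-field node from claims 2/11/17: each node
# records its type, its input nodes, and a value that stays None (the
# "null value") until the operation's result is computed. evaluate()
# walks the graph and fills results in. The op table is an assumption.

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

class Node:
    def __init__(self, kind, inputs=(), value=None):
        self.kind = kind      # first field: node type ('input' or an op)
        self.inputs = inputs  # second field: inputs to this node
        self.value = value    # third field: input value, result, or None

def evaluate(node):
    """Compute the result at a node, reusing any value already present
    (a non-None third field), so shared subgraphs are evaluated once."""
    if node.value is None:
        args = [evaluate(n) for n in node.inputs]
        node.value = OPS[node.kind](*args)
    return node.value
```

Evaluating the root of a small graph such as `(x + y) * y` fills in each interior node's value field as a side effect, which is what lets a later traversal distinguish already-computed nodes from pending ones.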
6,627 | 6,627 | 15,178,771 | 2,159 | A system includes reception of a first fragment of a first result set of a first one of a plurality of queries, storage of the first fragment of the first result set in a first local buffer associated with the first one of the plurality of queries, reception of a first fragment of a second result set of a second one of a plurality of queries, storage of the first fragment of the second result set in a second local buffer associated with the second one of the plurality of queries, determination to flush the first local buffer, and, in response to the determination, transmit all fragments currently stored in the first local buffer to a client from which the plurality of queries was received with an identifier of the first one of the plurality of queries, before receiving all fragments of the first result set. | 1. A system comprising:
a memory storing processor-executable process steps; and a processor to execute the processor-executable process steps to cause the system to: receive a first fragment of a first result set of a first one of a plurality of queries; store the first fragment of the first result set in a first local buffer associated with the first one of the plurality of queries; receive a first fragment of a second result set of a second one of a plurality of queries; store the first fragment of the second result set in a second local buffer associated with the second one of the plurality of queries; determine to flush the first local buffer; and in response to the determination, transmit all fragments currently stored in the first local buffer to a client from which the plurality of queries was received with an identifier of the first one of the plurality of queries, before receiving all fragments of the first result set. 2. A system according to claim 1, the processor to further execute the processor-executable process steps to cause the system to:
determine to flush the second local buffer; and in response to the determination, transmit all fragments currently stored in the second local buffer to the client with an identifier of the second one of the plurality of queries, before receiving all fragments of the second result set. 3. A system according to claim 1, the processor to further execute the processor-executable process steps to cause the system to:
receive a second fragment of the first result set of the first one of the plurality of queries; and store the second fragment of the first result set in the first local buffer. 4. A system according to claim 3, the processor to further execute the processor-executable process steps to cause the system to:
determine that all fragments of the first result set have been received; and transmit all fragments currently stored in the first local buffer to the client with an identifier of the first one of the plurality of queries and an end flag. 5. A system according to claim 1, the processor to further execute the processor-executable process steps to cause the system to:
receive the plurality of queries from the client; and instruct a data engine to execute the plurality of queries at least partially contemporaneously. 6. A system according to claim 5,
wherein reception of the plurality of queries comprises reception of the plurality of queries in a Hypertext Transfer Protocol request payload. 7. A system according to claim 1, wherein determination to flush the first local buffer comprises:
determination of a current storage capacity of the first local buffer. 8. A computer-implemented method comprising:
receiving a plurality of database queries from a client; instructing a data engine to execute the plurality of database queries at least partially contemporaneously; receiving a first fragment of a first result set of a first one of the plurality of database queries from the data engine; storing the first fragment of the first result set in a first memory buffer associated with the first one of the plurality of queries; receiving a first fragment of a second result set of a second one of a plurality of queries from the data engine; storing the first fragment of the second result set in a second memory buffer associated with the second one of the plurality of queries; determining to flush the first memory buffer; and in response to the determination, transmitting all fragments currently stored in the first memory buffer to the client with an identifier of the first one of the plurality of queries, before receiving all fragments of the first result set. 9. A method according to claim 8, further comprising:
determining to flush the second memory buffer; and in response to the determination, transmitting all fragments currently stored in the second memory buffer to the client with an identifier of the second one of the plurality of queries, before receiving all fragments of the second result set. 10. A method according to claim 8, further comprising:
receiving a second fragment of the first result set of a first one of the plurality of database queries; and storing the second fragment of the first result set in the first memory buffer. 11. A method according to claim 10, further comprising:
determining that all fragments of the first result set have been received; and transmitting all fragments currently stored in the first memory buffer to the client with an identifier of the first one of the plurality of queries and an end flag. 12. A method according to claim 8,
wherein receiving the plurality of queries comprises receiving the plurality of queries in a Hypertext Transfer Protocol request payload. 13. A method according to claim 8, wherein determining to flush the first memory buffer comprises:
determining a current storage capacity of the first memory buffer and an expected fragment size. 14. A non-transitory computer-readable medium storing program code, the program code executable by a processor of a computing system to cause the computing system to:
receive a first fragment of a first result set of a first one of a plurality of queries; store the first fragment of the first result set in a first local buffer associated with the first one of the plurality of queries; receive a first fragment of a second result set of a second one of the plurality of queries; store the first fragment of the second result set in a second local buffer associated with the second one of the plurality of queries; determine to flush the first local buffer; and in response to the determination, transmit all fragments currently stored in the first local buffer to a client from which the plurality of queries was received with an identifier of the first one of the plurality of queries, before receiving all fragments of the first result set. 15. A medium according to claim 14, the program code executable by a processor of a computing system to cause the computing system to:
determine to flush the second local buffer; and in response to the determination, transmit all fragments currently stored in the second local buffer to the client with an identifier of the second one of the plurality of queries, before receiving all fragments of the second result set. 16. A medium according to claim 14, the program code executable by a processor of a computing system to cause the computing system to:
receive a second fragment of the first result set of the first one of the plurality of queries; and store the second fragment of the first result set in the first local buffer. 17. A medium according to claim 16, the program code executable by a processor of a computing system to cause the computing system to:
determine that all fragments of the first result set have been received; and transmit all fragments currently stored in the first local buffer to the client with an identifier of the first one of the plurality of queries and an end flag. 18. A medium according to claim 14, the program code executable by a processor of a computing system to cause the computing system to:
receive the plurality of queries from the client; and instruct a data engine to execute the plurality of queries at least partially contemporaneously. 19. A medium according to claim 18,
wherein reception of the plurality of queries comprises reception of the plurality of queries in a Hypertext Transfer Protocol request payload. 20. A medium according to claim 14, wherein determination to flush the first local buffer comprises:
determination of a current storage capacity of the first local buffer.
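The method of claims 8-13 above amounts to a small buffering protocol: one local buffer per in-flight query, with early flushes sent to the client tagged by the query's identifier. The sketch below is a hypothetical illustration in Python (the class, field, and variable names are mine, not the patent's), using a simple fragment-count threshold as the flush determination.

```python
# Hypothetical sketch of the claimed fragment-buffering scheme: each query
# gets its own local buffer; a flush sends all currently stored fragments to
# the client tagged with the query's identifier, before the result set
# completes.

class FragmentBuffer:
    def __init__(self, query_id, capacity=4):
        self.query_id = query_id
        self.capacity = capacity      # flush threshold ("current storage capacity")
        self.fragments = []

    def add(self, fragment):
        """Store a fragment; return True when a flush should occur."""
        self.fragments.append(fragment)
        return len(self.fragments) >= self.capacity

    def flush(self, end=False):
        """Emit all buffered fragments with the query identifier and end flag."""
        message = {"query": self.query_id, "fragments": self.fragments, "end": end}
        self.fragments = []
        return message

# Two queries executed at least partially contemporaneously; fragments of
# their result sets arrive interleaved from the data engine.
buffers = {"q1": FragmentBuffer("q1", capacity=2), "q2": FragmentBuffer("q2", capacity=2)}
sent = []
for query_id, fragment in [("q1", "a"), ("q2", "x"), ("q1", "b"), ("q2", "y")]:
    if buffers[query_id].add(fragment):
        sent.append(buffers[query_id].flush())

print(sent)
```

A flush determination based on buffer capacity and an expected fragment size, as in claim 13, could replace the simple fragment count used here.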
A method of analyzing graphical user interface (GUI) objects. The method can include dynamically scanning attributes assigned to various GUI objects assigned to a view of a GUI in order to identify attributes associated with each of the GUI objects. For each of the GUI objects, a list of attributes can be generated. A determination can be made as to whether at least one of the GUI objects has a list of attributes that does not correspond to lists of attributes for other GUI objects. When at least one GUI object has a list of attributes that does not correspond to lists of attributes for other GUI objects, an identifier can be output. The identifier can indicate that the GUI object has the list of attributes that does not correspond to the lists of attributes for the other GUI objects.

1-24. (canceled) 25. A computer-implemented method of analyzing graphical user interface objects in a view of a graphical user interface, comprising:
scanning, dynamically, each of the graphical user interface objects to identify attributes respectively associated with each of the graphical user interface objects; generating, for each of the graphical user interface objects, a list of attributes associated therewith; identifying, within the graphical user interface objects, a group of graphical user interface objects having a similar type; identifying, for the group, an attribute found in at least one member of the group and not found in at least one other member of the group; and outputting, to the graphical user interface, an identifier indicating the attribute found in at least one member of the group and not found in at least one other member of the group. 26. The method of claim 25, further comprising
automatically updating a list of attributes for a first graphical user interface object that does not include an attribute that is common among a set of the graphical user interface objects with which the first graphical user interface object is associated. 27. The method of claim 26, wherein
the list of attributes for the first graphical user interface object is updated to include the attribute that is common among the set of the graphical user interface objects with which the first graphical user interface object is associated. 28. The method of claim 25, further comprising
automatically updating a list of attributes for a first graphical user interface object that includes an attribute that is not common among a set of the graphical user interface objects with which the first graphical user interface object is associated. 29. The method of claim 28, wherein
the list of attributes for the first graphical user interface object is updated to remove the attribute that is not common among the set of the graphical user interface objects with which the first graphical user interface object is associated. 30. A computer hardware system for analyzing graphical user interface objects in a view of a graphical user interface, comprising:
a hardware processor configured to initiate the following executable operations:
scanning, dynamically, each of the graphical user interface objects to identify attributes respectively associated with each of the graphical user interface objects;
generating, for each of the graphical user interface objects, a list of attributes associated therewith;
identifying, within the graphical user interface objects, a group of graphical user interface objects having a similar type;
identifying, for the group, an attribute found in at least one member of the group and not found in at least one other member of the group; and
outputting, to the graphical user interface, an identifier indicating the attribute found in at least one member of the group and not found in at least one other member of the group. 31. The system of claim 30, wherein the hardware processor is further configured to initiate
automatically updating a list of attributes for a first graphical user interface object that does not include an attribute that is common among a set of the graphical user interface objects with which the first graphical user interface object is associated. 32. The system of claim 31, wherein
the list of attributes for the first graphical user interface object is updated to include the attribute that is common among the set of the graphical user interface objects with which the first graphical user interface object is associated. 33. The system of claim 30, wherein the hardware processor is further configured to initiate
automatically updating a list of attributes for a first graphical user interface object that includes an attribute that is not common among a set of the graphical user interface objects with which the first graphical user interface object is associated. 34. The system of claim 33, wherein
the list of attributes for the first graphical user interface object is updated to remove the attribute that is not common among the set of the graphical user interface objects with which the first graphical user interface object is associated. 35. A computer program product, comprising
a hardware storage device having stored therein computer readable program code, the computer readable program code, which when executed by a computer hardware system, causes the computer hardware system to perform:
scanning, dynamically, each of the graphical user interface objects to identify attributes respectively associated with each of the graphical user interface objects;
generating, for each of the graphical user interface objects, a list of attributes associated therewith;
identifying, within the graphical user interface objects, a group of graphical user interface objects having a similar type;
identifying, for the group, an attribute found in at least one member of the group and not found in at least one other member of the group; and
outputting, to the graphical user interface, an identifier indicating the attribute found in at least one member of the group and not found in at least one other member of the group. 36. The computer program product of claim 35, wherein the computer readable program code further causes the computer hardware system to perform
automatically updating a list of attributes for a first graphical user interface object that does not include an attribute that is common among a set of the graphical user interface objects with which the first graphical user interface object is associated. 37. The computer program product of claim 36, wherein
the list of attributes for the first graphical user interface object is updated to include the attribute that is common among the set of the graphical user interface objects with which the first graphical user interface object is associated. 38. The computer program product of claim 35, wherein the computer readable program code further causes the computer hardware system to perform
automatically updating a list of attributes for a first graphical user interface object that includes an attribute that is not common among a set of the graphical user interface objects with which the first graphical user interface object is associated. 39. The computer program product of claim 38, wherein
the list of attributes for the first graphical user interface object is updated to remove the attribute that is not common among the set of the graphical user interface objects with which the first graphical user interface object is associated.
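The consistency check recited in claims 25-39 above can be sketched in a few lines: group the objects by type, then report any attribute found in at least one member of a group but missing from at least one other member. This is a hypothetical Python illustration; the function name and record layout are assumptions, not from the claims.

```python
# Hypothetical sketch of the claimed analysis: group GUI objects by type,
# then flag any attribute that appears in at least one member of a group
# but is missing from at least one other member of that group.

from collections import defaultdict

def inconsistent_attributes(objects):
    groups = defaultdict(list)
    for obj in objects:                        # "group ... having a similar type"
        groups[obj["type"]].append(set(obj["attributes"]))
    report = {}
    for type_name, attr_sets in groups.items():
        union = set().union(*attr_sets)        # found in at least one member
        common = set.intersection(*attr_sets)  # found in every member
        if union - common:
            report[type_name] = sorted(union - common)
    return report

widgets = [
    {"type": "button", "attributes": ["id", "label", "tooltip"]},
    {"type": "button", "attributes": ["id", "label"]},        # missing "tooltip"
    {"type": "slider", "attributes": ["id", "min", "max"]},
]
print(inconsistent_attributes(widgets))   # {'button': ['tooltip']}
```

The reported attributes could drive the automatic list updates of claims 26-29: adding a common attribute that a member lacks, or removing an uncommon one.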
A storage system includes a storage device and a host device. The storage device includes a nonvolatile memory device having a first size and a first volatile memory device having a second size smaller than the first size and configured to operate as a cache memory with respect to the nonvolatile memory device. The first volatile memory device is configured to allow a first bus portion access to cache data stored in the first volatile memory device. The host device is configured to generate a cache table corresponding to information in the cache data stored in the first volatile memory device and configured to read the cache data stored in the first volatile memory device via the first bus portion based on the cache table.

1. A storage system comprising:
a storage device including, a nonvolatile memory device having a first size, a first volatile memory device having a second size smaller than the first size, the first volatile memory device configured to operate as a cache memory with respect to the nonvolatile memory device, and allow a first bus portion access to cache data stored in the first volatile memory device; and a host device configured to, generate a cache table corresponding to information on the cache data stored in the first volatile memory device, and read the cache data stored in the first volatile memory device via the first bus portion based on the cache table. 2. The storage system of claim 1, wherein the host device includes,
a processor configured to generate a read command for reading data stored in the nonvolatile memory device using a virtual address space; a second volatile memory device configured to store the cache table in which a first address corresponding to the cache data stored in the first volatile memory device in the first bus portion is mapped to a second address corresponding to the cache data in the virtual address space; and a cache controller configured to implement the virtual address space and provide the virtual address space to the processor, convert a virtual address in the virtual address space to a physical address of the first volatile memory device using the cache table, the virtual address being included in the read command, and read a copy of data corresponding to the virtual address and stored in the nonvolatile memory device from the first volatile memory device, via the first bus portion, using the physical address. 3. The storage system of claim 2, wherein, when the virtual address included in the read command from the processor exists in the cache table, the cache controller is configured to read the physical address corresponding to the virtual address from the cache table and to perform a read operation with respect to a region corresponding to the physical address in the first volatile memory device. 4. The storage system of claim 2, wherein, when the virtual address included in the read command from the processor is not included in the cache table, the cache controller is configured to provide a load request signal to the storage device and the storage device is configured to load data corresponding to the virtual address from the nonvolatile memory device to the first volatile memory device as the cache data in response to the load request signal. 5. 
The storage system of claim 4, wherein the cache controller is configured to store, in the cache table, the virtual address and the physical address to which the cache data are loaded such that the virtual address is mapped to the physical address, and
perform a read operation with respect to a region corresponding to the physical address in the first volatile memory device. 6. The storage system of claim 2, wherein the storage device further includes:
a storage controller configured to receive a command signal from the cache controller and configured to perform, based on the command signal, at least one of a load operation to load data stored in the nonvolatile memory device to the first volatile memory device as the cache data, a clean operation to store the cache data stored in the first volatile memory device to the nonvolatile memory device, and an invalidating operation to delete the cache data stored in the first volatile memory device. 7. The storage system of claim 6, wherein the cache controller is configured to store the command signal in a predetermined region of the first volatile memory device using the first bus portion, and the storage controller is configured to perform at least one of the load operation, the clean operation and the invalidating operation based on the command signal stored in the predetermined region of the first volatile memory device. 8. The storage system of claim 6, wherein the storage device is configured to allow a second bus portion access to data stored in the nonvolatile memory device. 9. The storage system of claim 8, wherein the cache controller is configured to provide the command signal to the storage controller via the second bus portion, and the storage controller is configured to perform at least one of the load operation, the clean operation and the invalidating operation based on the command signal provided via the second bus portion from the cache controller. 10. The storage system of claim 6, wherein the cache controller is configured to copy the cache table from the second volatile memory device to a region of the first volatile memory device via the first bus portion. 11. 
The storage system of claim 10, wherein the storage controller is configured to backup the cache table from the region of the first volatile memory device to the nonvolatile memory device if the storage device is powered off and restore the cache table from the nonvolatile memory device to a predetermined region of the first volatile memory device if the storage device is powered on, and
the cache controller is configured to copy the cache table via the first bus portion from the predetermined region of the first volatile memory device to the second volatile memory device when the storage device is powered on. 12. The storage system of claim 1, wherein the first volatile memory device corresponds to a dynamic random access memory (DRAM) device and the nonvolatile memory device corresponds to a flash memory device. 13. The storage system of claim 1, wherein the host device and the first volatile memory device are configured to communicate through a Peripheral Component Interconnect Express (PCIe) bus, the first bus portion being a portion of the PCIe bus. 14. A storage device comprising:
a nonvolatile memory device having a first size; and a first volatile memory device having a second size smaller than the first size and operating as a cache memory with respect to the nonvolatile memory device, wherein the storage device implements a first interface to allow a first bus portion access to cache data stored in the first volatile memory device and a host device reads the cache data stored in the first volatile memory device via the first bus portion based on a cache table corresponding to information on the cache data stored in the first volatile memory device. 15. The storage device of claim 14, wherein the cache data stored in the first volatile memory device is read by:
generating the cache table in which a first address corresponding to the cache data stored in the first volatile memory device in the first bus portion is mapped to a second address corresponding to the cache data in a virtual address space; generating a read command for reading data stored in the nonvolatile memory device using the virtual address space; converting a virtual address in the virtual address space to a physical address in the first bus portion using the cache table, the virtual address being included in the read command; and reading, from the first volatile memory device using the physical address, a copy of data corresponding to the virtual address and stored in the nonvolatile memory device. 16. The storage system of claim 1, wherein
the first bus portion has a width, the width being evenly divisible into one or more bytes. 17. The storage system of claim 16, wherein
the width is one byte. 18. A storage device comprising:
a nonvolatile memory device having a first size; a volatile memory device having a second size smaller than the first size, the volatile memory device configured to store cache data for the nonvolatile memory device and to transmit data to a host in response to a request from the host; and a storage controller configured to perform one of the following based on a command from the host,
load data stored in the nonvolatile memory device to the volatile memory device,
transfer data stored in the volatile memory device to the nonvolatile memory device, or
delete the data stored in the volatile memory device. 19. The storage device of claim 18 wherein the nonvolatile memory device is configured to receive the command from the host, store the command, and transfer the command to the storage controller. 20. The storage device of claim 18 wherein the nonvolatile memory device is configured to receive a cache table from the host and transfer the cache table to the nonvolatile memory device via the storage controller. | A storage system includes a storage device and a host device. The storage device includes a nonvolatile memory device having a first size and a first volatile memory device having a second size smaller than the first size and configured to operate as a cache memory with respect to the nonvolatile memory device. The first volatile memory device is configured to allow a first bus portion access to cache data stored in the first volatile memory device. The host device is configured to generate a cache table corresponding to information in the cache data stored in the first volatile memory device and configured to read the cache data stored in the first volatile memory device via the first bus portion based on the cache table. 1. A storage system comprising:
a storage device including, a nonvolatile memory device having a first size, a first volatile memory device having a second size smaller than the first size, the first volatile memory device configured to operate as a cache memory with respect to the nonvolatile memory device, and allow a first bus portion access to cache data stored in the first volatile memory device; and a host device configured to, generate a cache table corresponding to information on the cache data stored in the first volatile memory device, and read the cache data stored in the first volatile memory device via the first bus portion based on the cache table. 2. The storage system of claim 1, wherein the host device includes,
a processor configured to generate a read command for reading data stored in the nonvolatile memory device using a virtual address space; a second volatile memory device configured to store the cache table in which a first address corresponding to the cache data stored in the first volatile memory device in the first bus portion is mapped to a second address corresponding to the cache data in the virtual address space; and a cache controller configured to implement the virtual address space and provide the virtual address space to the processor, convert a virtual address in the virtual address space to a physical address of the first volatile memory device using the cache table, the virtual address being included in the read command, and read a copy of data corresponding to the virtual address and stored in the nonvolatile memory device from the first volatile memory device, via the first bus portion, using the physical address. 3. The storage system of claim 2, wherein, when the virtual address included in the read command from the processor exists in the cache table, the cache controller is configured to read the physical address corresponding to the virtual address from the cache table and to perform a read operation with respect to a region corresponding to the physical address in the first volatile memory device. 4. The storage system of claim 2, wherein, when the virtual address included in the read command from the processor is not included in the cache table, the cache controller is configured to provide a load request signal to the storage device and the storage device is configured to load data corresponding to the virtual address from the nonvolatile memory device to the first volatile memory device as the cache data in response to the load request signal. | 2,100
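The read path recited in the storage-system claims above amounts to a cache-table lookup: the controller translates a virtual address to a physical address in the small volatile cache, serves hits from the cache region, and on a miss first loads the data from the nonvolatile device. A minimal sketch of that flow, where the class, field names, and the sequential physical-address allocation are all illustrative assumptions rather than the claimed hardware design:

```python
# Hypothetical sketch of the claimed read path: a cache controller keeps a
# table mapping virtual addresses to physical addresses in a small volatile
# cache; on a miss, data is loaded from the larger nonvolatile store first.
class CacheController:
    def __init__(self, nonvolatile):
        self.nonvolatile = nonvolatile      # virtual address -> data
        self.volatile = {}                  # physical address -> cached data
        self.cache_table = {}               # virtual address -> physical address
        self.next_physical = 0

    def _load(self, virtual):
        # Load operation: copy data from nonvolatile memory into the cache
        # and record the virtual -> physical mapping in the cache table.
        physical = self.next_physical
        self.next_physical += 1
        self.volatile[physical] = self.nonvolatile[virtual]
        self.cache_table[virtual] = physical
        return physical

    def read(self, virtual):
        # Hit: translate via the cache table and read the cache region.
        # Miss: issue a load request, then read as on a hit.
        physical = self.cache_table.get(virtual)
        if physical is None:
            physical = self._load(virtual)
        return self.volatile[physical]

ctrl = CacheController({0x10: b"alpha", 0x20: b"beta"})
assert ctrl.read(0x10) == b"alpha"      # miss: loaded, then read
assert 0x10 in ctrl.cache_table         # mapping stored on load
assert ctrl.read(0x10) == b"alpha"      # hit: served via the cache table
```

The clean and invalidate operations of claim 6 would correspond to writing a `volatile` entry back into `nonvolatile` and deleting it from both maps, respectively.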
6,630 | 6,630 | 15,281,279 | 2,169 | Examples disclosed herein relate to relocation of an analytical process based on lineage metadata. In an example, a determination may be made, based on lineage metadata on a hub device, whether relocating an analytical process from the hub device to a remote edge device reduces execution time of the analytical process, wherein the analytical process is part of an analytical workflow that is implemented at least in part on the hub device and the remote edge device. In response to a determination that relocating the analytical process from the hub device to the remote edge device reduces the execution time of the analytical process, the analytical process may be relocated from the hub device to the remote edge device. | 1. A method comprising:
determining, based on lineage metadata on a hub device, whether relocating an analytical process from the hub device to a remote edge device reduces execution time of the analytical process, wherein the analytical process is part of an analytical workflow that is implemented at least in part on the hub device and the remote edge device, and wherein the lineage metadata comprises data associated with input data provided to the analytical process, data associated with output data generated by the analytical process, and data identifying the analytical process used to process the input data to generate the output data, and data related to the analytical workflow; and in response to a determination that relocating the analytical process from the hub device to the remote edge device reduces the execution time of the analytical process, relocating the analytical process from the hub device to the remote edge device. 2. The method of claim 1, wherein the data related to the analytical workflow includes a data flow rate between the hub device and the remote edge device. 3. The method of claim 1, wherein the data related to the analytical workflow includes a data flow rate between a storage component and a processing component of the hub device. 4. The method of claim 1, wherein the data related to the analytical workflow includes a data flow rate between a storage component and a processing component of the remote edge device. 5. The method of claim 1, wherein the data related to the analytical workflow includes processing resources available on the hub device. 6. A device comprising:
a data flow analytics engine to: determine, based on lineage metadata on the device, whether relocating an analytical process from the device to a remote storage device reduces execution time of the analytical process, wherein the analytical process is part of an analytical workflow that is implemented at least in part on the device, and wherein the lineage metadata comprises data associated with input data provided to the analytical process, data associated with output data generated by the analytical process, and data identifying the analytical process used to process the input data to generate the output data, and data related to the analytical workflow; and in response to a determination that relocating the analytical process from the device to the remote storage device reduces the execution time of the analytical workflow, relocate the analytical process from the device to the remote storage device. 7. The device of claim 6, wherein the data flow analytics engine to include the data related to the analytical workflow to the lineage metadata on the device. 8. The device of claim 6, wherein the data related to the analytical workflow includes a frequency of data exchanged between the device and the remote edge device for execution of the analytical process, and wherein the data flow analytics engine to: identify, based on the frequency of data exchanged between the device and the remote edge device, seldom used data for execution of the analytical process; and avoid exchange of the seldom used data between the device and the remote edge device. 9. 
The device of claim 6, wherein the data related to the analytical workflow includes recency of data exchanged between the device and the remote edge device for execution of the analytical process, and wherein the data flow analytics engine to: identify, based on the recency of data exchanged between the device and the remote edge device, seldom used data for execution of the analytical process; and avoid exchange of the seldom used data between the device and the remote edge device. 10. The device of claim 6, wherein the device is one of an edge device and a hub device. 11. A non-transitory machine-readable storage medium comprising instructions, the instructions executable by a processor to:
determine, based on lineage metadata on an edge device, whether relocating an analytical process from the edge device to a remote hub device reduces execution time of an analytical workflow, wherein the analytical process is part of the analytical workflow that is implemented at least in part on the edge device and the remote hub device, and wherein the lineage metadata comprises data associated with input data provided to the analytical process, data associated with output data generated by the analytical process, and data identifying the analytical process used to process the input data to generate the output data, and data related to the analytical workflow; and in response to a determination that relocating the analytical process from the edge device to the remote hub device reduces the execution time of the analytical workflow, relocate the analytical process from the edge device to the remote hub device. 12. The storage medium of claim 11, wherein the data related to the analytical workflow includes processing resources available on the remote hub device. 13. The storage medium of claim 11, wherein the data related to the analytical workflow includes amount of data transferred between analytical processes in the analytical workflow. 14. The storage medium of claim 11, wherein the data related to the analytical workflow includes a processor time used by each analytical process in the analytical workflow. 15. The storage medium of claim 11, wherein the instructions to determine include instructions to:
determine, based on time data on the edge device, whether relocating the analytical process from the edge device to the remote hub device reduces execution time of the analytical workflow, wherein the time data is related to at least one of execution time of the analytical process and the execution time of the analytical workflow. | 2,100
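The relocation decision in the claims above compares estimated execution time with the process on the hub versus on the edge, using lineage metadata such as input size and data flow rates. A minimal sketch of such a comparison, where the field names and the simple transfer-plus-compute cost model are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch of the claimed relocation decision: estimate the
# execution time of an analytical process on the hub versus on the edge
# from lineage metadata (input size, per-byte CPU cost, link data rate),
# and relocate only when the edge estimate is lower.
def estimated_time(input_bytes, cpu_s_per_byte, transfer_rate_bps):
    # move the input over the given path, then process it locally
    return input_bytes * 8 / transfer_rate_bps + input_bytes * cpu_s_per_byte

def should_relocate_to_edge(meta):
    on_hub = estimated_time(meta["input_bytes"], meta["hub_cpu_s_per_byte"],
                            meta["edge_to_hub_bps"])
    on_edge = estimated_time(meta["input_bytes"], meta["edge_cpu_s_per_byte"],
                             meta["edge_local_bps"])
    return on_edge < on_hub

meta = {
    "input_bytes": 10_000_000,        # data produced at the edge
    "edge_to_hub_bps": 10_000_000,    # slow uplink to the hub
    "edge_local_bps": 1_000_000_000,  # fast local storage-to-CPU path
    "hub_cpu_s_per_byte": 1e-9,       # hub CPU is faster per byte...
    "edge_cpu_s_per_byte": 1e-8,      # ...than the edge CPU
}
# The slow uplink dominates, so the edge wins despite its slower CPU.
assert should_relocate_to_edge(meta)
```

With a fast uplink the comparison flips, which matches the claims' framing: the decision is data-driven per workflow rather than fixed per device.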
6,631 | 6,631 | 15,652,003 | 2,159 | A method and system for maintaining a density-based geocode tree for a geographic area, including obtaining a geocode tree including multiple leaf nodes each having a geohash value corresponding to a subdivision of the geographic area, obtaining multiple positions within the geographic area, generating, using the multiple positions, multiple geohashes, mapping a first subset of the multiple geohashes to a first leaf node of the multiple leaf nodes based on the geohash value of the first leaf node, incrementing, by a cardinality of the first subset, a first counter value for the first leaf node, and grafting, in response to the first counter value exceeding a first density threshold, at least one child node onto the first leaf node. | 1-20. (canceled) 21. A method for maintaining a density-based geocode tree for a geographic area, the method comprising:
obtaining a geocode tree comprising a plurality of leaf nodes each having a geohash value corresponding to a subdivision of the geographic area; obtaining, from embedded positions within messages on a social media platform, a plurality of positions within the geographic area; generating, using the plurality of positions, a plurality of geohashes; mapping a first subset of the plurality of geohashes to a first leaf node of the plurality of leaf nodes based on the respective geohash value of the first leaf node; incrementing, by a cardinality of the first subset, a first counter value for the first leaf node, the first counter value comprising a count of user posts on the social media platform associated with the respective geohash value of the first leaf node; and pruning, in response to a first density threshold exceeding the first counter value for the first leaf node, the first leaf node and one or more siblings of the first leaf node from the geocode tree. 22. The method of claim 21, further comprising:
mapping a second subset of the plurality of geohashes to a second leaf node of the plurality of leaf nodes based on the respective geohash value of the second leaf node; incrementing, by a cardinality of the second subset, a second counter value for the second leaf node, the second counter value comprising a second count of user posts on the social media platform associated with the respective geohash value of the second leaf node; and grafting, in response to the second counter value exceeding a second density threshold, at least one child node onto the second leaf node. 23. The method of claim 21, wherein the geocode tree is of a type selected from a group consisting of a perfect tree, a complete tree, and a balanced tree. 24. (canceled) 25. The method of claim 21, wherein the plurality of positions comprise global positions of mobile devices. 26. The method of claim 21, further comprising:
receiving, from a client device with a global positioning system, a reverse geocode lookup request comprising a new global position;
generating a new geohash using the new global position;
mapping the new geohash to a leaf node of the plurality of leaf nodes of the geocode tree based on the geohash value of the leaf node; and
returning an identifier of a subdivision of a geographic area corresponding to the geohash value of the leaf node. 27. The method of claim 26, wherein the identifier is of a type selected from a group consisting of a map tile, an address range, a zip code, and a place name. 28. The method of claim 26, wherein the reverse geocode lookup request comprises a first reverse geocode lookup request, wherein the method further comprises:
receiving, from a second client device with a respective global positioning system, a second reverse geocode lookup request comprising a second new global position different than the new global position; generating a second new geohash using the second new global position, wherein the second new geohash has a length different than a length for the new geohash; mapping the second new geohash to a second leaf node of the plurality of leaf nodes of the geocode tree based on a geohash value of the second leaf node; and returning a second identifier of a second subdivision of a geographic area corresponding to the geohash value of the second leaf node. 29. The method of claim 21, wherein the second leaf node and the one or more siblings of the second leaf node are associated with contiguous subdivisions of the geographic area. 30. A system for maintaining a density-based geocode tree, the system comprising:
a computer processor; and a memory configured to store instructions that are executable by the computer processor to:
A method and system for maintaining a density-based geocode tree for a geographic area, including obtaining a geocode tree including multiple leaf nodes each having a geohash value corresponding to a subdivision of the geographic area, obtaining multiple positions within the geographic area, generating, using the multiple positions, multiple geohashes, mapping a first subset of the multiple geohashes to a first leaf node of the multiple leaf nodes based on the geohash value of the first leaf node, incrementing, by a cardinality of the first subset, a first counter value for the first leaf node, and grafting, in response to the first counter value exceeding a first density threshold, at least one child node onto the first leaf node. 1-20. (canceled) 21. A method for maintaining a density-based geocode tree for a geographic area, the method comprising:
obtaining a geocode tree comprising a plurality of leaf nodes each having a geohash value corresponding to a subdivision of the geographic area; obtaining, from embedded positions within messages on a social media platform, a plurality of positions within the geographic area; generating, using the plurality of positions, a plurality of geohashes; mapping a first subset of the plurality of geohashes to a first leaf node of the plurality of leaf nodes based on the respective geohash value of the first leaf node; incrementing, by a cardinality of the first subset, a first counter value for the first leaf node, the first counter value comprising a count of user posts on the social media platform associated with the respective geohash value of the first leaf node; and pruning, in response to a first density threshold exceeding the first counter value for the first leaf node, the first leaf node and one or more siblings of the first leaf node from the geocode tree. 22. The method of claim 21, further comprising:
mapping a second subset of the plurality of geohashes to a second leaf node of the plurality of leaf nodes based on the respective geohash value of the second leaf node; incrementing, by a cardinality of the second subset, a second counter value for the second leaf node, the second counter value comprising a second count of user posts on the social media platform associated with the respective geohash value of the second leaf node; and grafting, in response to the second counter value exceeding a second density threshold, at least one child node onto the second leaf node. 23. The method of claim 21, wherein the geocode tree is of a type selected from a group consisting of a perfect tree, a complete tree, and a balanced tree. 24. (canceled) 25. The method of claim 21, wherein the plurality of positions comprise global positions of mobile devices. 26. The method of claim 21, further comprising:
receiving, from a client device with a global positioning system, a reverse geocode lookup request comprising a new global position;
generating a new geohash using the new global position;
mapping the new geohash to a leaf node of the plurality of leaf nodes of the geocode tree based on the geohash value of the leaf node; and
returning an identifier of a subdivision of a geographic area corresponding to the geohash value of the leaf node. 27. The method of claim 26, wherein the identifier is of a type selected from a group consisting of a map tile, an address range, a zip code, and a place name. 28. The method of claim 26, wherein the reverse geocode lookup request comprises a first reverse geocode lookup request, wherein the method further comprises:
receiving, from a second client device with a respective global positioning system, a second reverse geocode lookup request comprising a second new global position different than the new global position; generating a second new geohash using the second new global position, wherein the second new geohash has a length different than a length for the new geohash; mapping the second new geohash to a second leaf node of the plurality of leaf nodes of the geocode tree based on a geohash value of the second leaf node; and returning a second identifier of a second subdivision of a geographic area corresponding to the geohash value of the second leaf node. 29. The method of claim 21, wherein the second leaf node and the one or more siblings of the second leaf node are associated with contiguous subdivisions of the geographic area. 30. A system for maintaining a density-based geocode tree, the system comprising:
a computer processor; and a memory configured to store instructions that are executable by the computer processor to:
obtain a geocode tree comprising a plurality of leaf nodes each having a geohash value corresponding to a subdivision of the geographic area;
obtain, from embedded positions within messages on a social media platform, a plurality of positions within the geographic area;
generate, using the plurality of positions, a plurality of geohashes;
map a first subset of the plurality of geohashes to a first leaf node of the plurality of leaf nodes based on the respective geohash value of the first leaf node;
increment, by a cardinality of the first subset, a first counter value for the first leaf node, the first counter value comprising a count of user posts on the social media platform associated with the respective geohash value of the first leaf node; and
prune, in response to a first density threshold exceeding the first counter value for the first leaf node, the first leaf node and one or more siblings of the first leaf node from the geocode tree. 31. The system of claim 30, wherein the instructions are further executable by the computer processor to:
map a second subset of the plurality of geohashes to a second leaf node of the plurality of leaf nodes based on the respective geohash value of the second leaf node; increment, by a cardinality of the second subset, a second counter value for the second leaf node, the second counter value comprising a second count of user posts on the social media platform associated with the respective geohash value of the second leaf node; and graft, in response to the second counter value exceeding a second density threshold, at least one child node onto the second leaf node. 32. The system of claim 30, wherein the geocode tree is of a type selected from a group consisting of a perfect tree, a complete tree, and a balanced tree. 33. (canceled) 34. The system of claim 30, wherein the plurality of positions comprise global positions of mobile devices. 35. The system of claim 30, wherein the instructions are further executable by the computer processor to:
receive, from a client device with a global positioning system, a reverse geocode lookup request comprising a new global position; generate a new geohash using the new global position; map the new geohash to a leaf node of the plurality of leaf nodes of the geocode tree based on the geohash value of the leaf node; and return an identifier of a subdivision of a geographic area corresponding to the geohash value of the leaf node. 36. The system of claim 35, wherein the identifier is of a type selected from a group consisting of a map tile, an address range, a zip code, and a place name. 37. The system of claim 30, wherein the second leaf node and the one or more siblings of the second leaf node are associated with contiguous subdivisions of the geographic area. 38. A non-transitory computer readable medium storing instructions for maintaining a density-based geocode tree, the instructions comprising functionality to:
obtain a geocode tree comprising a plurality of leaf nodes each having a geohash value corresponding to a subdivision of the geographic area; obtain, from embedded positions within messages on a social media platform, a plurality of positions within the geographic area; generate, using the plurality of positions, a plurality of geohashes; map a first subset of the plurality of geohashes to a first leaf node of the plurality of leaf nodes based on the respective geohash value of the first leaf node; increment, by a cardinality of the first subset, a first counter value for the first leaf node, the first counter value comprising a count of user posts on the social media platform associated with the respective geohash value of the first leaf node; and prune, in response to a first density threshold exceeding the first counter value for the first leaf node, the first leaf node and one or more siblings of the first leaf node from the geocode tree. 39. The non-transitory computer readable medium of claim 38, wherein the instructions further comprise functionality to:
map a second subset of the plurality of geohashes to a second leaf node of the plurality of leaf nodes based on the respective geohash value of the second leaf node; increment, by a cardinality of the second subset, a second counter value for the second leaf node, the second counter value comprising a second count of user posts on the social media platform associated with the respective geohash value of the second leaf node; and graft, in response to the second counter value exceeding a second density threshold, at least one child node onto the second leaf node. 40. The non-transitory computer readable medium of claim 38, wherein the instructions further comprise functionality to:
receive, from a client device with a global positioning system, a reverse geocode lookup request comprising a new global position; generate a new geohash using the new global position; map the new geohash to a leaf node of the plurality of leaf nodes of the geocode tree based on a geohash value of the leaf node; and return an identifier of a subdivision of a geographic area corresponding to the geohash value of the leaf node, wherein the identifier is of a type selected from a group consisting of a map tile, an address range, a zip code, and a place name. 41. The method of claim 21, wherein obtaining the plurality of positions within the geographic area comprises obtaining the plurality of positions from messages on the social media platform transmitted during a predetermined time interval. 42. The method of claim 21, wherein obtaining the plurality of positions within the geographic area comprises obtaining the plurality of positions that are located within a predetermined time zone. 43. The method of claim 21, wherein pruning the first leaf node and the one or more siblings of the first leaf node from the geocode tree comprises pruning every sibling of the first leaf node from the geocode tree. 44. The system of claim 30, wherein the geocode tree includes a first set of nodes for a first subdivision of the geographic area and a second set of nodes for a second subdivision of the geographic area, the first set of nodes having a greater number of levels than the second set of nodes, wherein a first geohash of a first leaf node of the first set of nodes is more precise than a second geohash of a second leaf node of the second set of nodes. 45. 
The method of claim 22, wherein said grafting, in response to the second counter value exceeding a second density threshold, comprises grafting a plurality of child nodes onto the second leaf node, wherein the plurality of child nodes comprises a respective child node for each of a plurality of subdivisions of the geographic area to which the second leaf node corresponds.
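The density-based maintenance recited in claims 21, 22, and 26 can be illustrated with a small sketch. The class below is a hypothetical illustration, not the application's implementation: leaf nodes are geohash prefixes, each incoming position geohash is mapped to the leaf whose value prefixes it, per-leaf counters track post density, sparse leaves are pruned when the density threshold exceeds their counter, and dense leaves are grafted with finer-grained children. For simplicity it prunes leaves individually rather than in whole sibling groups, and the child alphabet assumed here is the standard geohash base-32 character set.

```python
class GeocodeTree:
    # Standard geohash base-32 alphabet, assumed for child subdivisions.
    ALPHABET = "0123456789bcdefghjkmnpqrstuvwxyz"

    def __init__(self, leaves):
        # One counter per leaf geohash value (claims 21 and 22).
        self.counts = {leaf: 0 for leaf in leaves}

    def map_and_count(self, geohashes):
        # Map each geohash to the leaf whose value is a prefix of it and
        # increment that leaf's counter by the cardinality of the subset.
        for gh in geohashes:
            for leaf in self.counts:
                if gh.startswith(leaf):
                    self.counts[leaf] += 1
                    break

    def prune(self, density_threshold):
        # Claim 21: drop leaves whose counter the density threshold exceeds.
        for leaf in [l for l, c in self.counts.items() if c < density_threshold]:
            del self.counts[leaf]

    def graft(self, density_threshold):
        # Claim 22: split each dense leaf into one child per subdivision.
        for leaf in [l for l, c in self.counts.items() if c >= density_threshold]:
            del self.counts[leaf]
            for ch in self.ALPHABET:
                self.counts[leaf + ch] = 0

    def reverse_lookup(self, geohash):
        # Claim 26: return the leaf covering a position's geohash, standing
        # in for the subdivision identifier (map tile, zip code, and so on).
        for leaf in self.counts:
            if geohash.startswith(leaf):
                return leaf
        return None
```

For example, with leaves `dr5` and `dr7` and posts geohashed to `dr5ru`, `dr5rv`, and `dr7aa`, pruning with a threshold of 2 removes the sparse `dr7` leaf while `dr5` survives, and a subsequent graft replaces `dr5` with 32 finer-grained children.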
Application No. 15/081,153 (Art Unit 2153)

Personalized search results are provided to a user by sending to the user a direct marketing email having associated therewith one or more opt-out elements. Information related to the user's interaction with the one or more opt-out elements is maintained in a database. When a search request is thereafter submitted to a search engine by the user, the information related to the user's interaction with the one or more opt-out elements is used to inhibit inclusion within the search results of one or more items. In addition or alternatively, when the user accesses a search interface having a plurality of search options selectable to provide a search request to the search engine, the information related to the user's interaction with the one or more opt-out elements is used to inhibit inclusion within the search interface of one or more of the plurality of user selectable search options.

1. A method for providing personalized search results, the method comprising:
storing within a database for each of a plurality of users information related to an interaction with a one or more opt-out elements associated with a direct marketing email electronically sent to each of the plurality of users wherein the stored information is cross-referenced to a unique user identifier for each of the plurality of users; receiving at a server device from a user device associated with a one of the plurality of users a search request for searching an electronic catalog; using by the server device a unique user identifier for the one of the plurality of users to locate within the database information related to interaction by the one of the plurality of users with a one or more opt-out elements associated with a direct marketing email; providing by the server device to a search engine associated with the electronic catalog the search request received from the one of the plurality of users and information related to interaction by the one of the plurality of users with a one or more opt-out elements associated with a direct marketing email as located within the database; using by the search engine the provided search request to generate an electronic search result wherein the electronic search result comprises a plurality of items within the electronic catalog and the provided information related to interaction by the one of the plurality of users with a one or more opt-out elements associated with a direct marketing email as located within the database to personalize the generated electronic search result wherein one or more items from the plurality of items within the electronic catalog are removed from the generated electronic search result; and causing the server device to electronically transmit the personalized search result to the user device for display on a display associated with the user device as a response to the received search request. 2. 
The method as recited in claim 1, wherein information related to interaction by the one of the plurality of users with a one or more opt-out elements associated with a direct marketing email as located within the database comprises information indicative of the one of the plurality of users' desire to not receive any direct marketing emails for a particular category or subcategory of product. 3. The method as recited in claim 1, wherein the direct marketing email comprises a promotional offer for a product within a particular category or subcategory of product. 4. The method as recited in claim 1, wherein the search engine further uses information related to interaction by the one of the plurality of users with a one or more opt-out elements associated with a direct marketing email as located within the database to promote within the generated search results one or more items. 5. The method as recited in claim 1, wherein, in response to the one of the plurality of users accessing a search interface having a plurality of search options selectable to provide a search request to a search engine, the server device uses the unique user identifier for the one of the plurality of users to locate within the database information related to interaction by the one of the plurality of users with a one or more opt-out elements associated with a direct marketing email and uses information related to interaction by the one of the plurality of users with a one or more opt-out elements associated with a direct marketing email as located within the database to inhibit inclusion within the search interface of one or more of the plurality of user selectable search options. 6. The method as recited in claim 1, wherein the unique user identifier for the one of the plurality of users comprises a purchasing account number that is associated with the one of the plurality of users. 7. 
The method as recited in claim 1, wherein the unique identifier for the one of the plurality of users comprises an email address that is associated with the one of the plurality of users. 8. The method as recited in claim 1, wherein the unique identifier for the one of the plurality of users comprises an identifier extracted from a cookie placed on the user device.
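The filtering step of claim 1 can be sketched in a few lines. The data shapes below are illustrative assumptions, since the claims do not prescribe a schema: opt-out interactions are stored per unique user identifier (an email address, per claim 7) as a set of category names, and items in those categories are removed from the generated search result before it is returned.

```python
# Hypothetical in-memory stand-in for the opt-out database of claim 1,
# keyed by a unique user identifier such as an email address (claim 7).
OPT_OUTS = {
    "user@example.com": {"shoes"},
}

def personalize_results(user_id, results):
    """Remove catalog items whose category the user has opted out of."""
    opted_out = OPT_OUTS.get(user_id, set())
    return [item for item in results if item["category"] not in opted_out]
```

A user who opted out of "shoes" direct marketing emails would thus see shoe items removed from the search result, while a user with no recorded opt-out interactions receives the unfiltered result.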
Application No. 16/256,676 (Art Unit 2136)

An example method of managing memory in a computer system implementing non-uniform memory access (NUMA) by a plurality of sockets each having a processor component and a memory component is described. The method includes replicating page tables for an application executing on a first socket of the plurality of sockets across each of the plurality of sockets; associating metadata for pages of the memory storing the replicated page tables in each of the plurality of sockets; and updating the replicated page tables using the metadata to locate the pages of the memory that store the replicated page tables.

1. A method of managing memory in a computer system implementing non-uniform memory access (NUMA) by a plurality of sockets each having a processor component and a memory component, the method comprising:
replicating page tables for an application executing on a first socket of the plurality of sockets across each of the plurality of sockets; associating metadata for pages of the memory storing the replicated page tables in each of the plurality of sockets; and updating the replicated page tables using the metadata to locate the pages of the memory that store the replicated page tables. 2. The method of claim 1, wherein the step of associating comprises:
forming a circular linked list of metadata structures, each metadata structure associated with a page of the memory storing data of a respective replicated page table and including a pointer to a next metadata structure. 3. The method of claim 2, wherein the step of updating comprises:
storing updates to the replicated page tables in a shared log; and applying the updates to the replicated page tables from the shared log. 4. The method of claim 2, wherein the step of updating comprises:
applying changes to a first set of replicated page tables based on a shared log; writing an update for the replicated page tables to the shared log; and applying the update to the first set of replicated page tables. 5. The method of claim 1, wherein the computer system is a virtualized computer system comprising a hypervisor, and wherein the replicated page tables comprises first replicated page tables of a guest operating system executing in a virtual machine managed by the hypervisor, and second replicated page tables of the hypervisor. 6. The method of claim 1, wherein the computer system is a virtualized computer system comprising a hypervisor, wherein a guest operating system executing in a virtual machine managed by the hypervisor includes a guest page table, and wherein the replicated page tables include nested page tables managed by the hypervisor. 7. The method of claim 1, wherein the computer system is a virtualized computer system comprising a hypervisor, wherein a guest operating system executing in a virtual machine managed by the hypervisor includes a guest page table, and wherein the replicated page tables include shadow page tables managed by the hypervisor. 8. A non-transitory computer readable medium comprising instructions, which when executed in a computer system, cause the computer system to carry out a method of managing memory in a computer system implementing non-uniform memory access (NUMA) by a plurality of sockets each having a processor component and a memory component, the method comprising:
replicating page tables for an application executing on a first socket of the plurality of sockets across each of the plurality of sockets; associating metadata for pages of the memory storing the replicated page tables in each of the plurality of sockets; and updating the replicated page tables using the metadata to locate the pages of the memory that store the replicated page tables. 9. The non-transitory computer readable medium of claim 8, wherein the step of associating comprises:
forming a circular linked list of metadata structures, each metadata structure associated with a page of the memory storing data of a respective replicated page table and including a pointer to a next metadata structure. 10. The non-transitory computer readable medium of claim 9, wherein the step of updating comprises:
storing updates to the replicated page tables in a shared log; and applying the updates to the replicated page tables from the shared log. 11. The non-transitory computer readable medium of claim 9, wherein the step of updating comprises:
applying changes to a first set of replicated page tables based on a shared log; writing an update for the replicated page tables to the shared log; and applying the update to the first set of replicated page tables. 12. The non-transitory computer readable medium of claim 8, wherein the computer system is a virtualized computer system comprising a hypervisor, and wherein the replicated page tables comprises first replicated page tables of a guest operating system executing in a virtual machine managed by the hypervisor, and second replicated page tables of the hypervisor. 13. The non-transitory computer readable medium of claim 8, wherein the computer system is a virtualized computer system comprising a hypervisor, wherein a guest operating system executing in a virtual machine managed by the hypervisor includes a guest page table, and wherein the replicated page tables include nested page tables managed by the hypervisor. 14. The non-transitory computer readable medium of claim 8, wherein the computer system is a virtualized computer system comprising a hypervisor, wherein a guest operating system executing in a virtual machine managed by the hypervisor includes a guest page table, and wherein the replicated page tables include shadow page tables managed by the hypervisor. 15. A computer system, comprising:
a plurality of sockets implementing non-uniform memory access (NUMA) each having a processor component and a memory component; software, executing on the plurality of sockets, configured to:
replicate page tables for an application executing on a first socket of the plurality of sockets across each of the plurality of sockets;
associate metadata for pages of the memory storing the replicated page tables in each of the plurality of sockets; and
update the replicated page tables using the metadata to locate the pages of the memory that store the replicated page tables. 16. The computer system of claim 15, wherein the associating comprises:
forming a circular linked list of metadata structures, each metadata structure associated with a page of the memory storing data of a respective replicated page table and including a pointer to a next metadata structure. 17. The computer system of claim 16, wherein the updating comprises:
storing updates to the replicated page tables in a shared log; and applying the updates to the replicated page tables from the shared log. 18. The computer system of claim 16, wherein the updating comprises:
applying changes to a first set of replicated page tables based on a shared log; writing an update for the replicated page tables to the shared log; and applying the update to the first set of replicated page tables. 19. The computer system of claim 15, further comprising a hypervisor, wherein the replicated page tables comprise first replicated page tables of a guest operating system executing in a virtual machine managed by the hypervisor, and second replicated page tables of the hypervisor. 20. The computer system of claim 15, further comprising a hypervisor, wherein a guest operating system executing in a virtual machine managed by the hypervisor includes a guest page table, and wherein the replicated page tables include nested page tables managed by the hypervisor.
An example method of managing memory in a computer system implementing non-uniform memory access (NUMA) by a plurality of sockets each having a processor component and a memory component is described. The method includes replicating page tables for an application executing on a first socket of the plurality of sockets across each of the plurality of sockets; associating metadata for pages of the memory storing the replicated page tables in each of the plurality of sockets; and updating the replicated page tables using the metadata to locate the pages of the memory that store the replicated page tables.
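The metadata ring recited in claims 2, 9, and 16 above (a circular linked list of metadata structures, each associated with one page of a replicated page table and pointing at the next structure) can be sketched as a small Python model. This is illustrative only, not the patented implementation: `PageMeta`, `link_replicas`, and the dict-as-page-table representation are all assumed names and simplifications.

```python
class PageMeta:
    """Metadata for one page of a replicated page table (hypothetical model)."""
    def __init__(self, socket_id, page):
        self.socket_id = socket_id
        self.page = page          # the replica page: virtual -> physical mappings
        self.next = None          # pointer to the next metadata structure

def link_replicas(pages):
    """Form a circular linked list of metadata structures, one per replica page."""
    metas = [PageMeta(i, p) for i, p in enumerate(pages)]
    for a, b in zip(metas, metas[1:] + metas[:1]):
        a.next = b                # last node wraps around to the first
    return metas[0]

def update_all_replicas(head, vaddr, paddr):
    """Use the metadata ring to locate every replica page and apply one update."""
    node = head
    while True:
        node.page[vaddr] = paddr
        node = node.next
        if node is head:          # one full traversal of the ring completed
            break

# Four sockets, each holding its own replica of one page-table page
replicas = [dict() for _ in range(4)]
head = link_replicas(replicas)
update_all_replicas(head, 0x1000, 0x42000)
```

Because the list is circular, an updater starting at any node reaches every replica and knows it is finished when it returns to its starting node, which is how the metadata can locate all pages of memory that store the replicated page tables.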
6,634 | 6,634 | 15,076,287 | 2,144 | Text is intelligently annotated by first creating a topic map summarizing topics of interest of the user. A data structure is created. The topic map is used to create two linked user dictionaries, a topic dictionary reflecting topic names and a traversal dictionary reflecting the knowledge structure of a topic. Actions may be linked with topic types. When the text to be annotated is being read, the topic data structure of the topics found in the text are automatically instantiated using the dictionaries and any actions previously linked to topic types. Instantiated topic data structures are automatically attached to the text being annotated. A user GUI may be created to allow the user to access and interact with the text annotations. | 1-17. (canceled) 18. A computer-implemented method for annotating a text to be read by a user, comprising:
reading the text to identify occurrences of topics of interest associated to the user; creating a topic data structure, for each topic of interest associated to the user and found within the text, using stored topic information; attaching the created topic data structure to the text as an annotation to a corresponding topic found in the text; and creating a graphical user interface for the text that allows the user to access the annotation upon reading the text on a display device. 19. The method of claim 18, wherein
the stored topic information includes a topic identifier and a topic knowledge structure. 20. The method of claim 19, wherein
the topic identifier is stored in a topic dictionary, and the topic knowledge structure is stored in a traversal dictionary. 21. The method of claim 19, wherein
the topic identifier is stored in an FSA-based dictionary. 22. The method of claim 18, further comprising:
creating a topic map comprising the topics of interest associated to the user; linking, after creation of the topic map, actions with the topics of interest associated to the user; including the actions in the created topic data structure; and linking a handler to each of the actions. 23. The method of claim 18, wherein
the topic data structure is created by instantiating, for each topic of interest associated to the user and found within the text, a topic class. 24. A computer hardware system configured for annotating a text to be read by a user, comprising:
a hardware processor configured to perform:
reading the text to identify occurrences of topics of interest associated to the user;
creating a topic data structure, for each topic of interest associated to the user and found within the text, using stored topic information;
attaching the created topic data structure to the text as an annotation to a corresponding topic found in the text; and
creating a graphical user interface for the text that allows the user to access the annotation upon reading the text on a display device. 25. The system of claim 24, wherein
the stored topic information includes a topic identifier and a topic knowledge structure. 26. The system of claim 25, wherein
the topic identifier is stored in a topic dictionary, and the topic knowledge structure is stored in a traversal dictionary. 27. The system of claim 25, wherein
the topic identifier is stored in an FSA-based dictionary. 28. The system of claim 24, wherein the hardware processor is further configured to perform:
creating a topic map comprising the topics of interest associated to the user; linking, after creation of the topic map, actions with the topics of interest associated to the user; including the actions in the created topic data structure; and linking a handler to each of the actions. 29. The system of claim 24, wherein
the topic data structure is created by instantiating, for each topic of interest associated to the user and found within the text, a topic class. 30. A computer program product for annotating a text to be read by a user, comprising:
a computer usable storage device having computer usable program code embodied therewith, the computer usable program code, which when executed by a computer hardware system, causes the computer hardware system to perform:
reading the text to identify occurrences of topics of interest associated to the user;
creating a topic data structure, for each topic of interest associated to the user and found within the text, using stored topic information;
attaching the created topic data structure to the text as an annotation to a corresponding topic found in the text; and
creating a graphical user interface for the text that allows the user to access the annotation upon reading the text on a display device, wherein
the computer usable storage device does not consist of a transitory, propagating signal. 31. The computer program product of claim 30, wherein
the stored topic information includes a topic identifier and a topic knowledge structure. 32. The computer program product of claim 31, wherein
the topic identifier is stored in a topic dictionary, and the topic knowledge structure is stored in a traversal dictionary. 33. The computer program product of claim 31, wherein
the topic identifier is stored in an FSA-based dictionary. 34. The computer program product of claim 30, further comprising:
creating a topic map comprising the topics of interest associated to the user; linking, after creation of the topic map, actions with the topics of interest associated to the user; including the actions in the created topic data structure; and linking a handler to each of the actions. 35. The computer program product of claim 30, wherein
the topic data structure is created by instantiating, for each topic of interest associated to the user and found within the text, a topic class.
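The two linked dictionaries in the annotation claims above (a topic dictionary holding topic identifiers and a traversal dictionary holding each topic's knowledge structure) can be sketched as follows. This is an assumed Python model: the claims specify an FSA-based dictionary, which is approximated here by a plain substring scan, and every name and field is illustrative rather than taken from the patent.

```python
# Hypothetical dictionaries; real embodiments use an FSA-based lookup.
topic_dictionary = {"NUMA": "t1", "page table": "t2"}   # topic name -> identifier
traversal_dictionary = {                                # identifier -> knowledge structure
    "t1": {"type": "architecture", "related": ["socket", "memory"]},
    "t2": {"type": "data structure", "related": ["TLB"]},
}

def annotate(text):
    """Instantiate a topic data structure for each topic of interest found in the text."""
    annotations = []
    for name, topic_id in topic_dictionary.items():
        pos = text.find(name)
        if pos != -1:                       # topic occurs in the text being read
            annotations.append({
                "topic": name,
                "id": topic_id,
                "offset": pos,              # where to attach the annotation
                "knowledge": traversal_dictionary[topic_id],
            })
    return annotations

notes = annotate("Each NUMA socket keeps its own page table replica.")
```

A GUI layer would then render each entry in `notes` as an annotation attached at `offset`, letting the user access the topic's knowledge structure while reading.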
6,635 | 6,635 | 15,880,861 | 2,166 | Described herein are systems, methods, and software to enhance the management of data objects in a data storage system. In one implementation, a client in a data object environment is configured to identify a request for a data object in a first version from process on the client. Once the request is identified and the data object is provided or made available to the requesting process, the object storage system identifies a modification request for the data object to modify the data object from the first version to a second version. In response to the modification request, the object storage system generates an undo log entry to reflect the changes from the first version to the second version and updates the data object to the second version. | 1. A method of operating a client system in a plurality of client systems to manage versioned data objects shared by the plurality of client systems, the method comprising:
identifying a request from a process executing on the client system for a data object of the versioned data objects in a first version, wherein each client system of the plurality of client systems maintains a copy of the versioned data objects; providing the process with access to the data object in the first version; identifying a modification request for the data object from the process to modify the data object from the first version to a second version; in response to the modification request, generating an undo log entry for an undo log maintained locally by the client system to reflect changes from the first version to the second version; and updating the data object to the second version. 2. The method of claim 1 further comprising:
generating a redo log entry to reflect changes from the first version to the second version; and
providing the redo log entry to a redo log shared by the plurality of client systems. 3. The method of claim 1 further comprising:
obtaining one or more redo log entries from a redo log shared by the plurality of client systems; and
updating the versioned data objects based on the one or more redo log entries. 4. The method of claim 3 further comprising updating the undo log based on the one or more redo log entries. 5. The method of claim 3, wherein obtaining the one or more redo log entries from the redo log comprises obtaining the one or more redo log entries from the redo log at periodic intervals. 6. The method of claim 1 further comprising:
after updating the data object to the second version, obtaining a second request for the data object of the versioned data objects in the first version;
applying the undo log entry to the data object to revert the data object from the second version to the first version; and
providing the process with access to the data object in the first version. 7. The method of claim 1 further comprising:
after updating the data object to the second version, obtaining a second request for the data object of the versioned data objects in a third version;
applying one or more undo log entries to the data object to revert the data object from the second version to the third version; and
providing the process with access to the data object in the third version. 8. The method of claim 1, wherein identifying the modification request for the data object to modify the data object from the first version to the second version comprises identifying a commit action by a user of the client system to modify the data object from the first version to a second version. 9. A computing apparatus, comprising:
one or more non-transitory computer readable storage media; a processing system operatively coupled to the one or more non-transitory computer readable storage media; and program instructions stored on the one or more non-transitory computer readable storage media to operate a client system in a plurality of client systems to manage versioned data objects shared by the plurality of client systems that, when read and executed by the processing system, direct the processing system to at least:
identify a request from a process executing on the client system for a data object of the versioned data objects in a first version, wherein each client system of the plurality of client systems maintains a copy of the versioned data objects;
provide the process with access to the data object in the first version;
identify a modification request for the data object from the process to modify the data object from the first version to a second version;
in response to the modification request, generate an undo log entry for an undo log maintained locally by the client system to reflect changes from the first version to the second version; and
update the data object to the second version. 10. The computing apparatus of claim 9, wherein the program instructions further direct the processing system to:
generate a redo log entry to reflect changes from the first version to the second version; and provide the redo log entry to a redo log shared by the plurality of client systems. 11. The computing apparatus of claim 9, wherein the program instructions further direct the processing system to:
obtain one or more redo log entries from a redo log shared by the plurality of client systems; and update the versioned data objects based on the one or more redo log entries. 12. The computing apparatus of claim 11, wherein the program instructions further direct the processing system to update the undo log based on the one or more redo log entries. 13. The computing apparatus of claim 12, wherein obtaining the one or more redo log entries from the redo log comprises obtaining the one or more redo log entries from the redo log at periodic intervals. 14. The computing apparatus of claim 9, wherein the program instructions further direct the processing system to:
after updating the data object to the second version, obtain a second request for the data object of the versioned data objects in the first version; apply the undo log entry to the data object to revert the data object from the second version to the first version; and provide the process with access to the data object in the first version. 15. The computing apparatus of claim 9, wherein the program instructions further direct the processing system to:
after updating the data object to the second version, obtain a second request for the data object of the versioned data objects in a third version; apply one or more undo log entries to the data object to revert the data object from the second version to the third version; and provide the process with access to the data object in the third version. 16. The computing apparatus of claim 9, wherein identifying the modification request for the data object to modify the data object from the first version to the second version comprises identifying a commit action by a user of the client system to modify the data object from the first version to a second version. 17. A system comprising:
a plurality of client systems that each maintains a copy of versioned data objects; a first client system in the plurality of client systems configured to:
identify a request from a process executing on the client system for a data object of the versioned data objects in a first version;
provide the process with access to the data object in the first version;
identify a modification request for the data object from the process to modify the data object from the first version to a second version;
in response to the modification request, generate an undo log entry for an undo log maintained locally by the client system to reflect changes from the first version to the second version; and
update the data object to the second version. 18. The system of claim 17, wherein the first client system is further configured to:
generate a redo log entry to reflect changes from the first version to the second version; and provide the redo log entry to a redo log shared by the plurality of client systems. 19. The system of claim 18, wherein a second client system of the plurality of client systems is further configured to:
obtain one or more redo log entries from the redo log, wherein the one or more redo log entries comprise at least the redo log entry; and update the versioned data objects of the second client system based on the one or more redo log entries. 20. The system of claim 19, wherein the second client system is further configured to update an undo log of the second client system based on the one or more redo log entries. | Described herein are systems, methods, and software to enhance the management of data objects in a data storage system. In one implementation, a client in a data object environment is configured to identify a request for a data object in a first version from process on the client. Once the request is identified and the data object is provided or made available to the requesting process, the object storage system identifies a modification request for the data object to modify the data object from the first version to a second version. In response to the modification request, the object storage system generates an undo log entry to reflect the changes from the first version to the second version and updates the data object to the second version.1. A method of operating a client system in a plurality of client systems to manage versioned data objects shared by the plurality of client systems, the method comprising:
identifying a request from a process executing on the client system for a data object of the versioned data objects in a first version, wherein each client system of the plurality of client system maintains a copy of the versioned data objects; providing the process with access to the data object in the first version; identifying a modification request for the data object from the process to modify the data object from the first version to a second version; in response to the modification request, generating an undo log entry for an undo log maintained locally by the client system to reflect changes from the first version to the second version; and updating the data object to the second version. 2. The method of claim 1 further comprising:
generating a redo log entry to reflect changes from the first version to the second version; and
providing the redo log entry to a redo log shared by the plurality of client systems. 3. The method of claim 1 further comprising:
obtaining one or more redo log entries from a redo log shared by the plurality of client systems;
updating the versioned data objects based on the one or more redo log entries. 4. The method of claim 3 further comprising updating the undo log based on the one or more redo log entries. 5. The method of claim 3, wherein obtaining the one or more redo log entries from the redo log comprises obtaining the one or more redo log entries from the redo log at periodic intervals. 6. The method of claim 1 further comprising:
after updating the data object to the second version, obtaining a second request for the data object of the versioned data objects in the first version;
applying the undo log entry to the data object to revert the data object from the second version to the first version; and
providing the process with access to the data object in the first version. 7. The method of claim 1 further comprising:
after updating the data object to the second version, obtaining a second request for the data object of the versioned data objects in a third version;
applying one or more undo log entries to the data object to revert the data object from the second version to the third version; and
providing the process with access to the data object in the third version. 8. The method of claim 1, wherein identifying the modification request for the data object to modify the data object from the first version to the second version comprises identifying a commit action by a user of the client system to modify the data object from the first version to a second version. 9. A computing apparatus:
one or more non-transitory computer readable storage media; a processing system operatively coupled to the one or more non-transitory computer readable storage media; and program instructions stored on the one or more non-transitory computer readable storage media to operate a client system in a plurality of client systems to manage versioned data objects shared by the plurality of client systems that, when read and executed by the processing system, direct the processing system to at least:
identify a request from a process executing on the client system for a data object of the versioned data objects in a first version, wherein each client system of the plurality of client systems maintains a copy of the versioned data objects;
provide the process with access to the data object in the first version;
identify a modification request for the data object from the process to modify the data object from the first version to a second version;
in response to the modification request, generate an undo log entry for an undo log maintained locally by the client system to reflect changes from the first version to the second version; and
update the data object to the second version. 10. The computing apparatus of claim 9, wherein the program instructions further direct the processing system to:
generate a redo log entry to reflect changes from the first version to the second version; and provide the redo log entry to a redo log shared by the plurality of client systems. 11. The computing apparatus of claim 9, wherein the program instructions further direct the processing system to:
obtain one or more redo log entries from a redo log shared by the plurality of client systems; update the versioned data objects based on the one or more redo log entries. 12. The computing apparatus of claim 11, wherein the program instructions further direct the processing system to update the undo log based on the one or more redo log entries. 13. The computing apparatus of claim 12, wherein obtaining the one or more redo log entries from the redo log comprises obtaining the one or more redo log entries from the redo log at periodic intervals. 14. The computing apparatus of claim 9, wherein the program instructions further direct the processing system to:
after updating the data object to the second version, obtain a second request for the data object of the versioned data objects in the first version; apply the undo log entry to the data object to revert the data object from the second version to the first version; and provide the process with access to the data object in the first version. 15. The computing apparatus of claim 9, wherein the program instructions further direct the processing system to:
after updating the data object to the second version, obtain a second request for the data object of the versioned data objects in a third version; apply one or more undo log entries to the data object to revert the data object from the second version to the third version; and provide the process with access to the data object in the third version. 16. The computing apparatus of claim 9, wherein identifying the modification request for the data object to modify the data object from the first version to the second version comprises identifying a commit action by a user of the client system to modify the data object from the first version to a second version. 17. A system comprising:
a plurality of client systems that each maintains a copy of versioned data objects; a first client system in the plurality of client systems configured to:
identify a request from a process executing on the client system for a data object of the versioned data objects in a first version;
provide the process with access to the data object in the first version;
identify a modification request for the data object from the process to modify the data object from the first version to a second version;
in response to the modification request, generate an undo log entry for an undo log maintained locally by the client system to reflect changes from the first version to the second version; and
update the data object to the second version. 18. The system of claim 17, wherein the first client system is further configured to:
generate a redo log entry to reflect changes from the first version to the second version; and provide the redo log entry to a redo log shared by the plurality of client systems. 19. The system of claim 18, wherein a second client system of the plurality of client systems is further configured to:
obtain one or more redo log entries from the redo log, wherein the one or more redo log entries comprise at least the redo log entry; and update the versioned data objects of the second client system based on the one or more redo log entries. 20. The system of claim 19, wherein the second client system is further configured to update an undo log of the second client system based on the one or more redo log entries. | 2,100 |
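The undo/redo-log scheme recited in the versioned-data-object claims above (a local undo log per client, a shared redo log, and reversion to earlier versions by replaying undo entries) can be illustrated with a short sketch. This is not code from the patent: the class, method, and field names are all hypothetical, and publication to the shared redo log is reduced to a local outgoing queue.

```python
# Illustrative sketch of one client in a plurality of client systems managing
# versioned data objects with a local undo log and a shared redo log.

class VersionedClient:
    def __init__(self):
        self.objects = {}   # object id -> (version, value)
        self.undo_log = {}  # object id -> list of undo entries (newest last)
        self.redo_out = []  # redo entries queued for the shared redo log

    def read(self, obj_id, version=None):
        """Provide access to an object; if an earlier version is requested,
        revert by applying undo entries newest-first (cf. claims 6-7)."""
        cur_version, value = self.objects[obj_id]
        if version is None or version == cur_version:
            return value
        for entry in reversed(self.undo_log.get(obj_id, [])):
            if cur_version == version:
                break
            value = entry["old_value"]
            cur_version = entry["old_version"]
        return value

    def modify(self, obj_id, new_value):
        """On a modification request: record an undo entry locally, queue a
        redo entry for the shared log, then update the object (cf. claims 1-2)."""
        old_version, old_value = self.objects.get(obj_id, (0, None))
        new_version = old_version + 1
        self.undo_log.setdefault(obj_id, []).append(
            {"old_version": old_version, "old_value": old_value})
        self.redo_out.append(
            {"obj": obj_id, "version": new_version, "value": new_value})
        self.objects[obj_id] = (new_version, new_value)
        return new_version
```

A second client would periodically drain the shared redo log and apply each entry to its own copy of the objects (and, per claim 20, to its own undo log), which this sketch leaves out.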
6,636 | 6,636 | 15,972,985 | 2,143 | A personal display system with which a user may adjust the configuration of displayed media is provided. The personal display system may include an electronic device operative to provide media to a personal display device operative to display the received media. Using one or more optical and digital components, the personal display device may adjust displayed media to overlay features of a theater, thus giving the user of the personal display device the impression of being in the theater. In some embodiments, the personal display device may detect the user's movements using one or more sensors and may adjust the displayed image based on the user's movements. For example, the device may detect a user's head movement and cause the portion of media displayed to reflect the head movement. | 1. A system for presenting media, comprising:
a display device; a sensor operative to track user head movements; one or more processors; and a memory operatively coupled to the one or more processors and comprising instructions that when executed by the one or more processors cause the one or more processors to:
receive a user selection of a virtual position from a plurality of virtual positions, each virtual position corresponding to a different viewing perspective of media;
cause the media to be displayed on the display device based on the user selected virtual position;
receive a user selection to zoom the displayed media;
receive a signal from the sensor indicating the user's head movement;
modify the displayed media based on the sensor signal and the selected virtual position; and
adjust a resolution of the displayed media based on the user selected zoom. 2. The system of claim 1, wherein the sensor signal comprises data identifying at least one of an amount of the user's head movements, a direction of the user's head movements, a speed of the user's head movements, and an acceleration of the user's head movements. 3. The system of claim 1, further comprising instructions that when executed by the one or more processors cause the one or more processors to:
receive the media from an electronic device operatively coupled to the system. 4. The system of claim 1, wherein the plurality of virtual positions are associated with seats at different locations of a virtual theater. 5. The system of claim 1, further comprising instructions that when executed by the one or more processors cause the one or more processors to:
disable the sensor from tracking the user head movements in response to receiving at least one input from a user interface. 6. The system of claim 1, further comprising instructions that when executed by the one or more processors cause the one or more processors to:
display the media by displaying a left media image offset from a right media image such that the user has an impression of viewing the media in three dimensions. 7. The system of claim 1, wherein the instructions to adjust the resolution of the displayed media based on the user selected zoom further comprise instructions that when executed by the one or more processors cause the one or more processors to:
increase a resolution of the displayed media as the user zooms in to the displayed media. 8. The system of claim 1, wherein the instructions to adjust the resolution of the displayed media based on the user selected zoom further comprise instructions that when executed by the one or more processors cause the one or more processors to:
decrease a resolution of the displayed media as the user zooms out from the displayed media. 9. A method of presenting media, comprising:
receiving, by a processor, a user selection of a virtual position from a plurality of virtual positions, each virtual position corresponding to a different viewing perspective of media; displaying the media on a display device based on the user selected virtual position; receiving a user selection to zoom the displayed media; receiving a signal from a sensor indicating the user's head movements; modifying the displayed media based on the sensor signal and the selected virtual position; and adjusting a resolution of the displayed media based on the user selected zoom. 10. The method of claim 9, wherein the plurality of virtual positions are associated with seats at different locations of a virtual theater. 11. The method of claim 9, wherein displaying the media includes displaying a left media image offset from a right media image such that the user has an impression of viewing the image in three dimensions. 12. The method of claim 9, wherein the sensor signal comprises data identifying at least one of an amount of the user's head movements, a direction of the user's head movements, a speed of the user's head movements, and an acceleration of the user's head movements. 13. The method of claim 9, wherein adjusting the resolution of the displayed media based on the user selected zoom includes increasing the resolution of the displayed media as the user zooms in to the displayed media. 14. The method of claim 9, wherein adjusting the resolution of the displayed media based on the user selected zoom includes decreasing the resolution of the displayed media as the user zooms out of the displayed media. 15. A non-transitory machine-readable medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to:
receive a user selection of a virtual position from a plurality of virtual positions, each virtual position corresponding to a different viewing perspective of a media; display the media on a display device based on the user selected virtual position; receive a user selection to zoom the displayed media; receive a signal from a sensor indicating the user's head movements; modify the displayed media based on the sensor signal and the selected virtual position; and adjust a resolution of the displayed media based on the user selected zoom. 16. The non-transitory machine-readable medium of claim 15, wherein the plurality of virtual positions are associated with seats at different locations of a virtual theater. 17. The non-transitory machine-readable medium of claim 15, wherein the sensor signal comprises data identifying at least one of an amount of the user's head movements, a direction of the user's head movements, a speed of the user's head movements, or an acceleration of the user's head movements. 18. The non-transitory machine-readable medium of claim 15, wherein the instructions further cause the one or more processors to receive the media from an electronic device coupled to a personal display device. 19. The non-transitory machine-readable medium of claim 15, further comprising instructions that when executed by the one or more processors cause the one or more processors to:
display the media by displaying a left media image offset from a right media image such that the user has an impression of viewing the media in three dimensions. 20. The non-transitory machine-readable medium of claim 15, wherein the instructions to adjust a resolution of the displayed media based on the user selected zoom further comprise instructions that when executed by the one or more processors cause the one or more processors to:
increase the resolution of the displayed media as the user zooms in to the displayed media. 21. The non-transitory machine-readable medium of claim 15, wherein the instructions to adjust a resolution of the displayed media based on the user selected zoom further comprise instructions that when executed by the one or more processors cause the one or more processors to:
decrease the resolution of the displayed media as the user zooms out of the displayed media. | A personal display system with which a user may adjust the configuration of displayed media is provided. The personal display system may include an electronic device operative to provide media to a personal display device operative to display the received media. Using one or more optical and digital components, the personal display device may adjust displayed media to overlay features of a theater, thus giving the user of the personal display device the impression of being in the theater. In some embodiments, the personal display device may detect the user's movements using one or more sensors and may adjust the displayed image based on the user's movements. For example, the device may detect a user's head movement and cause the portion of media displayed to reflect the head movement.1. A system for presenting media, comprising:
a display device; a sensor operative to track user head movements; one or more processors; and a memory operatively coupled to the one or more processors and comprising instructions that when executed by the one or more processors cause the one or more processors to:
receive a user selection of a virtual position from a plurality of virtual positions, each virtual position corresponding to a different viewing perspective of media;
cause the media to be displayed on the display device based on the user selected virtual position;
receive a user selection to zoom the displayed media;
receive a signal from the sensor indicating the user's head movement;
modify the displayed media based on the sensor signal and the selected virtual position; and
adjust a resolution of the displayed media based on the user selected zoom. 2. The system of claim 1, wherein the sensor signal comprises data identifying at least one of an amount of the user's head movements, a direction of the user's head movements, a speed of the user's head movements, and an acceleration of the user's head movements. 3. The system of claim 1, further comprising instructions that when executed by the one or more processors cause the one or more processors to:
receive the media from an electronic device operatively coupled to the system. 4. The system of claim 1, wherein the plurality of virtual positions are associated with seats at different locations of a virtual theater. 5. The system of claim 1, further comprising instructions that when executed by the one or more processors cause the one or more processors to:
disable the sensor from tracking the user head movements in response to receiving at least one input from a user interface. 6. The system of claim 1, further comprising instructions that when executed by the one or more processors cause the one or more processors to:
display the media by displaying a left media image offset from a right media image such that the user has an impression of viewing the media in three dimensions. 7. The system of claim 1, wherein the instructions to adjust the resolution of the displayed media based on the user selected zoom further comprise instructions that when executed by the one or more processors cause the one or more processors to:
increase a resolution of the displayed media as the user zooms in to the displayed media. 8. The system of claim 1, wherein the instructions to adjust the resolution of the displayed media based on the user selected zoom further comprise instructions that when executed by the one or more processors cause the one or more processors to:
decrease a resolution of the displayed media as the user zooms out from the displayed media. 9. A method of presenting media, comprising:
receiving, by a processor, a user selection of a virtual position from a plurality of virtual positions, each virtual position corresponding to a different viewing perspective of media; displaying the media on a display device based on the user selected virtual position; receiving a user selection to zoom the displayed media; receiving a signal from a sensor indicating the user's head movements; modifying the displayed media based on the sensor signal and the selected virtual position; and adjusting a resolution of the displayed media based on the user selected zoom. 10. The method of claim 9, wherein the plurality of virtual positions are associated with seats at different locations of a virtual theater. 11. The method of claim 9, wherein displaying the media includes displaying a left media image offset from a right media image such that the user has an impression of viewing the image in three dimensions. 12. The method of claim 9, wherein the sensor signal comprises data identifying at least one of an amount of the user's head movements, a direction of the user's head movements, a speed of the user's head movements, and an acceleration of the user's head movements. 13. The method of claim 9, wherein adjusting the resolution of the displayed media based on the user selected zoom includes increasing the resolution of the displayed media as the user zooms in to the displayed media. 14. The method of claim 9, wherein adjusting the resolution of the displayed media based on the user selected zoom includes decreasing the resolution of the displayed media as the user zooms out of the displayed media. 15. A non-transitory machine-readable medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to:
receive a user selection of a virtual position from a plurality of virtual positions, each virtual position corresponding to a different viewing perspective of a media; display the media on a display device based on the user selected virtual position; receive a user selection to zoom the displayed media; receive a signal from a sensor indicating the user's head movements; modify the displayed media based on the sensor signal and the selected virtual position; and adjust a resolution of the displayed media based on the user selected zoom. 16. The non-transitory machine-readable medium of claim 15, wherein the plurality of virtual positions are associated with seats at different locations of a virtual theater. 17. The non-transitory machine-readable medium of claim 15, wherein the sensor signal comprises data identifying at least one of an amount of the user's head movements, a direction of the user's head movements, a speed of the user's head movements, or an acceleration of the user's head movements. 18. The non-transitory machine-readable medium of claim 15, wherein the instructions further cause the one or more processors to receive the media from an electronic device coupled to a personal display device. 19. The non-transitory machine-readable medium of claim 15, further comprising instructions that when executed by the one or more processors cause the one or more processors to:
display the media by displaying a left media image offset from a right media image such that the user has an impression of viewing the media in three dimensions. 20. The non-transitory machine-readable medium of claim 15, wherein the instructions to adjust a resolution of the displayed media based on the user selected zoom further comprise instructions that when executed by the one or more processors cause the one or more processors to:
increase the resolution of the displayed media as the user zooms in to the displayed media. 21. The non-transitory machine-readable medium of claim 15, wherein the instructions to adjust a resolution of the displayed media based on the user selected zoom further comprise instructions that when executed by the one or more processors cause the one or more processors to:
decrease the resolution of the displayed media as the user zooms out of the displayed media. | 2,100 |
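The personal-display claims above tie the displayed portion of the media to the selected virtual position plus sensed head movement, and tie the display resolution to the zoom level (higher when zooming in, lower when zooming out). A minimal sketch of that mapping, with the function and parameter names invented for illustration:

```python
# Hypothetical per-frame view computation for a head-tracked personal display.

def render_state(seat_angle_deg, head_yaw_deg, zoom, base_resolution=1080):
    """Return (viewing angle, resolution) for one frame.

    seat_angle_deg: perspective offset implied by the chosen virtual position
    head_yaw_deg:   yaw reported by the head-tracking sensor
    zoom:           1.0 = no zoom; >1 zooms in, <1 zooms out
    """
    # The displayed portion reflects both the virtual seat and head movement.
    view_angle = seat_angle_deg + head_yaw_deg
    # Resolution scales with zoom: increased when zooming in, decreased
    # when zooming out.
    resolution = int(base_resolution * zoom)
    return view_angle, resolution
```

Disabling head tracking (claim 5) would amount to freezing `head_yaw_deg` at zero regardless of the sensor signal.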
6,637 | 6,637 | 15,409,464 | 2,159 | An API on a server system automatically gathers an instance of each of multiple resources from different sources, storing each instance in the server system. Later, a call to the API is received from a querying application, the call comprising a search query comprising one or more search criteria. In response, the API selects one or more of the plurality of resources as search results based on evaluating the criterion or criteria against the already-gathered instances of the resources as stored in the server system. The API returns a search response to the querying application, making the corresponding stored instances available to a consuming party. In embodiments the API is “holistic” in nature, in that the search results may comprise different types of resource (e.g. file, email, task), from different types of source (e.g. type of application they originate from), and/or related to the consuming party by different types of activity (e.g. used, modified, shared, trending). | 1. A server system comprising storage for storing instances of a plurality of data resources, and an application programming interface for interfacing with a plurality of data sources each being a respective source of a respective subset of the resources, wherein the application programming interface is arranged to perform operations of:
automatically gathering a respective instance of each of the plurality of resources from the respective sources, including storing each respective instance on the storage of the server system; subsequent to said gathering, receiving a call to the application programming interface from a querying application, the call comprising a search query comprising one or more search criteria; and in response to said call, selecting one or more of the plurality of resources as search results based on evaluating the one or more search criteria against the already-gathered instances of the resources as stored in the storage of the server system; returning to the querying application a search response indicative of the search results; and making the instances of the resources indicated in the search response available to a consuming party through the querying application from said storage of the server system. 2. The server system of claim 1, wherein said plurality of sources comprise a plurality of target applications other than the querying application, the target applications including a plurality of applications of different types to one another. 3. The server system of claim 2, wherein the plurality of different types of applications comprise any two or more of: word processing application, spreadsheet application, slideshow application, drawing application, email client, IM client, VoIP client, calendar application, collaborative workspace application, social media application, and/or file sharing application. 4. The server system of claim 2, wherein the application programming interface is configured so as, if the search query does not specify the type of application as a search criterion, then in response to search amongst multiple different ones of said types of application for inclusion in the search results. 5. The server system of claim 1, wherein said plurality of resources comprise different types of resource. 6.
The server system of claim 5, wherein the plurality of different types of resource comprise any two or more of: files, stored communications, calendar events, tasks, sites, and/or user profile information. 7. The server system of claim 5, wherein the application programming interface is configured so as, if the search query does not specify the type of resources as a search criterion, then in response to search amongst multiple different ones of said types of resource for inclusion in the search results. 8. The server system of claim 1, wherein some or all of the resources comprise a plurality of files, and the files comprise different types of file. 9. The server system of claim 8, wherein the different types of file comprise any two or more of: word processing document, spreadsheet, slideshow, vector graphic drawing, image and/or video. 10. The server system of claim 8, wherein the application programming interface is configured so as, if the search query does not specify the type of file as a search criterion, then in response to search amongst multiple different ones of said types of file for inclusion in the search results. 11. The server system of claim 1, wherein each instance in the storage of the server system is stored in a form comprising a first portion and metadata, the first portion either comprising a duplication of the content of the resource stored in said storage of the server system or comprising a link to the resource stored elsewhere, and the metadata describing a relationship between the resource and the consuming party; and
the application programming interface is configured to select which of the resources to include in the search results based on an evaluation of the one or more search criteria against the metadata. 12. The server system of claim 11, wherein the relationships described by the metadata in the instances of different ones of the resources comprise different types of activity performed on the respective resource by the consuming party or one or more other parties associated with the consuming party, the different types of activity comprising any two or more of:
the resource having been previously used by the consuming party; the resource having been shared with the consuming party by one or more other parties; the resource having been shared by the consuming party with one or more other parties; and/or the resource having been used by one or more other users associated with the consuming party, thereby enabling the consuming user to discover resources used by the one or more other users. 13. The server system of claim 12, wherein the application programming interface is configured so as, if the search query does not specify the type of activity as a search criterion, then in response to select to include in the search results resources related to the consuming user by multiple different ones of said types of activity. 14. The server system of claim 1, wherein one, some or all of the sources are provided by a provider of said server system but are hosted elsewhere within said server system. 15. The server system of claim 1, wherein one, some or all of the sources are third-party sources outside of said server system. 16. The server system of claim 1, wherein at least some of said sources are comprised by different server units to one another, the different server units being implemented in separate housings, racks, rooms, buildings or geographic locations. 17. The server system of claim 1, wherein:
said storage comprises a separate storage area for each of a plurality of parties; said gathering comprises storing a primary instance of each of said plurality of resources in the respective storage area of a respective one of the parties associated with the resource, and for at least some of the resources where a respective second party has formed a relationship with the respective resource, additionally storing a respective secondary instance of the resource in the storage area of the respective second party; and the application programming interface is configured so as, if the consuming party is the second party, then to perform said evaluation of the one or more search criteria against the respective secondary instances, the instances being made available to the respective second party being the respective secondary instances. 18. The server of claim 17, wherein each of the secondary instances is stored in the storage area of the respective second party in a form comprising a first portion and metadata, the first portion either comprising a duplication of the content of the resource stored in said storage of the server system or comprising a link to the resource stored elsewhere, and the metadata describing an action performed on the resource by the second party and/or a relationship between the resource and the second party; and
the application programming interface is configured to select which of the resources to include in the search results based on an evaluation of the one or more search criteria against the metadata of the second party included in the secondary instances. 19. The server system of claim 17, wherein at least some of the separate storage areas, including at least the respective storage areas of the first and second parties, are implemented on separate server units in separate housings, racks, rooms, buildings or geographic locations. 20. A method of storing instances of a plurality of data resources, the method comprising:
providing an application programming interface for interfacing with a plurality of data sources each being a respective source of a respective subset of the resources; the application programming interface automatically gathering a respective instance of each of the plurality of resources from the respective sources, including storing each respective instance on the storage of the server system;
subsequent to said gathering, receiving a call to the application programming interface from a querying application, the call comprising a search query comprising one or more search criteria; and
in response to said call, the application programming interface selecting one or more of the plurality of resources as search results based on evaluating the one or more search criteria against the already-gathered instances of the resources as stored in the storage of the server system;
wherein the application programming interface returns to the querying application a search response indicative of the search results, and makes the instances of the resources indicated in the search response available to a consuming party through the querying application from said storage of the server system. | An API on a server system automatically gathers an instance of each of multiple resources from different sources, storing each instance in the server system. Later, a call to the API is received from a querying application, the call comprising a search query comprising one or more search criteria. In response, the API selects one or more of the plurality of resources as search results based on evaluating the criterion or criteria against the already-gathered instances of the resources as stored in the server system. The API returns a search response to the querying application, making the corresponding stored instances available to a consuming party. In embodiments the API is “holistic” in nature, in that the search results may comprise different types of resource (e.g. file, email, task), from different types of source (e.g. type of application they originate from), and/or related to the consuming party by different types of activity (e.g. used, modified, shared, trending).1. A server system comprising storage for storing instances of a plurality of data resources, and an application programming interface for interfacing with a plurality of data sources each being a respective source of a respective subset of the resources, wherein the application programming interface is arranged to perform operations of:
automatically gathering a respective instance of each of the plurality of resources from the respective sources, including storing each respective instance on the storage of the server system; subsequent to said gathering, receiving a call to the application programming interface from a querying application, the call comprising a search query comprising one or more search criteria; and in response to said call, selecting one or more of the plurality of resources as search results based on evaluating the one or more search criteria against the already-gathered instances of the resources as stored in the storage of the server system; returning to the querying application a search response indicative of the search results; and making the instances of the resources indicated in the search response available to a consuming party through the querying application from said storage of the server system. 2. The server system of claim 1, wherein said plurality of sources comprise a plurality of target applications other than the querying application, the target applications including a plurality of applications of different types to one another. 3. The server system of claim 2, wherein the plurality of different types of applications comprise any two or more of: word processing application, spreadsheet application, slideshow application, drawing application, email client, IM client, VoIP client, calendar application, collaborative workspace application, social media application, and/or file sharing application. 4. The server system of claim 2, wherein the application programming interface is configured so as, if the search query does not specify the type of application as a search criterion, then in response to search amongst multiple different ones of said types of application for inclusion in the search results. 5. The server system of claim 1, wherein said plurality of resources comprise different types of resource. 6. 
The server system of claim 5, wherein the plurality of different types of resource comprise any two or more of: files, stored communications, calendar events, tasks, sites, and/or user profile information. 7. The server system of claim 5, wherein the application programming interface is configured so as, if the search query does not specify the type of resources as a search criterion, then in response to search amongst multiple different ones of said types of resource for inclusion in the search results. 8. The server system of claim 1, wherein some or all of the resources comprise a plurality of files, and the files comprise different types of file. 9. The server system of claim 8, wherein the different types of file comprise any two or more of: word processing document, spreadsheet, slideshow, vector graphic drawing, image and/or video. 10. The server system of claim 8, wherein the application programming interface is configured so as, if the search query does not specify the type of file as a search criterion, then in response to search amongst multiple different ones of said types of file for inclusion in the search results. 11. The server system of claim 1, wherein each instance in the storage of the server system is stored in a form comprising a first portion and metadata, the first portion either comprising a duplication of the content of the resource stored in said storage of the server system or comprising a link to the resource stored elsewhere, and the metadata describing a relationship between the resource and the consuming party; and
the application programming interface is configured to select which of the resources to include in the search results based on an evaluation of the one or more search criteria against the metadata. 12. The server system of claim 11, wherein the relationships described by the metadata in the instances of different ones of the resources comprise different types of activity performed on the respective resource by the consuming party or one or more other parties associated with the consuming party, the different types of activity comprising any two or more of:
the resource having been previously used by the consuming party; the resource having been shared with the consuming party by one or more other parties; the resource having been shared by the consuming party with one or more other parties; and/or the resource having been used by one or more other users associated with the consuming party, thereby enabling the consuming user to discover resources used by the one or more other users. 13. The server system of claim 12, wherein the application programming interface is configured so as, if the search query does not specify the type of activity as a search criterion, then in response to select to include in the search results resources related to the consuming user by multiple different ones of said types of activity. 14. The server system of claim 1, wherein one, some or all of the sources are provided by a provider of said server system but are hosted elsewhere within said server system. 15. The server system of claim 1, wherein one, some or all of the sources are third-party sources outside of said server system. 16. The server system of claim 1, wherein at least some of said sources are comprised by different server units to one another, the different server units being implemented in separate housings, racks, rooms, buildings or geographic locations. 17. The server system of claim 1, wherein:
said storage comprises a separate storage area for each of a plurality of parties; said gathering comprises storing a primary instance of each of said plurality of resources in the respective storage area of a respective one of the parties associated with the resource, and for at least some of the resources where a respective second party has formed a relationship with the respective resource, additionally storing a respective secondary instance of the resource in the storage area of the respective second party; and the application programming interface is configured so as, if the consuming party is the second party, then to perform said evaluation of the one or more search criteria against the respective secondary instances, the instances being made available to the respective second party being the respective secondary instances. 18. The server of claim 17, wherein each of the secondary instances is stored in the storage area of the respective second party in a form comprising a first portion and metadata, the first portion either comprising a duplication of the content of the resource stored in said storage of the server system or comprising a link to the resource stored elsewhere, and the metadata describing an action performed on the resource by the second party and/or a relationship between the resource and the second party; and
the application programming interface is configured to select which of the resources to include in the search results based on an evaluation of the one or more search criteria against the metadata of the second party included in the secondary instances. 19. The server system of claim 17, wherein at least some of the separate storage areas, including at least the respective storage areas of the first and second parties, are implemented on separate server units in separate housings, racks, rooms, buildings or geographic locations. 20. A method of storing instances of a plurality of data resources, the method comprising:
providing an application programming interface for interfacing with a plurality of data sources each being a respective source of a respective subset of the resources; the application programming interface automatically gathering a respective instance of each of the plurality of resources from the respective sources, including storing each respective instance on the storage of the server system;
subsequent to said gathering, receiving a call to the application programming interface from a querying application, the call comprising a search query comprising one or more search criteria; and
in response to said call, the application programming interface selecting one or more of the plurality of resources as search results based on evaluating the one or more search criteria against the already-gathered instances of the resources as stored in the storage of the server system;
wherein the application programming interface returns to the querying application a search response indicative of the search results, and makes the instances of the resources indicated in the search response available to a consuming party through the querying application from said storage of the server system. | 2,100 |
6,638 | 6,638 | 15,719,478 | 2,145 | An approach is provided for creating and managing pricing models and subscriptions for packages of computer-implemented applications. As used herein, the term “package” refers to a logical entity that has one or more member computer-implemented applications, where each of the member computer-implemented applications provides one or more services. One or more pricing models may be assigned to a package and made available to subscribers and the pricing models assigned to a package may be changed. Users may subscribe to one or more packages of computer-implemented applications and incur charges based upon the pricing models assigned to the packages of computer-implemented applications to which the users subscribe. Embodiments include providing a graphical user interface for service providers to create and manage packages of computer-implemented applications, define pricing models and to manage pricing model assignments for packages of computer-implemented applications. Embodiments also include providing a graphical user interface for subscribers to view available applications and packages in a “marketplace” and to subscribe to packages of computer-implemented applications and manage their subscriptions. | 1. An apparatus for managing computer-implemented applications using packages, the apparatus comprising:
one or more processors; and a memory storing instructions which, when processed by the one or more processors, causes:
generating and transmitting to a client device, first graphical user interface object data which, when processed at the client device, causes a first graphical user interface object to be displayed at the client device, wherein the first graphical user interface object identifies a package to which computer-implemented applications may be assigned;
generating and transmitting to the client device second graphical user interface object data which, when processed at the client device, causes a plurality of graphical user interface objects to be displayed at the client device, wherein the plurality of graphical user interface objects are associated with, and identify, a plurality of computer-implemented applications that implement a plurality of computer-implemented services, and that are available for the user to assign to the package;
receiving, from the client device, first user selection data that indicates a user selection for assignment, to the package, of two or more computer-implemented applications from the plurality of computer-implemented applications;
in response to receiving, from the client device, the first user selection data that indicates the user selection for assignment, to the package, of the two or more computer-implemented applications from the plurality of computer-implemented applications, generating first assignment data that designates an assignment of the two or more computer-implemented applications to the package;
generating and transmitting to the client device third graphical user interface object data which, when processed at the client device, causes a second plurality of graphical user interface objects to be displayed at the client device, wherein the second plurality of graphical user interface objects identify a plurality of pricing models that are available for the user to assign to the package;
receiving, from the client device, second user selection data that indicates a user selection for assignment, to the package, of a particular pricing model, from the plurality of pricing models; and
in response to receiving, from the client device, the second user selection data that indicates the user selection for assignment, to the package, of the particular pricing model, from the plurality of pricing models, generating second assignment data that designates an assignment of the particular pricing model to the package.
one or more processors; and a memory storing instructions which, when processed by the one or more processors, causes:
generating and transmitting to a client device, first graphical user interface object data which, when processed at the client device, causes a first graphical user interface object to be displayed at the client device, wherein the first graphical user interface object identifies a package to which computer-implemented applications may be assigned;
generating and transmitting to the client device second graphical user interface object data which, when processed at the client device, causes a plurality of graphical user interface objects to be displayed at the client device, wherein the plurality of graphical user interface objects are associated with, and identify, a plurality of computer-implemented applications that implement a plurality of computer-implemented services, and that are available for the user to assign to the package;
receiving, from the client device, first user selection data that indicates a user selection for assignment, to the package, of two or more computer-implemented applications from the plurality of computer-implemented applications;
in response to receiving, from the client device, the first user selection data that indicates the user selection for assignment, to the package, of the two or more computer-implemented applications from the plurality of computer-implemented applications, generating first assignment data that designates an assignment of the two or more computer-implemented applications to the package;
generating and transmitting to the client device third graphical user interface object data which, when processed at the client device, causes a second plurality of graphical user interface objects to be displayed at the client device, wherein the second plurality of graphical user interface objects identify a plurality of pricing models that are available for the user to assign to the package;
receiving, from the client device, second user selection data that indicates a user selection for assignment, to the package, of a particular pricing model, from the plurality of pricing models; and
in response to receiving, from the client device, the second user selection data that indicates the user selection for assignment, to the package, of the particular pricing model, from the plurality of pricing models, generating second assignment data that designates an assignment of the particular pricing model to the package.
6,639 | 6,639 | 15,514,372 | 2,177 | The invention relates to a server for providing a graphical user interface to a client over a communication network. The graphical user interface comprises a graphical user interface element, the graphical user interface element being formed by an element shape and an element text, the element shape being represented by element shape data, the element text being represented by element text data. The server comprises an encoder configured to encode the element shape data into video data, a detector configured to detect a change associated with the graphical user interface element within the graphical user interface, and a communication interface configured to separately transmit the video data and the element text data over the communication network, the element text data being transmitted upon detection of the change associated with the graphical user interface element for providing the graphical user interface to the client. | 1-16. (canceled) 17. A server for providing a graphical user interface to a client over a communication network, the graphical user interface comprising a graphical user interface element, the graphical user interface element being formed by an element shape and an element text, the element shape being represented by element shape data, the element text being represented by element text data, the server comprising:
an encoder configured to encode the element shape data into video data; a detector configured to detect a change associated with the graphical user interface element within the graphical user interface; and a communication interface configured to separately transmit the video data and the element text data over the communication network, the element text data being transmitted upon detection of the change associated with the graphical user interface element for providing the graphical user interface to the client. 18. The server of claim 17, wherein:
the graphical user interface further comprises a video element, the video element being represented by further video data; and the communication interface is configured to separately transmit the further video data, the video data and the element text data over the communication network. 19. The server of claim 17, wherein:
the video data comprises a first timing indicator indicating a validity time of the video data; and the element text data comprises a second timing indicator indicating a validity time of the element text data. 20. The server of claim 17, wherein the encoder is configured to generate a number of video frames upon the basis of the element shape data for encoding the element shape data into the video data, the number of video frames being arranged to form the video data. 21. The server of claim 17, further comprising:
a text encoder configured to encode the element text into plain text data and layout data, the layout data indicating a text size, a text font, or a text path of the element text within the graphical user interface element, the plain text data and the layout data forming the element text data. 22. The server of claim 17, further comprising:
an image encoder configured to encode the element text into image data, the image data representing an image of the element text of the graphical user interface element, the image data forming the element text data. 23. The server of claim 17, wherein the communication interface is configured to:
receive a request signal requesting a change associated with the graphical user interface element within the graphical user interface; and separately transmit the video data and the element text data over the communication network upon reception of the request signal. 24. The server of claim 17, wherein the graphical user interface element comprises a window element, a text box element, a button element, an icon element, a list box element, a menu element, or a carousel menu element. 25. The server of claim 17, wherein the change associated with the graphical user interface element comprises a rearrangement of the graphical user interface element within the graphical user interface, a scaling of the graphical user interface element within the graphical user interface, or a modification of the element text of the graphical user interface element within the graphical user interface. 26. A client for retrieving a graphical user interface from a server over a communication network, the graphical user interface comprising a graphical user interface element, the graphical user interface element being formed by an element shape and an element text, the element shape being represented by element shape data, the element text being represented by element text data, the client comprising:
a communication interface configured to separately receive video data and the element text data over the communication network, the element shape data being encoded into the video data; and a combiner configured to combine the video data with the element text data for retrieving the graphical user interface from the server. 27. The client of claim 26, wherein:
the graphical user interface further comprises a video element, the video element being represented by further video data; the communication interface is configured to separately receive the further video data, the video data and the element text data over the communication network; and the combiner is configured to combine the further video data with the video data and the element text data. 28. The client of claim 26, wherein:
the video data comprises a first timing indicator indicating a validity time of the video data; the element text data comprises a second timing indicator indicating a validity time of the element text data; the client comprises a synchronizer configured to synchronize the video data with the element text data in time upon the basis of the first timing indicator and the second timing indicator; and the combiner is configured to combine the video data with the element text data upon synchronization of the video data with the element text data. 29. The client of claim 26, wherein:
the client comprises a detector configured to detect a request for a change associated with the graphical user interface element within the graphical user interface to obtain a request signal; and the communication interface is configured to transmit the request signal over the communication network upon detection of the request for the change associated with the graphical user interface element. 30. A method for providing a graphical user interface to a client over a communication network, the graphical user interface comprising a graphical user interface element, the graphical user interface element being formed by an element shape and an element text, the element shape being represented by element shape data, the element text being represented by element text data, the method comprising:
encoding the element shape data into video data; detecting a change associated with the graphical user interface element within the graphical user interface; and separately transmitting the video data and the element text data over the communication network, the element text data being transmitted upon detection of the change associated with the graphical user interface element for providing the graphical user interface to the client. | The invention relates to a server for providing a graphical user interface to a client over a communication network. The graphical user interface comprises a graphical user interface element, the graphical user interface element being formed by an element shape and an element text, the element shape being represented by element shape data, the element text being represented by element text data. The server comprises an encoder configured to encode the element shape data into video data, a detector configured to detect a change associated with the graphical user interface element within the graphical user interface, and a communication interface configured to separately transmit the video data and the element text data over the communication network, the element text data being transmitted upon detection of the change associated with the graphical user interface element for providing the graphical user interface to the client.1-16. (canceled) 17. A server for providing a graphical user interface to a client over a communication network, the graphical user interface comprising a graphical user interface element, the graphical user interface element being formed by an element shape and an element text, the element shape being represented by element shape data, the element text being represented by element text data, the server comprising:
an encoder configured to encode the element shape data into video data; a detector configured to detect a change associated with the graphical user interface element within the graphical user interface; and a communication interface configured to separately transmit the video data and the element text data over the communication network, the element text data being transmitted upon detection of the change associated with the graphical user interface element for providing the graphical user interface to the client. 18. The server of claim 17, wherein:
the graphical user interface further comprises a video element, the video element being represented by further video data; and the communication interface is configured to separately transmit the further video data, the video data and the element text data over the communication network. 19. The server of claim 17, wherein:
the video data comprises a first timing indicator indicating a validity time of the video data; and the element text data comprises a second timing indicator indicating a validity time of the element text data. 20. The server of claim 17, wherein the encoder is configured to generate a number of video frames upon the basis of the element shape data for encoding the element shape data into the video data, the number of video frames being arranged to form the video data. 21. The server of claim 17, further comprising:
a text encoder configured to encode the element text into plain text data and layout data, the layout data indicating a text size, a text font, or a text path of the element text within the graphical user interface element, the plain text data and the layout data forming the element text data. 22. The server of claim 17, further comprising:
an image encoder configured to encode the element text into image data, the image data representing an image of the element text of the graphical user interface element, the image data forming the element text data. 23. The server of claim 17, wherein the communication interface is configured to:
receive a request signal requesting a change associated with the graphical user interface element within the graphical user interface; and separately transmit the video data and the element text data over the communication network upon reception of the request signal. 24. The server of claim 17, wherein the graphical user interface element comprises a window element, a text box element, a button element, an icon element, a list box element, a menu element, or a carousel menu element. 25. The server of claim 17, wherein the change associated with the graphical user interface element comprises a rearrangement of the graphical user interface element within the graphical user interface, a scaling of the graphical user interface element within the graphical user interface, or a modification of the element text of the graphical user interface element within the graphical user interface. 26. A client for retrieving a graphical user interface from a server over a communication network, the graphical user interface comprising a graphical user interface element, the graphical user interface element being formed by an element shape and an element text, the element shape being represented by element shape data, the element text being represented by element text data, the client comprising:
a communication interface configured to separately receive video data and the element text data over the communication network, the element shape data being encoded into the video data; and a combiner configured to combine the video data with the element text data for retrieving the graphical user interface from the server. 27. The client of claim 26, wherein:
the graphical user interface further comprises a video element, the video element being represented by further video data; the communication interface is configured to separately receive the further video data, the video data and the element text data over the communication network; and the combiner is configured to combine the further video data with the video data and the element text data. 28. The client of claim 26, wherein:
the video data comprises a first timing indicator indicating a validity time of the video data; the element text data comprises a second timing indicator indicating a validity time of the element text data; the client comprises a synchronizer configured to synchronize the video data with the element text data in time upon the basis of the first timing indicator and the second timing indicator; and the combiner is configured to combine the video data with the element text data upon synchronization of the video data with the element text data. 29. The client of claim 26, wherein:
the client comprises a detector configured to detect a request for a change associated with the graphical user interface element within the graphical user interface to obtain a request signal; and the communication interface is configured to transmit the request signal over the communication network upon detection of the request for the change associated with the graphical user interface element. 30. A method for providing a graphical user interface to a client over a communication network, the graphical user interface comprising a graphical user interface element, the graphical user interface element being formed by an element shape and an element text, the element shape being represented by element shape data, the element text being represented by element text data, the method comprising:
encoding the element shape data into video data; detecting a change associated with the graphical user interface element within the graphical user interface; and separately transmitting the video data and the element text data over the communication network, the element text data being transmitted upon detection of the change associated with the graphical user interface element for providing the graphical user interface to the client. | 2,100 |
6,640 | 6,640 | 15,595,335 | 2,175 | A computing system automates the configuration and management of a live sound system that includes a processor and memory for building in a GUI of a display a representation of the live sound system for a venue. The system loads a venue template that includes loudspeaker arrays and related properties including a setup configuration of the loudspeaker arrays and tuning data for constituent loudspeakers that are operable to provide an audio coverage pattern for the venue. The system overlays on top of the representation of the loudspeaker arrays a wiring circuit representation indicating interconnections of the loudspeakers that define bandpass inputs for each array. The system generates a plurality of amplifiers in the representation to drive the arrays, and associates amplifier channels of the amplifiers with the bandpass inputs. The amplifier channels include representations of output channels of DSPs coupled with respective amplifier channels. The system loads tuning data into respective representation of the DSP and/or amplifiers based on configurations of the associated loudspeakers to complete virtual configuration. The representations of the devices and connections may be matched with physical devices of the live sound system and the tuning data sent down to the physical DSPs and/or amplifiers for their configuration. | 1. A method for automation of configuration and management of a live sound system, the method executable with a computer having a processor and memory, the computer coupled with a display on which is displayable a graphical user interface (GUI), the method comprising:
receiving, by the processor, a signal indicative of selection of a factory-defined group of left and right loudspeaker arrays in a representation of the live sound system in the GUI; enabling, by the processor, creation in the representation of one or more user-defined loudspeaker arrays; integrating, by the processor, the factory-defined group of left and right loudspeaker arrays with the user-defined loudspeaker arrays to create a final group of loudspeaker arrays; creating, by the processor, an associated wiring circuit and a plurality of bandpass inputs for the final group of loudspeaker arrays; and calculating, by the processor, a plurality of properties for the final group of loudspeaker arrays to cover a geographic space of a venue, the properties including physical installation parameters of the final group of loudspeaker arrays and tuning parameters associated with a plurality of passive or active loudspeakers of the final group of loudspeaker arrays. 2. The method of claim 1, further comprising calculating the physical installation parameters with an acoustical modeling tool executed by the processor on the final group of loudspeaker arrays, the acoustical modeling tool embodied in instructions included in the memory, and where the physical installation parameters comprise loudspeaker type, number of loudspeakers required, angles between loudspeakers, overall loudspeaker array elevation, and overall sight angle orientation in order to obtain audience coverage of the venue. 3. The method of claim 1, further comprising calculating the tuning parameters with an acoustical modeling tool executed by the processor on the final group of loudspeaker arrays, the acoustical modeling tool embodied in instructions included in the memory, and where the tuning parameters comprise equalization settings, crossover settings, gain settings, and driver delay settings. 4. The method of claim 1, further comprising:
generating, by the processor, a plurality of amplifiers within the representation to drive the passive or active loudspeakers of the final group of loudspeaker arrays; associating, by the processor, amplifier channels of the amplifiers with the bandpass inputs of the final group of loudspeaker arrays in the representation according to the physical installation parameters, the amplifier channels including representations of digital signal processor (DSP) output channels coupled or integrated with respective amplifier channels; and loading, by the processor, the tuning parameters into respective representations of the DSP output channels based on configurations of the associated passive or active loudspeakers, to complete virtual configuration of the live sound system for the venue. 5. The method of claim 4, the method further comprising:
storing, in the memory, representations of the connections between the bandpass inputs of the final group of loudspeaker arrays and the associated amplifier channels, and of the connections between the associated amplifier channels and the DSP output channels, and storing the loaded tuning parameters in relation to the respective representations of the DSP output channels. 6. The method of claim 4, further comprising:
discovering via a network interconnected physical devices of the venue; receiving input signals indicative of matching the representations of the bandpass inputs of the passive or active loudspeaker arrays, of the amplifier channels, and of the DSP output channels to the corresponding interconnected physical devices of the venue discovered over the network; and in response to the input signals, transmitting addressing and tuning parameters over the network for loading into respective physical devices comprising physical amplifiers or physical DSPs. 7. The method of claim 6, further comprising:
overlaying, on top of the representation of the final group of loudspeaker arrays in the GUI, a wiring circuit representation indicating connections of the passive or active loudspeakers including the bandpass inputs for each loudspeaker array; generating a system monitoring interface in the GUI for use during a live show, the monitoring interface including an overlay of DSP parameter values on top of the representations of corresponding passive or active loudspeakers in the final group of loudspeaker arrays; and displaying in the system monitoring interface each bandpass input associated with amplifier channels that are coupled with corresponding output channels of the DSPs, to enable monitoring DSP output channel behavior in the GUI while visually maintaining the relationship of the loudspeakers with which each is associated. 8. The method of claim 6, further comprising, upon connecting the computer to a network to which physical devices are connected corresponding to the final group of loudspeaker arrays, to the amplifiers, and to the DSPs of the representation of the live sound system in the GUI:
discovering node identifications (IDs) for the bandpass inputs, amplifier channels, and output channels of the DSPs; recognizing the bandpass inputs, amplifier channels, and DSP output channels affiliated with each discovered node ID; and mapping the bandpass inputs, amplifier channels, and DSP output channels to their representations in the GUI. 9. The method of claim 8, further comprising:
transmitting the tuning parameters down to the physical DSPs over the network for configuring the physical DSP output channels remotely; and storing, in a database stored in memory coupled with the network, the tuning parameters in relation to their respective physical DSP output channels to be transmitted for re-loading in the corresponding physical DSPs at a later time. 10. A method for management of a live sound system, the method executable with a computer having a processor and memory, comprising:
enabling, by the processor, creation of a representation of the live sound system in a GUI, the representation including one or more loudspeaker arrays located in a venue and having corresponding bandpass inputs; and generating, by the processor, indicators overlaid over the representation of one or more loudspeaker arrays to display digital signal processing (DSP) parameter values of DSP output channels associated with respective bandpass inputs of the loudspeaker arrays, to enable monitoring DSP output channel behavior in the GUI while visually maintaining the relationship of the loudspeakers with which each is associated. 11. The method of claim 10, wherein at least a portion of the overlaid indicators include a summed combination of two or more DSP parameters from corresponding adjacent bandpass inputs of identical type. 12. The method of claim 10, wherein the one or more loudspeaker arrays include one or more of each of a plurality of passive or active loudspeakers, and one or more powered loudspeakers within which are integrated the DSPs, the indicators including one or more of meters, objects, or parameters. 13. The method of claim 10, wherein the DSPs include DSP devices external to and coupled with the respective amplifiers, or the DSPs include internal DSP devices integrated within the respective amplifiers. 14. The method of claim 10, further comprising grouping a plurality of amplifiers together that drive two or more loudspeakers of the one or more loudspeaker arrays to provide simultaneous control via the GUI of the two or more loudspeakers through the grouped amplifiers. 15. The method of claim 10, wherein the representation of the live sound system includes connections between the amplifier input channels and the respective DSP output channels, and connections between the bandpass inputs and associated amplifier channels. 16. 
The method of claim 15, further comprising transmitting tuning data over a network for loading into each respective DSP based on configurations of the loudspeakers for audio coverage, pre-defined loudspeaker tunings, and line array calculator information, the tuning data including one or more of equalization settings, crossover settings, gain settings, driver delay settings, and a combination thereof. | A computing system automates the configuration and management of a live sound system that includes a processor and memory for building in a GUI of a display a representation of the live sound system for a venue. The system loads a venue template that includes loudspeaker arrays and related properties including a setup configuration of the loudspeaker arrays and tuning data for constituent loudspeakers that are operable to provide an audio coverage pattern for the venue. The system overlays on top of the representation of the loudspeaker arrays a wiring circuit representation indicating interconnections of the loudspeakers that define bandpass inputs for each array. The system generates a plurality of amplifiers in the representation to drive the arrays, and associates amplifier channels of the amplifiers with the bandpass inputs. The amplifier channels include representations of output channels of DSPs coupled with respective amplifier channels. The system loads tuning data into respective representation of the DSP and/or amplifiers based on configurations of the associated loudspeakers to complete virtual configuration. The representations of the devices and connections may be matched with physical devices of the live sound system and the tuning data sent down to the physical DSPs and/or amplifiers for their configuration.1. 
A method for automation of configuration and management of a live sound system, the method executable with a computer having a processor and memory, the computer coupled with a display on which is displayable a graphical user interface (GUI), the method comprising:
receiving, by the processor, a signal indicative of selection of a factory-defined group of left and right loudspeaker arrays in a representation of the live sound system in the GUI; enabling, by the processor, creation in the representation of one or more user-defined loudspeaker arrays; integrating, by the processor, the factory-defined group of left and right loudspeaker arrays with the user-defined loudspeaker arrays to create a final group of loudspeaker arrays; creating, by the processor, an associated wiring circuit and a plurality of bandpass inputs for the final group of loudspeaker arrays; and calculating, by the processor, a plurality of properties for the final group of loudspeaker arrays to cover a geographic space of a venue, the properties including physical installation parameters of the final group of loudspeaker arrays and tuning parameters associated with a plurality of passive or active loudspeakers of the final group of loudspeaker arrays. 2. The method of claim 1, further comprising calculating the physical installation parameters with an acoustical modeling tool executed by the processor on the final group of loudspeaker arrays, the acoustical modeling tool embodied in instructions included in the memory, and where the physical installation parameters comprise loudspeaker type, number of loudspeakers required, angles between loudspeakers, overall loudspeaker array elevation, and overall sight angle orientation in order to obtain audience coverage of the venue. 3. The method of claim 1, further comprising calculating the tuning parameters with an acoustical modeling tool executed by the processor on the final group of loudspeaker arrays, the acoustical modeling tool embodied in instructions included in the memory, and where the tuning parameters comprise equalization settings, crossover settings, gain settings, and driver delay settings. 4. The method of claim 1, further comprising:
generating, by the processor, a plurality of amplifiers within the representation to drive the passive or active loudspeakers of the final group of loudspeaker arrays; associating, by the processor, amplifier channels of the amplifiers with the bandpass inputs of the final group of loudspeaker arrays in the representation according to the physical installation parameters, the amplifier channels including representations of digital signal processor (DSP) output channels coupled or integrated with respective amplifier channels; and loading, by the processor, the tuning parameters into respective representations of the DSP output channels based on configurations of the associated passive or active loudspeakers, to complete virtual configuration of the live sound system for the venue. 5. The method of claim 4, the method further comprising:
storing, in the memory, representations of the connections between the bandpass inputs of the final group of loudspeaker arrays and the associated amplifier channels, and of the connections between the associated amplifier channels and the DSP output channels, and storing the loaded tuning parameters in relation to the respective representations of the DSP output channels. 6. The method of claim 4, further comprising:
discovering via a network interconnected physical devices of the venue; receiving input signals indicative of matching the representations of the bandpass inputs of the passive or active loudspeaker arrays, of the amplifier channels, and of the DSP output channels to the corresponding interconnected physical devices of the venue discovered over the network; and in response to the input signals, transmitting addressing and tuning parameters over the network for loading into respective physical devices comprising physical amplifiers or physical DSPs. 7. The method of claim 6, further comprising:
overlaying, on top of the representation of the final group of loudspeaker arrays in the GUI, a wiring circuit representation indicating connections of the passive or active loudspeakers including the bandpass inputs for each loudspeaker array; generating a system monitoring interface in the GUI for use during a live show, the monitoring interface including an overlay of DSP parameter values on top of the representations of corresponding passive or active loudspeakers in the final group of loudspeaker arrays; and displaying in the system monitoring interface each bandpass input associated with amplifier channels that are coupled with corresponding output channels of the DSPs, to enable monitoring DSP output channel behavior in the GUI while visually maintaining the relationship of the loudspeakers with which each is associated. 8. The method of claim 6, further comprising, upon connecting the computer to a network to which physical devices are connected corresponding to the final group of loudspeaker arrays, to the amplifiers, and to the DSPs of the representation of the live sound system in the GUI:
discovering node identifications (IDs) for the bandpass inputs, amplifier channels, and output channels of the DSPs; recognizing the bandpass inputs, amplifier channels, and DSP output channels affiliated with each discovered node ID; and mapping the bandpass inputs, amplifier channels, and DSP output channels to their representations in the GUI. 9. The method of claim 8, further comprising:
transmitting the tuning parameters down to the physical DSPs over the network for configuring the physical DSP output channels remotely; and storing, in a database stored in memory coupled with the network, the tuning parameters in relation to their respective physical DSP output channels to be transmitted for re-loading in the corresponding physical DSPs at a later time. 10. A method for management of a live sound system, the method executable with a computer having a processor and memory, comprising:
enabling, by the processor, creation of a representation of the live sound system in a GUI, the representation including one or more loudspeaker arrays located in a venue and having corresponding bandpass inputs; and generating, by the processor, indicators overlaid over the representation of one or more loudspeaker arrays to display digital signal processing (DSP) parameter values of DSP output channels associated with respective bandpass inputs of the loudspeaker arrays, to enable monitoring DSP output channel behavior in the GUI while visually maintaining the relationship of the loudspeakers with which each is associated. 11. The method of claim 10, wherein at least a portion of the overlaid indicators include a summed combination of two or more DSP parameters from corresponding adjacent bandpass inputs of identical type. 12. The method of claim 10, wherein the one or more loudspeaker arrays include one or more of each of a plurality of passive or active loudspeakers, and one or more powered loudspeakers within which are integrated the DSPs, the indicators including one or more of meters, objects, or parameters. 13. The method of claim 10, wherein the DSPs include DSP devices external to and coupled with the respective amplifiers, or the DSPs include internal DSP devices integrated within the respective amplifiers. 14. The method of claim 10, further comprising grouping a plurality of amplifiers together that drive two or more loudspeakers of the one or more loudspeaker arrays to provide simultaneous control via the GUI of the two or more loudspeakers through the grouped amplifiers. 15. The method of claim 10, wherein the representation of the live sound system includes connections between the amplifier input channels and the respective DSP output channels, and connections between the bandpass inputs and associated amplifier channels. 16. 
The method of claim 15, further comprising transmitting tuning data over a network for loading into each respective DSP based on configurations of the loudspeakers for audio coverage, pre-defined loudspeaker tunings, and line array calculator information, the tuning data including one or more of equalization settings, crossover settings, gain settings, driver delay settings, and a combination thereof. | 2,100 |
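Claim 4 of the record above associates each bandpass input of the final loudspeaker arrays with an amplifier channel, then loads tuning parameters into the DSP output channel coupled with that amplifier channel. A toy sketch of that association step (the names and the sequential assignment policy are illustrative assumptions, not the patented method):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AmpChannel:
    # One amplifier channel with the tuning parameters of its
    # coupled/integrated DSP output channel.
    amp_id: int
    channel: int
    tuning: Dict[str, float] = field(default_factory=dict)

def associate(bandpass_inputs: List[str],
              channels_per_amp: int = 4) -> Dict[str, AmpChannel]:
    """Assign each bandpass input to the next free amplifier channel,
    opening a new amplifier whenever the current one is full."""
    mapping = {}
    for i, bp in enumerate(bandpass_inputs):
        mapping[bp] = AmpChannel(amp_id=i // channels_per_amp,
                                 channel=i % channels_per_amp)
    return mapping

def load_tuning(mapping: Dict[str, AmpChannel],
                tuning_by_input: Dict[str, Dict[str, float]]) -> None:
    """Load per-bandpass tuning data (gain, delay, EQ, ...) into the
    DSP channel associated with each bandpass input."""
    for bp, params in tuning_by_input.items():
        mapping[bp].tuning.update(params)
```

In the patent's terms, the resulting mapping is the virtual configuration that would later be matched to discovered physical amplifiers and DSPs and transmitted over the network.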
6,641 | 6,641 | 16,226,738 | 2,143 | The system provides a method of sorting and presenting messages in a way so that the relationship in message threads can be easily observed and related messages can be identified. The system provides a way to view messages and map message threads and inboxes in two and three dimensions so that the content of messages can be easily reviewed and the relationship between messages can be seen and followed. The system is not limited to email messages but can present the relationship between multiple types of communications including emails, instant messages, texts, tweets, bulletin boards, wikis, blogs, voice conversations, postings on social networks and other types of communications. In addition, the system allows for the inclusion of transactional information, including financial transactions, physical movement, asset deployment, or other acts or activities that may be related to, or independent of, the communications. | 1. A method for displaying the relationship of messages and transactions in a thread by displaying a root originator and one or more participants. | The system provides a method of sorting and presenting messages in a way so that the relationship in message threads can be easily observed and related messages can be identified. The system provides a way to view messages and map message threads and inboxes in two and three dimensions so that the content of messages can be easily reviewed and the relationship between messages can be seen and followed. The system is not limited to email messages but can present the relationship between multiple types of communications including emails, instant messages, texts, tweets, bulletin boards, wikis, blogs, voice conversations, postings on social networks and other types of communications. 
In addition, the system allows for the inclusion of transactional information, including financial transactions, physical movement, asset deployment, or other acts or activities that may be related to, or independent of, the communications.1. A method for displaying the relationship of messages and transactions in a thread by displaying a root originator and one or more participants. | 2,100 |
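The single claim of the record above displays a thread as a root originator plus one or more participants, with transactions allowed as nodes alongside communications. A minimal sketch of such a thread model (class and field names are hypothetical, not from the patent):

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Message:
    # A node in a communication thread: an email, text, tweet, or a
    # transaction record attached to the conversation.
    sender: str
    kind: str                      # e.g. "email", "text", "transaction"
    replies: List["Message"] = field(default_factory=list)

def participants(root: Message) -> Set[str]:
    """Collect every sender reachable from the root originator."""
    seen = {root.sender}
    for child in root.replies:
        seen |= participants(child)
    return seen

def render(root: Message, depth: int = 0) -> List[str]:
    """Flatten the thread into indented lines for a simple 2-D display."""
    lines = ["  " * depth + f"{root.sender} ({root.kind})"]
    for child in root.replies:
        lines += render(child, depth + 1)
    return lines
```

Indentation depth here stands in for the patent's two-dimensional mapping; a three-dimensional view would hang the same tree on additional axes.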
6,642 | 6,642 | 15,362,744 | 2,123 | A probabilistic machine learning model is generated to identify potential bugs in a source code file. Source code files with and without bugs are analyzed to find features indicative of a pattern of the context of a software bug, wherein the context is based on a syntactic structure of the source code. The features may be extracted from a line of source code, a method, a class and/or any combination thereof. The features are then converted into a binary representation of feature vectors that train a machine learning model to predict the likelihood of a software bug in a source code file. | 1. A system, comprising:
a memory and at least one processor; the at least one processor configured to:
obtain a plurality of source code statements from at least one source code file, at least one of the plurality of source code statements having a software bug, at least one of the plurality of source code statements not having a software bug;
transform the plurality of source code statements into a plurality of features, at least one of the feature representing a context of a software bug, at least one other feature representing a context not having a software bug;
transform the plurality of features into a plurality of feature vectors;
train a machine learning model using the feature vectors to recognize patterns in a source code file indicative of a software bug; and
generate a probability of a software bug in a target source code file using the machine learning model. 2. The system of claim 1, wherein the at least one processor transforms the plurality of source code statements into a plurality of features by converting each source code statement of the plurality of source code statements into a sequence of tokens, wherein a token is associated with a syntactic element associated with a grammar of the source code file. 3. The system of claim 2, wherein the machine learning model is a long short term memory (LSTM) model. 4. The system of claim 1, wherein the at least one processor transforms the plurality of source code statements into a plurality of features by converting and/or concatenating at least one element in at least one source code statement of the plurality of source code statements into a token, wherein a token is associated with a syntactic element associated with a grammar of the source code file. 5. The system of claim 4, wherein the machine learning model is a recurrent neural network (RNN). 6. The system of claim 1, wherein the at least one processor transforms the plurality of source code statements into a plurality of features by converting each source code statement of the plurality of source code statements into a sequence of metrics, wherein a metric is associated with a measurement of a syntactic element of a source code statement. 7. The system of claim 6, wherein the machine learning model is an artificial neural network (ANN). 8. The system of claim 1, wherein the at least one processor is further configured to:
visualize one or more source code statements from a target source code file with a corresponding probability for at least one of the one or more source code statements. 9. The system of claim 8, wherein the visualization of the one or more source code statements includes at least one of:
highlighting a source code statement in accordance with a probability; altering a font size or text color of a source code statement in accordance with a probability; annotating a source code statement with a numeric probability value; and/or annotating a source code statement with an icon representing a probability value. 10. The system of claim 8, wherein the visualization is displayed when a probability of the one or more source code statements exceeds a threshold value. 11. A method, comprising:
obtaining a plurality of source code files, at least one source code file of the plurality of source code files having a software bug, at least one source code file of the plurality of source code files not having a software bug; converting at least one portion of a source code file of the plurality of source code files into a sequence of metrics, a metric representing a measurement of a syntactic element; and using the sequence of metrics to train a machine learning model to predict a likelihood of a software bug in a portion of a target source code file. 12. The method of claim 11, wherein the portion of the target source code file includes a source code statement, a method, and/or a class. 13. The method of claim 11, wherein obtaining a plurality of source code files further comprises:
mining change records of a source code repository for source code files having been changed to fix a software bug. 14. The method of claim 11, wherein the sequence of metrics includes one or more of a number of variables, a number of mathematical operations, a number of a particular data type of elements referenced, a number of loop constructs, a usage of a particular method, and a usage of a particular data type. 15. A device, comprising:
a memory and at least one processor; a data mining engine including instructions that when executed on the at least one processor searches a source code repository for a plurality of source code files; a code analysis engine including instructions that when executed on the at least one processor converts a portion of at least one source code file having a software bug into a sequence of syntactic elements that represent a context in which a software bug exists and converts a portion of at least one source code file not having a software bug into a sequence of syntactic elements that represent a context in which a software bug fails to exist; and a training engine including instructions that when executed on the at least one processor uses the sequence of syntactic elements that represent a context in which a software bug exists and the sequence of syntactic elements that represent a context in which a software bug fails to exist to train a machine learning model to predict a likelihood of a software bug in a target source code file. 16. The device of claim 15, wherein the training engine includes further instructions that when executed on the at least one processor aggregates a contiguous set of sequences of syntactic elements into a window to generate a feature vector. 17. The device of claim 16, wherein the contiguous set of sequences includes an amount of sequences of syntactic elements preceding a select sequence and an amount of sequences of syntactic element following the select sequence. 18. The device of claim 15, wherein the portion of the at least one source code file includes one or more lines of source code and/or classes of the at least one source code file. 19. The device of claim 15, further comprising:
a visualization engine that generates a visualization identifying at least one portion of a target source code file having a likelihood of a software bug. 20. The device of claim 19, wherein the visualization includes the at least one portion of the target source code and probabilities associated with the at least one portion of the target source code. | A probabilistic machine learning model is generated to identify potential bugs in a source code file. Source code files with and without bugs are analyzed to find features indicative of a pattern of the context of a software bug, wherein the context is based on a syntactic structure of the source code. The features may be extracted from a line of source code, a method, a class and/or any combination thereof. The features are then converted into a binary representation of feature vectors that train a machine learning model to predict the likelihood of a software bug in a source code file.1. A system, comprising:
a memory and at least one processor; the at least one processor configured to:
obtain a plurality of source code statements from at least one source code file, at least one of the plurality of source code statements having a software bug, at least one of the plurality of source code statements not having a software bug;
transform the plurality of source code statements into a plurality of features, at least one of the feature representing a context of a software bug, at least one other feature representing a context not having a software bug;
transform the plurality of features into a plurality of feature vectors;
train a machine learning model using the feature vectors to recognize patterns in a source code file indicative of a software bug; and
generate a probability of a software bug in a target source code file using the machine learning model. 2. The system of claim 1, wherein the at least one processor transforms the plurality of source code statements into a plurality of features by converting each source code statement of the plurality of source code statements into a sequence of tokens, wherein a token is associated with a syntactic element associated with a grammar of the source code file. 3. The system of claim 2, wherein the machine learning model is a long short term memory (LSTM) model. 4. The system of claim 1, wherein the at least one processor transforms the plurality of source code statements into a plurality of features by converting and/or concatenating at least one element in at least one source code statement of the plurality of source code statements into a token, wherein a token is associated with a syntactic element associated with a grammar of the source code file. 5. The system of claim 4, wherein the machine learning model is a recurrent neural network (RNN). 6. The system of claim 1, wherein the at least one processor transforms the plurality of source code statements into a plurality of features by converting each source code statement of the plurality of source code statements into a sequence of metrics, wherein a metric is associated with a measurement of a syntactic element of a source code statement. 7. The system of claim 6, wherein the machine learning model is an artificial neural network (ANN). 8. The system of claim 1, wherein the at least one processor is further configured to:
visualize one or more source code statements from a target source code file with a corresponding probability for at least one of the one or more source code statements. 9. The system of claim 8, wherein the visualization of the one or more source code statements includes at least one of:
highlighting a source code statement in accordance with a probability; altering a font size or text color of a source code statement in accordance with a probability; annotating a source code statement with a numeric probability value; and/or annotating a source code statement with an icon representing a probability value. 10. The system of claim 8, wherein the visualization is displayed when a probability of the one or more source code statements exceeds a threshold value. 11. A method, comprising:
obtaining a plurality of source code files, at least one source code file of the plurality of source code files having a software bug, at least one source code file of the plurality of source code files not having a software bug; converting at least one portion of a source code file of the plurality of source code files into a sequence of metrics, a metric representing a measurement of a syntactic element; and using the sequence of metrics to train a machine learning model to predict a likelihood of a software bug in a portion of a target source code file. 12. The method of claim 11, wherein the portion of the target source code file includes a source code statement, a method, and/or a class. 13. The method of claim 11, wherein obtaining a plurality of source code files further comprises:
mining change records of a source code repository for source code files having been changed to fix a software bug. 14. The method of claim 11, wherein the sequence of metrics includes one or more of a number of variables, a number of mathematical operations, a number of a particular data type of elements referenced, a number of loop constructs, a usage of a particular method, and a usage of a particular data type. 15. A device, comprising:
a memory and at least one processor; a data mining engine including instructions that when executed on the at least one processor searches a source code repository for a plurality of source code files; a code analysis engine including instructions that when executed on the at least one processor converts a portion of at least one source code file having a software bug into a sequence of syntactic elements that represent a context in which a software bug exists and converts a portion of at least one source code file not having a software bug into a sequence of syntactic elements that represent a context in which a software bug fails to exist; and a training engine including instructions that when executed on the at least one processor uses the sequence of syntactic elements that represent a context in which a software bug exists and the sequence of syntactic elements that represent a context in which a software bug fails to exist to train a machine learning model to predict a likelihood of a software bug in a target source code file. 16. The device of claim 15, wherein the training engine includes further instructions that when executed on the at least one processor aggregates a contiguous set of sequences of syntactic elements into a window to generate a feature vector. 17. The device of claim 16, wherein the contiguous set of sequences includes an amount of sequences of syntactic elements preceding a select sequence and an amount of sequences of syntactic elements following the select sequence. 18. The device of claim 15, wherein the portion of the at least one source code file includes one or more lines of source code and/or classes of the at least one source code file. 19. The device of claim 15, further comprising:
a visualization engine that generates a visualization identifying at least one portion of a target source code file having a likelihood of a software bug. 20. The device of claim 19, wherein the visualization includes the at least one portion of the target source code and probabilities associated with the at least one portion of the target source code. | 2,100 |
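Claims 11 and 14-17 above describe converting source lines into sequences of metrics and aggregating a contiguous window of sequences (some lines before and after a selected line) into one feature vector. The following is a minimal Python sketch of that windowing step; the regex-based metric counts and the window size `k` are illustrative assumptions standing in for real syntactic analysis, not the patent's prescribed implementation.

```python
import re

# Hypothetical metric extractor: counts a few syntactic elements per source
# line, loosely following claim 14's metric list (variable declarations,
# mathematical operations, loop constructs). A real implementation would
# use a parser, not regexes.
def line_metrics(line: str) -> list[int]:
    return [
        len(re.findall(r"\b(?:int|float|var|let)\b", line)),  # declarations
        len(re.findall(r"[+\-*/%]", line)),                   # math operators
        len(re.findall(r"\b(?:for|while)\b", line)),          # loop constructs
    ]

# Claims 16-17: aggregate a contiguous window of metric sequences
# (k lines preceding and k lines following the selected line at `idx`)
# into a single feature vector.
def window_feature(lines: list[str], idx: int, k: int = 1) -> list[int]:
    lo, hi = max(0, idx - k), min(len(lines), idx + k + 1)
    vec: list[int] = []
    for i in range(lo, hi):
        vec.extend(line_metrics(lines[i]))
    return vec

src = [
    "int total = 0;",
    "for (int i = 0; i < n; i++) {",
    "total = total + a[i] / b[i];",
]
print(window_feature(src, 1, k=1))  # → [1, 0, 0, 1, 2, 1, 0, 2, 0]
```

Such feature vectors, labeled by whether the corresponding lines were later changed to fix a bug (claim 13's repository mining), would form the training set for the claimed machine learning model.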
6,643 | 6,643 | 16,126,460 | 2,194 | Methods, systems, and computer readable media may be operable to facilitate an anticipation of an execution of a process termination tool. An allocation stall counter may be queried at a certain frequency, and from the query of the allocation stall counter, a number of allocation stall counter increments occurring over a certain duration of time may be determined. If the number of allocation stall counter increments is greater than a threshold, a determination may be made that system memory is running low and that an execution of a process termination tool is imminent. In response to the determination that system memory is running low, a flag indicating that system memory is running low may be set, and one or more programs, in response to reading the flag, may free memory that is not necessary or required for execution. | 1. A method comprising:
querying an allocation stall counter; based on the query of the allocation stall counter, determining a number of allocation stall counter increments occurring over a certain duration of time; if the number of allocation stall counter increments occurring over the certain duration of time is greater than a threshold, determining that an execution of a process termination tool is imminent. 2. The method of claim 1, further comprising:
in response to the determination that an execution of a process termination tool is imminent, setting a flag indicating that a system associated with the allocation stall counter is low on available memory. 3. The method of claim 2, wherein the flag is readable by one or more processes that are running on the system. 4. The method of claim 2, further comprising:
monitoring a free memory indicator; and if free memory available to the system increases over a threshold amount, clearing the flag. 5. The method of claim 4, further comprising:
saving the value of the allocation stall counter for use in a subsequent low memory test. 6. The method of claim 1, wherein the allocation stall counter is associated with a Linux system. 7. The method of claim 1, wherein the allocation stall counter is queried at a predetermined frequency. 8. An apparatus comprising one or more modules that:
query an allocation stall counter; based on the query of the allocation stall counter, determine a number of allocation stall counter increments occurring over a certain duration of time; if the number of allocation stall counter increments occurring over the certain duration of time is greater than a threshold, determine that an execution of a process termination tool is imminent. 9. The apparatus of claim 8, wherein the one or more modules further:
in response to the determination that an execution of a process termination tool is imminent, set a flag indicating that a system associated with the allocation stall counter is low on available memory. 10. The apparatus of claim 9, wherein the flag is readable by one or more processes that are running on the system. 11. The apparatus of claim 9, wherein the one or more modules further:
monitor a free memory indicator; and if free memory available to the system increases over a threshold amount, clear the flag. 12. The apparatus of claim 11, wherein the one or more modules further:
save the value of the allocation stall counter for use in a subsequent low memory test. 13. The apparatus of claim 8, wherein the allocation stall counter is queried at a predetermined frequency. 14. One or more non-transitory computer readable media having instructions operable to cause one or more processors to perform the operations comprising:
querying an allocation stall counter; based on the query of the allocation stall counter, determining a number of allocation stall counter increments occurring over a certain duration of time; if the number of allocation stall counter increments occurring over the certain duration of time is greater than a threshold, determining that an execution of a process termination tool is imminent. 15. The one or more non-transitory computer-readable media of claim 14, wherein the instructions are further operable to cause the one or more processors to perform the operations comprising:
in response to the determination that an execution of a process termination tool is imminent, setting a flag indicating that a system associated with the allocation stall counter is low on available memory. 16. The one or more non-transitory computer-readable media of claim 15, wherein the flag is readable by one or more processes that are running on the system. 17. The one or more non-transitory computer-readable media of claim 15, wherein the instructions are further operable to cause the one or more processors to perform the operations comprising:
monitoring a free memory indicator; and if free memory available to the system increases over a threshold amount, clearing the flag. 18. The one or more non-transitory computer-readable media of claim 17, wherein the instructions are further operable to cause the one or more processors to perform the operations comprising:
saving the value of the allocation stall counter for use in a subsequent low memory test. 19. The one or more non-transitory computer-readable media of claim 14, wherein the allocation stall counter is associated with a Linux system. 20. The one or more non-transitory computer-readable media of claim 14, wherein the allocation stall counter is queried at a predetermined frequency. | 2,100 |
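The monitoring loop claimed above maps naturally onto Linux's `/proc/vmstat`. A minimal Python sketch follows, assuming the kernel exposes one or more `allocstall*` counters (the exact field names vary by kernel version: older kernels have a single `allocstall`, newer ones split it per zone); the threshold and sampling interval are arbitrary illustrative values.

```python
import time

VMSTAT_PATH = "/proc/vmstat"
# Kernel-dependent counter names (assumption; verify on the target kernel).
STALL_KEYS = ("allocstall", "allocstall_normal", "allocstall_movable", "allocstall_dma")

def read_allocstall(path: str = VMSTAT_PATH) -> int:
    """Sum whichever allocation-stall counters this kernel exposes."""
    total = 0
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(" ")
            if key in STALL_KEYS:
                total += int(value)
    return total

def low_memory_imminent(prev: int, curr: int, threshold: int) -> bool:
    # Claim 1: more than `threshold` stall increments over the sampling
    # interval is taken to mean a process termination tool (e.g. the
    # Linux OOM killer) is imminent.
    return (curr - prev) > threshold

def monitor(threshold: int = 10, interval_s: float = 5.0) -> None:
    prev = read_allocstall()                # claim 5: saved for the next test
    while True:
        time.sleep(interval_s)              # claim 7: query at a fixed frequency
        curr = read_allocstall()
        low_memory = low_memory_imminent(prev, curr, threshold)
        print(f"low_memory={low_memory}")   # claim 2's flag, stubbed as output
        prev = curr

print(low_memory_imminent(100, 125, 10))  # → True
```

In a real deployment the printed flag would instead be published somewhere cooperating processes can read it (claim 3), and cleared once a free-memory indicator recovers above a threshold (claim 4).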
6,644 | 6,644 | 16,207,508 | 2,175 | A method, computer program product, and system are provided for window placement in a visual display of a data processing system. A computer gathers data of user preferences of size and position of windows in a visual display through use of the visual display, wherein windows relate to resources accessed by a user of the data processing system. Upon a new display action, the computer determines a current context of the visual display, wherein the current context includes existing windows in the visual display. The computer applies the data of user preferences to the new display action to provide an updated display context, wherein the applying includes influencing one or more sizes and one or more positions of one or more windows in the visual display. | 1. A computer-implemented method for window placement, comprising:
gathering data of user preferences of size and position of windows in a visual display through use of the visual display, wherein windows relate to resources accessed by a user of a data processing system; determining, upon a new display action, a current context of the visual display, wherein the current context includes existing windows in the visual display; and applying the data of user preferences to the new display action to provide an updated display context, wherein the applying includes influencing one or more sizes and one or more positions of one or more windows in the visual display. 2. The method of claim 1, wherein the user preferences relate to resource categories of windows;
wherein the new display action is opening a window for a resource in the visual display; and wherein applying the data of user preferences further comprises applying a function of a size and a position of a window of a same category of resource. 3. The method of claim 1, wherein the new display action is a change to a number of monitors in the visual display and wherein applying the data of user preferences further comprises influencing one or more sizes and one or more positions of multiple windows across one or more monitors. 4. The method of claim 1, wherein gathering data of user preferences further comprises gathering window data from one or more of an operating system and screen capture. 5. The method of claim 1, wherein gathering data of user preferences further comprises gathering data when a new window is displayed. 6. The method of claim 1, further comprising receiving user categorization of windows as belonging to a category of resources. 7. The method of claim 1, further comprising:
monitoring a user reaction to the updated display context, and wherein the user reaction comprises movement of one or more windows; and adding the user reaction to the data of user preferences. 8. The method of claim 1, further comprising:
rearranging one or more existing windows in the current context of the display to accommodate the new display action. 9. The method of claim 8, wherein rearranging one or more existing windows is dependent on one or more types of content of the existing windows. 10. The method of claim 1, wherein providing the updated display context includes applying a decision tree to result in a nearest matching updated display context, and wherein the nearest matching updated display context accommodates the new display action with a least movement of existing windows. 11. A system for window placement, comprising:
one or more processors; and a memory communicatively coupled to the one or more processors, wherein the memory comprises instructions which, when executed by the one or more processors, cause the one or more processors to perform a method comprising: gathering data of user preferences of size and position of windows in a visual display through use of the visual display, wherein windows relate to resources accessed by a user of a data processing system; determining, upon a new display action, a current context of the visual display, wherein the current context includes existing windows in the visual display; and applying the data of user preferences to the new display action to provide an updated display context, wherein the applying includes influencing one or more sizes and one or more positions of one or more windows in the visual display. 12. The system of claim 11, wherein the user preferences relate to resource categories of windows;
wherein the new display action is opening a window for a resource in the visual display; and wherein applying the data of user preferences further comprises applying a function of a size and a position of a window of a same category of resource. 13. The system of claim 11, wherein the new display action is a change to a number of monitors in the visual display and wherein applying the data of user preferences further comprises influencing one or more sizes and one or more positions of multiple windows across one or more monitors. 14. The system of claim 11, wherein gathering data of user preferences further comprises gathering window data from one or more of an operating system and screen capture. 15. The system of claim 11, wherein gathering data of user preferences further comprises gathering data when a new window is displayed. 16. The system of claim 11, further comprising receiving user categorization of windows as belonging to a category of resources. 17. The system of claim 11, further comprising:
monitoring a user reaction to the updated display context, and wherein the user reaction comprises movement of one or more windows; and adding the user reaction to the data of user preferences. 18. The system of claim 11, further comprising:
rearranging one or more existing windows in the current context of the display to accommodate the new display action. 19. The system of claim 11, wherein providing the updated display context includes applying a decision tree to result in a nearest matching updated display context, and wherein the nearest matching updated display context accommodates the new display action with a least movement of existing windows. 20. A computer program product for window placement, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:
gather data of user preferences of size and position of windows in a visual display through use of the visual display, wherein windows relate to resources accessed by a user of a data processing system; determine, upon a new display action, a current context of the visual display, wherein the current context includes existing windows in the visual display; and apply the data of user preferences to the new display action to provide an updated display context, wherein the applying includes influencing one or more sizes and one or more positions of one or more windows in the visual display. | 2,100 |
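Claim 2's placement rule, applying "a function of a size and a position of a window of a same category of resource", can be sketched as below. The per-component average is an illustrative choice of that function, and the `(x, y, width, height)` geometry encoding is an assumption; neither is prescribed by the claims.

```python
from collections import defaultdict

# Hypothetical geometry record: (x, y, width, height).
Geometry = tuple[int, int, int, int]

class PlacementPreferences:
    """Gathers observed window geometries per resource category (claim 1)
    and proposes a placement for a new window of that category (claim 2)."""

    def __init__(self) -> None:
        self._observed: dict[str, list[Geometry]] = defaultdict(list)

    def observe(self, category: str, geom: Geometry) -> None:
        # Gathering step: record size/position each time a window is displayed.
        self._observed[category].append(geom)

    def propose(self, category: str, default: Geometry) -> Geometry:
        # New display action: place a window of `category` using the
        # component-wise average of past geometries in the same category,
        # falling back to a default when no preferences exist yet.
        seen = self._observed.get(category)
        if not seen:
            return default
        n = len(seen)
        return tuple(sum(c) // n for c in zip(*seen))

prefs = PlacementPreferences()
prefs.observe("editor", (0, 0, 800, 600))
prefs.observe("editor", (100, 40, 1000, 700))
print(prefs.propose("editor", (50, 50, 640, 480)))  # → (50, 20, 900, 650)
```

Monitoring the user's reaction (claim 7) would simply feed any subsequent manual move of the proposed window back through `observe`, so the preference data converges on what the user actually does.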
6,645 | 6,645 | 15,536,733 | 2,157 | The invention describes a data management system, where a declaration of causal dependency of values is realized by unified abstract connectors directly at the level of data meta-description and, for these purposes, supplements the generally accepted data model with a new fundamental member. The technical result of the invention is a significant increase in the speed of development and reliability of execution of application programs. The technical result is achieved through the use of advanced methods of internal organization and interaction of data model managing structures, as well as by methods of declarative programming. | 1. A system of data management in a computing environment, including at least one computer processor for data processing, and memory for data storage linked to the processor, in which data are represented, including but not limited to, by machine-readable information values, which are instances, derivatives from declarative subjects of a meta-description, including but not limited to, formalized machine-readable declarations, describing conceptual entities, as well as characteristics of these entities, named entity attributes; the system being characterized in that a declaration of the causal dependency of the aforementioned information values is the subject of a meta-description and assumes the form of an abstract unified connector of the aforementioned attributes, and thus the aforementioned dependency declaration is realized in such a way that each of the attributes linked by a dependency receives a uniquely identified instance derivative from a unified data structure, named a socket, and thus each instance of a socket stores a full set of address, functional and event characteristics of the aforementioned abstract connector respective to the entity attribute to which it belongs. 2. 
The system of data management according to claim 1, wherein an identifying feature is that the full set of aforementioned sockets of a single entity attribute creates an attribute socket tuple in the form of a unified array, where each socket is uniquely identified by its fixed sequential number. 3. The system of data management according to claim 1, wherein the aforementioned address characteristics of an abstract connector are represented, including but not limited to, by identifiers of a target entity attribute and socket attribute on the opposite side of the connector; the aforementioned event characteristics of the abstract connector represented, including but not limited to, by flags, indicating the direction and defining condition for a value transfer through a connector; the aforementioned functional characteristics of an abstract connector represented, including but not limited to, by flags managing the modification of a value at transmission. 4. The system of data management according to claim 1, wherein a causal dependency of the aforementioned address, functional and event characteristics of the single socket of an abstract connector of two entity attributes from an information value, derived from a third entity attribute, is also realized by the creation of two sockets forming an additional abstract connector according to claim 1, and in this case the second socket of the additional connector belongs to the attribute which owns the dependent socket, and both these sockets are linked by mutual pointers. 5. The system of data management according to claim 4, wherein the dependent socket encapsulates the socket of an additional connector. 6. 
The system of data management according to claim 1, wherein the aforementioned conceptual entity attribute has an additional characteristic, which contains the identifier of a system functional calculation method, and each individual calculation method of the full set of calculation methods of the mentioned system can be uniquely identified. 7. The system of data management according to claim 1, wherein the aforementioned conceptual entity attribute has an additional characteristic: a flag, the setting of which changes the behavior of the attribute in such a way that it is no longer associated with a derivative value in the storage form, but creates and returns a derivative value at the time of the attribute reference, by using acting instances of an abstract connector. 8. A method of data management in a computing environment inclusive of at least one computer processor for data processing and connected to the processor memory for data storage, in which data are represented, among others, by machine-readable information values, which are instances, derivatives from a meta-description, which includes, among others, formalized machine-readable declarations of conceptual entities and characteristics of these entities, named entity attributes, characterized in that a causal dependency of the aforementioned information values is declared in an abstract form of the unified connector of the aforementioned attributes and realized directly in a meta-description by the creation, for each of the linked attributes, of an instance of a unified data structure, named a socket, and in this case the mentioned attributes create information values derived from them, among others, according to the dependencies declared by the acting set of attribute sockets in such a way that the whole set of information values retains its continuous integrity and logical consistency. 9. 
The method of data management according to claim 8, wherein the aforementioned derivative information value is extracted by the reference to the respective attribute, where this attribute forms the return value, including but not limited to, by querying its sockets, then applies to the thus obtained set of values a method of the result value calculation assigned to it; in this case the socket returns the value, received by calling the attribute referenced by a socket, if the socket is assigned a flag, which allows data extraction from this socket. 10. The method of data management according to claim 8, wherein the aforementioned derivative information value is modified by passing a new value to the respective attribute, where the mentioned attribute utilizes, including but not limited to, an assigned, uniquely identifiable method of the result value calculation, and also its sockets to pass through them the changes of the derivative value, and, in doing that, initiates the process of a new value creation for the attributes referenced by the sockets; and in this case the socket effects the transmission if it is assigned a respective flag. 11. The method of data management according to claim 10, wherein the aforementioned transmission of changes of a derivative value is realized by the transmission to an attribute, referenced by a socket, of pointers to the value before the change and the new value. 12. 
A method of unified identification of metadata subjects in a computing information value management system, where the information values are instances, derivatives from declarative metadata subjects, which include instances of the unified data structure, describing a single conceptual entity and a named entity, which, in its turn, contains, including but not limited to, instances of a unified data structure, describing the entity characteristics and named attribute of entity, which, in its turn, contains, including but not limited to, instances of a unified data structure, named socket of attribute and describing causal attributes dependency, with an identifying feature being the fact that each single set of the aforementioned unified metadata subjects, namely:
1) entities respective to the instances associated with the external environment;
2) attributes of a single entity object;
3) sockets of a single attribute instance;
creates, respective to its owner, a tuple of unified instances in a form of a simple array, where each subject occupies a position, unchanged in time and uniquely identified by its sequential number. 13. A machine-readable information carrier that contains a programming code, which, when executed on a computer or microprocessor, realizes the system according to claim 1. 14. A unit for data management, containing, including but not limited to, any number of independent processors or processor cores, which together form a computing environment, where each processor or processor core has its own, connected local memory for data storage and programs, as well as programming unit communication channels to the other processors or processor cores of this unit; and data represented by, including but not limited to, information values, which are the instances, derivatives from the declarations of conceptual entities and characteristics of these entities, characterized by the fact that the declarations of the characteristics of conceptual entities and their derivative information values are distributed in the local memory of processors or processor cores in such a way that functional and event links of the aforementioned characteristics utilize the mentioned communication channels to realize the causal dependency of the mentioned values. 15. The unit for data management according to claim 14, wherein the aforementioned processors or processor cores are located on one crystal. | The invention describes a data management system, where a declaration of causal dependency of values is realized by unified abstract connectors directly at the level of data meta-description and, for these purposes, supplements the generally accepted data model with a new fundamental member. The technical result of the invention is a significant increase in the speed of development and reliability of execution of application programs. 
The technical result is achieved through the use of advanced methods of internal organization and interaction of data model managing structures, as well as by methods of declarative programming. 1. A system of data management in a computing environment, including at least one computer processor for data processing, and memory for data storage linked to the processor, in which data are represented, including but not limited to, by machine-readable information values, which are instances, derivatives from declarative subjects of a meta-description, including but not limited to, formalized machine-readable declarations, describing conceptual entities, as well as characteristics of these entities, named entity attributes; the system being demarcated by the fact that a declaration of the causal dependency of the aforementioned information values is the subject of a meta-description and assumes the form of an abstract unified connector of the aforementioned attributes, and thus the aforementioned dependency declaration is realized in such a way that each of the attributes linked by a dependency receives a uniquely identified instance derivative from a unified data structure, named a socket, and thus each instance of a socket stores a full set of address, functional and event characteristics of the aforementioned abstract connector respective to the entity attribute to which it belongs. 2. The system of data management according to claim 1, wherein an identifying feature is the fact that the full set of aforementioned sockets of a single entity attribute creates an attribute socket tuple in the form of a unified array, where each socket is uniquely identified by its fixed sequential number. 3. 
The system of data management according to claim 1, wherein the aforementioned address characteristics of an abstract connector are represented, including but not limited to, by identifiers of a target entity attribute and socket attribute on the opposite side of the connector; the aforementioned event characteristics of the abstract connector are represented, including but not limited to, by flags, indicating the direction and defining condition for a value transfer through a connector; the aforementioned functional characteristics of an abstract connector are represented, including but not limited to, by flags managing the modification of a value at transmission. 4. The system of data management according to claim 1, wherein a causal dependency of the aforementioned address, functional and event characteristics of the single socket of an abstract connector of two entity attributes from an information value, derived from a third entity attribute, is also realized by the creation of two sockets forming an additional abstract connector according to claim 1 and in this case the second socket of the additional connector belongs to the attribute, which owns the dependent socket and both these sockets are linked by mutual pointers. 5. The system of data management according to claim 4, wherein the dependent socket encapsulates the socket of an additional connector. 6. The system of data management according to claim 1, wherein the aforementioned conceptual entity attribute has an additional characteristic, which contains the identifier of a system functional calculation method, and each individual calculation method of the full set of calculation methods of the mentioned system could be uniquely identified. 7. 
The system of data management according to claim 1, wherein the aforementioned conceptual entity attribute has an additional characteristic: a flag, the setting of which changes the behavior of the attribute in such a way that it is no longer associated with a derivative value in the storage form, but creates and returns a derivative value at the time of the attribute reference, by using acting instances of an abstract connector. 8. A method of data management in a computing environment inclusive of at least one computer processor for data processing and memory for data storage connected to the processor, in which data are represented, among others, by machine-readable information values, which are instances, derivatives from a meta-description, which includes, among others, formalized machine-readable declarations of conceptual entities and characteristics of these entities, named entity attributes, with a special feature being the fact that a causal dependency of the aforementioned information values is declared in an abstract form of the unified connector of the aforementioned attributes and realized directly in a meta-description by the creation, for each of the linked attributes, of an instance of a unified data structure, named a socket, and in this case the mentioned attributes create information values derived from them, among others, according to the dependencies declared by the acting set of attribute sockets in such a way that the whole set of information values retains its continuous integrity and logical consistency. 9. 
The method of data management according to claim 8, wherein the aforementioned derivative information value is extracted by the reference to the respective attribute, where this attribute forms the return value, including but not limited to, by querying its sockets, then applies to the thus obtained set of values a method of the result value calculation assigned to it; in this case the socket returns the value, received by calling the attribute referenced by a socket, if the socket is assigned a flag, which allows data extraction from this socket. 10. The method of data management according to claim 8, wherein the aforementioned derivative information value is modified by passing a new value to the respective attribute, where the mentioned attribute utilizes, including but not limited to, an assigned, uniquely identifiable method of the result value calculation, and also its sockets to pass through them the changes of the derivative value, and, in doing that, initiates the process of a new value creation for the attributes referenced by the sockets; and in this case the socket effects the transmission if it is assigned a respective flag. 11. The method of data management according to claim 10, wherein the aforementioned transmission of changes of a derivative value is realized by the transmission to an attribute, referenced by a socket, of pointers to the value before the change and the new value. 12. 
A method of unified identification of metadata subjects in a computing information value management system, where the information values are instances, derivatives from declarative metadata subjects, which include instances of the unified data structure, describing a single conceptual entity and a named entity, which, in its turn, contains, including but not limited to, instances of a unified data structure, describing the entity characteristics and named attribute of entity, which, in its turn, contains, including but not limited to, instances of a unified data structure, named socket of attribute and describing causal attributes dependency, with an identifying feature being the fact that each single set of the aforementioned unified metadata subjects, namely:
1) entities respective to the instances associated with the external environment;
2) attributes of a single entity object;
3) sockets of a single attribute instance;
creates, respective to its owner, a tuple of unified instances in a form of a simple array, where each subject occupies a position, unchanged in time and uniquely identified by its sequential number. 13. A machine-readable information carrier that contains a programming code, which, when executed on a computer or microprocessor, realizes the system according to claim 1. 14. A unit for data management, containing, including but not limited to, any number of independent processors or processor cores, which together form a computing environment, where each processor or processor core has its own, connected local memory for data storage and programs, as well as programming unit communication channels to the other processors or processor cores of this unit; and data represented by, including but not limited to, information values, which are the instances, derivatives from the declarations of conceptual entities and characteristics of these entities, characterized by the fact that the declarations of the characteristics of conceptual entities and their derivative information values are distributed in the local memory of processors or processor cores in such a way that functional and event links of the aforementioned characteristics utilize the mentioned communication channels to realize the causal dependency of the mentioned values. 15. The unit for data management according to claim 14, wherein the aforementioned processors or processor cores are located on one crystal. | 2,100 |
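The socket-and-attribute mechanism recited in claims 1-11 above can be illustrated in software. The sketch below is a hypothetical model, not the patented implementation: all class, method, and flag names are assumptions, since the claims define concepts (attribute, socket, connector, extraction/transmission flags, calculation method) but no concrete API.

```python
# Hypothetical sketch of the claimed socket-based dependency model.
# The patent's concepts are mapped onto illustrative Python classes.

class Socket:
    """One end of an abstract connector: stores the address characteristic
    (the attribute on the opposite side) and event flags governing whether
    values may be extracted or transmitted through this socket."""
    def __init__(self, target, allow_extract, allow_transmit):
        self.target = target
        self.allow_extract = allow_extract
        self.allow_transmit = allow_transmit

class Attribute:
    """An entity attribute owning a fixed-order tuple of sockets (claim 2)
    and an assigned result-calculation method (claim 6)."""
    def __init__(self, name, calc=lambda vals: vals[0]):
        self.name = name
        self.sockets = []   # positions are fixed and sequentially numbered
        self.value = None
        self.calc = calc

    def get(self):
        # Claim 9: query sockets whose extract flag is set, then apply the
        # assigned calculation method to the obtained set of values.
        vals = [s.target.get() for s in self.sockets if s.allow_extract]
        return self.calc(vals) if vals else self.value

    def set(self, new_value):
        # Claim 10: accept the new value, then propagate the change through
        # every socket whose transmit flag is set.
        self.value = new_value
        for s in self.sockets:
            if s.allow_transmit:
                s.target.set(new_value)

def connect(reader, source):
    """Claim 1: declaring one dependency creates one socket on each linked
    attribute; opposite flags on the two sockets avoid propagation cycles."""
    reader.sockets.append(Socket(source, allow_extract=True, allow_transmit=False))
    source.sockets.append(Socket(reader, allow_extract=False, allow_transmit=True))
```

Under these assumptions, setting a value on a source attribute keeps every connected reader consistent, which is the integrity property claim 8 asserts for the whole set of information values.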
6,646 | 6,646 | 15,282,561 | 2,184 | The application provides a field programmable gate array (FPGA) and a communication method. At least one application specific integrated circuit based (ASIC-based) hard core is embedded in the FPGA. The ASIC-based hard core includes a high-speed exchange and interconnection unit and at least one station. Each station is connected to the high-speed exchange and interconnection unit. The station is configured to transmit data between each functional module in the FPGA and the ASIC-based hard core. The high-speed exchange and interconnection unit is configured to transmit data between the stations. In the FPGA provided by the application, an ASIC-based hard core is embedded, which can facilitate data exchange between each functional module and the ASIC-based hard core in proximity and reduce a time delay. | 1. A field programmable gate array (FPGA), comprising:
an application specific integrated circuit based (ASIC-based) hard core embedded in the FPGA, wherein the ASIC-based hard core comprises:
a high-speed exchange and interconnection unit; and
at least one station;
wherein
each station is connected to the high-speed exchange and interconnection unit;
the station is configured to transmit data between each functional module in the FPGA and the ASIC-based hard core;
and
the high-speed exchange and interconnection unit is configured to transmit data between the stations. 2. The FPGA according to claim 1, wherein a quantity of the stations is equal to a quantity of functional modules, and one of the stations is connected to one of the functional modules; or
each station corresponds to multiple functional modules, and each station is connected to the corresponding multiple functional modules. 3. The FPGA according to claim 2, wherein when the quantity of the stations is equal to the quantity of the functional modules, a clock frequency, a data bit width, and a time sequence that are consistent with those of a corresponding functional module are configured for each station. 4. The FPGA according to claim 1, wherein an on-chip interconnect bus protocol of the ASIC-based hard core comprises at least one of the following: AVALON, Wishbone, CoreConnect, or AMBA. 5. The FPGA according to claim 1, wherein the ASIC-based hard core is one of multiple ASIC-based hard cores which are evenly distributed in the FPGA by using a crossbar switch matrix. 6. The FPGA according to claim 4, wherein the ASIC-based hard core is a hard core using an AXI-Interconnection bus protocol, wherein the AXI bus protocol belongs to the AMBA. 7. The FPGA according to claim 6, wherein the FPGA comprises at least one logic cell bank, wherein each logic cell bank comprises at least one master station and at least one slave station. 8. The FPGA according to claim 5, wherein the ASIC-based hard core comprises two or more hard cores using an AXI-Interconnection bus protocol, wherein the hard cores using the AXI-Interconnection bus protocol communicate with each other by using an AXI bridge. 9. The FPGA according to claim 8, wherein the hard cores using the AXI-Interconnection bus protocol comprise a same quantity of master stations and a same quantity of slave stations, and have a same bit width and a same clock frequency. 10. The FPGA according to claim 8, wherein the hard cores using the AXI-Interconnection bus protocol comprise different quantities of master stations and different quantities of slave stations, and have different bit widths and different clock frequencies. 11. 
The FPGA according to claim 8, wherein some hard cores using the AXI-Interconnection bus protocol comprise a same quantity of master stations and a same quantity of slave stations, and have a same bit width and a same clock frequency; and other hard cores using the AXI-Interconnection bus protocol comprise different quantities of master stations and different quantities of slave stations, and have different bit widths and different clock frequencies. 12. The FPGA according to claim 1, wherein the ASIC-based hard core is one of multiple ASIC-based hard cores which are evenly distributed in the FPGA by using a ring bus. 13. A method for data communication based on a field programmable gate array (FPGA), wherein an application specific integrated circuit based (ASIC-based) hard core used for communication and interconnection is embedded in the FPGA; the ASIC-based hard core comprises a high-speed exchange and interconnection unit and at least one station; each station is connected to the high-speed exchange and interconnection unit; the station implements data transmission between each functional module in the FPGA and the ASIC-based hard core; the high-speed exchange and interconnection unit implements data transmission between the stations; and the method comprises the following steps:
receiving, by the high-speed exchange and interconnection unit by using a station corresponding to a source functional module, data sent by the source functional module, wherein the data carries information about a destination functional module; and sending, by the high-speed exchange and interconnection unit, the received data to the destination functional module according to the information about the destination functional module by using a station corresponding to the destination functional module. 14. The method according to claim 13, wherein
a quantity of the stations is equal to a quantity of functional modules, and one of the stations is connected to one of the functional modules; or each station corresponds to multiple functional modules, and each station is connected to the corresponding multiple functional modules. 15. The method according to claim 14, wherein when the quantity of the stations is equal to the quantity of the functional modules, a clock frequency, a data bit width, and a time sequence that are consistent with those of a corresponding functional module are configured for each station. 16. The method according to claim 13, wherein an on-chip interconnect bus protocol of the ASIC-based hard core comprises at least one of the following: AVALON, Wishbone, CoreConnect, or AMBA. 17. The method according to claim 13, wherein the ASIC-based hard core is one of multiple ASIC-based hard cores which are evenly distributed in the FPGA by using a crossbar switch matrix. 18. The method according to claim 13, wherein the ASIC-based hard core is one of multiple ASIC-based hard cores which are evenly distributed in the FPGA by using a ring bus. | The application provides a field programmable gate array (FPGA) and a communication method. At least one application specific integrated circuit based (ASIC-based) hard core is embedded in the FPGA. The ASIC-based hard core includes a high-speed exchange and interconnection unit and at least one station. Each station is connected to the high-speed exchange and interconnection unit. The station is configured to transmit data between each functional module in the FPGA and the ASIC-based hard core. The high-speed exchange and interconnection unit is configured to transmit data between the stations. In the FPGA provided by the application, an ASIC-based hard core is embedded, which can facilitate data exchange between each functional module and the ASIC-based hard core in proximity and reduce a time delay. 1. A field programmable gate array (FPGA), comprising:
an application specific integrated circuit based (ASIC-based) hard core embedded in the FPGA, wherein the ASIC-based hard core comprises:
a high-speed exchange and interconnection unit; and
at least one station;
wherein
each station is connected to the high-speed exchange and interconnection unit;
the station is configured to transmit data between each functional module in the FPGA and the ASIC-based hard core;
and
the high-speed exchange and interconnection unit is configured to transmit data between the stations. 2. The FPGA according to claim 1, wherein a quantity of the stations is equal to a quantity of functional modules, and one of the stations is connected to one of the functional modules; or
each station corresponds to multiple functional modules, and each station is connected to the corresponding multiple functional modules. 3. The FPGA according to claim 2, wherein when the quantity of the stations is equal to the quantity of the functional modules, a clock frequency, a data bit width, and a time sequence that are consistent with those of a corresponding functional module are configured for each station. 4. The FPGA according to claim 1, wherein an on-chip interconnect bus protocol of the ASIC-based hard core comprises at least one of the following: AVALON, Wishbone, CoreConnect, or AMBA. 5. The FPGA according to claim 1, wherein the ASIC-based hard core is one of multiple ASIC-based hard cores which are evenly distributed in the FPGA by using a crossbar switch matrix. 6. The FPGA according to claim 4, wherein the ASIC-based hard core is a hard core using an AXI-Interconnection bus protocol, wherein the AXI bus protocol belongs to the AMBA. 7. The FPGA according to claim 6, wherein the FPGA comprises at least one logic cell bank, wherein each logic cell bank comprises at least one master station and at least one slave station. 8. The FPGA according to claim 5, wherein the ASIC-based hard core comprises two or more hard cores using an AXI-Interconnection bus protocol, wherein the hard cores using the AXI-Interconnection bus protocol communicate with each other by using an AXI bridge. 9. The FPGA according to claim 8, wherein the hard cores using the AXI-Interconnection bus protocol comprise a same quantity of master stations and a same quantity of slave stations, and have a same bit width and a same clock frequency. 10. The FPGA according to claim 8, wherein the hard cores using the AXI-Interconnection bus protocol comprise different quantities of master stations and different quantities of slave stations, and have different bit widths and different clock frequencies. 11. 
The FPGA according to claim 8, wherein some hard cores using the AXI-Interconnection bus protocol comprise a same quantity of master stations and a same quantity of slave stations, and have a same bit width and a same clock frequency; and other hard cores using the AXI-Interconnection bus protocol comprise different quantities of master stations and different quantities of slave stations, and have different bit widths and different clock frequencies. 12. The FPGA according to claim 1, wherein the ASIC-based hard core is one of multiple ASIC-based hard cores which are evenly distributed in the FPGA by using a ring bus. 13. A method for data communication based on a field programmable gate array (FPGA), wherein an application specific integrated circuit based (ASIC-based) hard core used for communication and interconnection is embedded in the FPGA; the ASIC-based hard core comprises a high-speed exchange and interconnection unit and at least one station; each station is connected to the high-speed exchange and interconnection unit; the station implements data transmission between each functional module in the FPGA and the ASIC-based hard core; the high-speed exchange and interconnection unit implements data transmission between the stations; and the method comprises the following steps:
receiving, by the high-speed exchange and interconnection unit by using a station corresponding to a source functional module, data sent by the source functional module, wherein the data carries information about a destination functional module; and sending, by the high-speed exchange and interconnection unit, the received data to the destination functional module according to the information about the destination functional module by using a station corresponding to the destination functional module. 14. The method according to claim 13, wherein
a quantity of the stations is equal to a quantity of functional modules, and one of the stations is connected to one of the functional modules; or each station corresponds to multiple functional modules, and each station is connected to the corresponding multiple functional modules. 15. The method according to claim 14, wherein when the quantity of the stations is equal to the quantity of the functional modules, a clock frequency, a data bit width, and a time sequence that are consistent with those of a corresponding functional module are configured for each station. 16. The method according to claim 13, wherein an on-chip interconnect bus protocol of the ASIC-based hard core comprises at least one of the following: AVALON, Wishbone, CoreConnect, or AMBA. 17. The method according to claim 13, wherein the ASIC-based hard core is one of multiple ASIC-based hard cores which are evenly distributed in the FPGA by using a crossbar switch matrix. 18. The method according to claim 13, wherein the ASIC-based hard core is one of multiple ASIC-based hard cores which are evenly distributed in the FPGA by using a ring bus. | 2,100 |
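The routing method of claim 13 above — the high-speed exchange and interconnection unit receives data from a source station, reads the destination information carried with the data, and forwards it through the destination module's station — can be sketched as a small software model. This is a hypothetical illustration only: the claimed design is an on-chip hardware hard core, and all names below are assumptions.

```python
# Illustrative software model of the claimed station/interconnect routing.
from collections import deque

class Interconnect:
    """Models the high-speed exchange and interconnection unit: one station
    (inbox) per functional module, with forwarding by destination id."""
    def __init__(self):
        self.stations = {}  # functional-module id -> station inbox

    def attach(self, module_id):
        # One station per functional module (the claim 2 configuration).
        self.stations[module_id] = deque()

    def send(self, source, destination, payload):
        # The data received from the source station carries information
        # about the destination module; the unit forwards it to the
        # station corresponding to that destination.
        if destination not in self.stations:
            raise KeyError(f"no station for module {destination!r}")
        self.stations[destination].append((source, payload))

    def receive(self, module_id):
        return self.stations[module_id].popleft()
```

In hardware, the stations would additionally adapt clock frequency, bit width, and timing to their attached modules (claim 3); that adaptation is omitted here.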
6,647 | 6,647 | 15,329,487 | 2,171 | Embodiments of the present invention disclose a method and an apparatus for setting a background of a UI control. The method includes: acquiring, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper; capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper; and setting the captured wallpaper area block to a background of the UI control. In the embodiments of the present invention, the background of the UI control of the terminal can be enabled to dynamically change as the wallpaper changes, thereby enhancing flexibility and variability of a background image of the UI control. | 1-22. (canceled) 23. A method for setting a background of a user interface (UI) control, the method comprising:
acquiring, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper; capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper; and setting the captured wallpaper area block to a background of the UI control. 24. The method according to claim 23, wherein:
the wallpaper comprises a live wallpaper; and acquiring a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper comprises:
acquiring a fuzzy wallpaper obtained after fuzzy processing is performed on each changing state of an updated live wallpaper. 25. The method according to claim 23, wherein:
capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper comprises:
capturing, according to a location of the UI control on a touchscreen of the terminal, a wallpaper area block at a location corresponding to the location of the UI control from the acquired fuzzy wallpaper. 26. The method according to claim 23, wherein:
capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper comprises:
capturing multiple wallpaper area blocks; and
setting the captured wallpaper area block to a background of the UI control comprises:
setting any wallpaper area block of the multiple captured wallpaper area blocks to the background of the UI control, or
setting the multiple wallpaper area blocks to the background of the UI control one by one according to a preset polling policy. 27. The method according to claim 23, further comprising:
before acquiring, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper, performing fuzzy processing on the updated wallpaper; or when the terminal generates one wallpaper, performing fuzzy processing on the generated wallpaper. 28. The method according to claim 27, further comprising:
caching the fuzzy wallpaper obtained after the fuzzy processing is performed in internal memory of the terminal or saving the fuzzy wallpaper in a memory of the terminal, so as to acquire the fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper. 29. The method according to claim 23, wherein before capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper, the method further comprises:
sampling the acquired fuzzy wallpaper and forming a new fuzzy wallpaper from sampled data; and during the capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper, capturing a wallpaper area block from the new fuzzy wallpaper formed from the sampled data. 30. An apparatus for setting a background of a user interface (UI) control, the apparatus comprising:
an acquisition module, configured to acquire, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper; an image capturing module, configured to capture, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the fuzzy wallpaper acquired by the acquisition module; and a setting module, configured to set the wallpaper area block captured by the image capturing module to a background of the UI control. 31. The apparatus according to claim 30, wherein:
the wallpaper comprises a live wallpaper; and the acquisition module is configured to: after the wallpaper of the terminal is updated, acquire a fuzzy wallpaper obtained after fuzzy processing is performed on each changing state of an updated live wallpaper. 32. The apparatus according to claim 30, wherein the image capturing module is configured to capture, according to a location of the UI control on a touchscreen of the terminal, a wallpaper area block at a location corresponding to the location of the UI control from the fuzzy wallpaper acquired by the acquisition module. 33. The apparatus according to claim 30, wherein:
when the image capturing module captures a wallpaper area block from the fuzzy wallpaper acquired by the acquisition module, multiple wallpaper area blocks are captured; and the setting module is configured to:
set any wallpaper area block of the multiple captured wallpaper area blocks to the background of the UI control,
or set the multiple wallpaper area blocks to the background of the UI control one by one according to a preset polling policy. 34. The apparatus according to claim 30, further comprising:
a fuzzy processing module, configured to:
before the acquisition module acquires, after the wallpaper of the terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper, perform fuzzy processing on the updated wallpaper; or
when the terminal generates one wallpaper, perform fuzzy processing on the generated wallpaper. 35. The apparatus according to claim 30, further comprising:
a sampling module, configured to: sample the fuzzy wallpaper acquired by the acquisition module and form a new fuzzy wallpaper from sampled data, wherein when the image capturing module captures, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper, the image capturing module captures a wallpaper area block from the new fuzzy wallpaper formed from the sampled data. 36. A terminal, comprising:
an input/output apparatus configured to interact with a user and receive an instruction input by the user or output data to the user; a memory configured to save program data having various functions; and a processor, coupled to the memory and the input/output apparatus via a bus, configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
acquire, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper;
capture, according to a location of a user interface (UI) control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper; and
set the captured wallpaper area block to a background of the UI control. 37. The terminal according to claim 36, wherein:
the wallpaper comprises a live wallpaper; and the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to
acquire a fuzzy wallpaper obtained after fuzzy processing is performed on each changing state of an updated live wallpaper. 38. The terminal according to claim 36, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
capture, according to a location of the UI control on a touchscreen of the terminal, a wallpaper area block at a location corresponding to the location of the UI control from the acquired fuzzy wallpaper. 39. The terminal according to claim 36, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
capture multiple wallpaper area blocks; and set any wallpaper area block of the multiple captured wallpaper area blocks to the background of the UI control, or set the multiple wallpaper area blocks to the background of the UI control one by one according to a preset polling policy. 40. The terminal according to claim 36, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
perform fuzzy processing on the updated wallpaper; or when the terminal generates one wallpaper, perform fuzzy processing on the generated wallpaper. 41. The terminal according to claim 40, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
cache the fuzzy wallpaper obtained after the fuzzy processing is performed in internal memory of the terminal or save the fuzzy wallpaper in a memory of the terminal, so as to acquire the fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper. 42. The terminal according to claim 36, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
sample the acquired fuzzy wallpaper and form a new fuzzy wallpaper from sampled data; and capture a wallpaper area block from the new fuzzy wallpaper formed from the sampled data. | Embodiments of the present invention disclose a method and an apparatus for setting a background of a UI control. The method includes: acquiring, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper; capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper; and setting the captured wallpaper area block to a background of the UI control. In the embodiments of the present invention, the background of the UI control of the terminal can be enabled to dynamically change as the wallpaper changes, thereby enhancing flexibility and variability of a background image of the UI control. 1-22. (canceled) 23. A method for setting a background of a user interface (UI) control, the method comprising:
acquiring, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper; capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper; and setting the captured wallpaper area block to a background of the UI control. 24. The method according to claim 23, wherein:
the wallpaper comprises a live wallpaper; and acquiring a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper comprises:
acquiring a fuzzy wallpaper obtained after fuzzy processing is performed on each changing state of an updated live wallpaper. 25. The method according to claim 23, wherein:
capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper comprises:
capturing, according to a location of the UI control on a touchscreen of the terminal, a wallpaper area block at a location corresponding to the location of the UI control from the acquired fuzzy wallpaper. 26. The method according to claim 23, wherein:
capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper comprises:
capturing multiple wallpaper area blocks; and
setting the captured wallpaper area block to a background of the UI control comprises:
setting any wallpaper area block of the multiple captured wallpaper area blocks to the background of the UI control, or
setting the multiple wallpaper area blocks to the background of the UI control one by one according to a preset polling policy. 27. The method according to claim 23, further comprising:
before acquiring, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper, performing fuzzy processing on the updated wallpaper; or when the terminal generates one wallpaper, performing fuzzy processing on the generated wallpaper. 28. The method according to claim 27, further comprising:
caching the fuzzy wallpaper obtained after the fuzzy processing is performed in internal memory of the terminal or saving the fuzzy wallpaper in a memory of the terminal, so as to acquire the fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper. 29. The method according to claim 23, wherein before capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper, the method further comprises:
sampling the acquired fuzzy wallpaper and forming a new fuzzy wallpaper from sampled data; and during the capturing, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper, capturing a wallpaper area block from the new fuzzy wallpaper formed from the sampled data. 30. An apparatus for setting a background of a user interface (UI) control, the apparatus comprising:
an acquisition module, configured to acquire, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper; an image capturing module, configured to capture, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the fuzzy wallpaper acquired by the acquisition module; and a setting module, configured to set the wallpaper area block captured by the image capturing module to a background of the UI control. 31. The apparatus according to claim 30, wherein:
the wallpaper comprises a live wallpaper; and the acquisition module is configured to: after the wallpaper of the terminal is updated, acquire a fuzzy wallpaper obtained after fuzzy processing is performed on each changing state of an updated live wallpaper. 32. The apparatus according to claim 30, wherein the image capturing module is configured to capture, according to a location of the UI control on a touchscreen of the terminal, a wallpaper area block at a location corresponding to the location of the UI control from the fuzzy wallpaper acquired by the acquisition module. 33. The apparatus according to claim 30, wherein:
when the image capturing module captures a wallpaper area block from the fuzzy wallpaper acquired by the acquisition module, multiple wallpaper area blocks are captured; and the setting module is configured to:
set any wallpaper area block of the multiple captured wallpaper area blocks to the background of the UI control,
or set the multiple wallpaper area blocks to the background of the UI control one by one according to a preset polling policy. 34. The apparatus according to claim 30, further comprising:
a fuzzy processing module, configured to:
before the acquisition module acquires, after the wallpaper of the terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper, perform fuzzy processing on the updated wallpaper; or
when the terminal generates one wallpaper, perform fuzzy processing on the generated wallpaper. 35. The apparatus according to claim 30, further comprising:
a sampling module, configured to: sample the fuzzy wallpaper acquired by the acquisition module and form a new fuzzy wallpaper from sampled data, wherein when the image capturing module captures, according to a location of the UI control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper, the image capturing module captures a wallpaper area block from the new fuzzy wallpaper formed from the sampled data. 36. A terminal, comprising:
an input/output apparatus configured to interact with a user and receive an instruction input by the user or output data to the user; a memory configured to save program data having various functions; and a processor, coupled to the memory and the input/output apparatus via a bus, configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
acquire, after a wallpaper of a terminal is updated, a fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper;
capture, according to a location of a user interface (UI) control on a screen of the terminal, a wallpaper area block from the acquired fuzzy wallpaper; and
set the captured wallpaper area block to a background of the UI control. 37. The terminal according to claim 36, wherein:
the wallpaper comprises a live wallpaper; and the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to
acquire a fuzzy wallpaper obtained after fuzzy processing is performed on each changing state of an updated live wallpaper. 38. The terminal according to claim 36, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
capture, according to a location of the UI control on a touchscreen of the terminal, a wallpaper area block at a location corresponding to the location of the UI control from the acquired fuzzy wallpaper. 39. The terminal according to claim 36, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
capture multiple wallpaper area blocks; and set any wallpaper area block of the multiple captured wallpaper area blocks to the background of the UI control, or set the multiple wallpaper area blocks to the background of the UI control one by one according to a preset polling policy. 40. The terminal according to claim 36, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
perform fuzzy processing on the updated wallpaper; or when the terminal generates one wallpaper, perform fuzzy processing on the generated wallpaper. 41. The terminal according to claim 40, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
cache the fuzzy wallpaper obtained after the fuzzy processing is performed in internal memory of the terminal or save the fuzzy wallpaper in a memory of the terminal, so as to acquire the fuzzy wallpaper obtained after fuzzy processing is performed on the updated wallpaper. 42. The terminal according to claim 36, wherein the processor is further configured to call program data stored in the memory which, when executed by the processor, cause the processor to:
sample the acquired fuzzy wallpaper and form a new fuzzy wallpaper from sampled data; and capture a wallpaper area block from the new fuzzy wallpaper formed from the sampled data. | 2,100 |
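The capture flow the wallpaper claims describe (crop the blurred wallpaper at the UI control's screen rectangle, optionally after forming a smaller "new fuzzy wallpaper" from sampled data, and cycle multiple captured blocks under a preset polling policy) can be sketched in a few lines. This is a minimal illustration assuming a wallpaper modeled as a 2D pixel grid; the function names and the grid representation are assumptions, not part of the claims.

```python
from itertools import cycle

def capture_area_block(wallpaper, x, y, width, height):
    """Capture the wallpaper pixels under the control's screen rectangle."""
    return [row[x:x + width] for row in wallpaper[y:y + height]]

def downsample(wallpaper, factor):
    """Form a 'new fuzzy wallpaper' from sampled data: keep every
    factor-th pixel so later captures touch less memory."""
    return [row[::factor] for row in wallpaper[::factor]]

# A 6x6 stand-in for a blurred wallpaper; each value is one pixel.
wallpaper = [[10 * r + c for c in range(6)] for r in range(6)]

# A UI control occupying a 2x2 rectangle at (x=2, y=1) on the screen.
block = capture_area_block(wallpaper, x=2, y=1, width=2, height=2)
print(block)  # [[12, 13], [22, 23]]

# Capturing from the sampled wallpaper instead (the claim-29 path).
sampled = downsample(wallpaper, 2)
print(sampled[0])  # [0, 2, 4]

# Preset polling policy: rotate several captured blocks through the
# control's background one by one.
blocks = [capture_area_block(wallpaper, x, 0, 2, 2) for x in (0, 2, 4)]
polling = cycle(blocks)
background = next(polling)  # advance once per poll tick
print(background)  # [[0, 1], [10, 11]]
```

Cropping after downsampling is the point of the sampling variant: the block is taken from a strictly smaller grid, so coordinates must be scaled by the same factor before capture.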
6,648 | 6,648 | 14,947,816 | 2,161 | A system and method for recovering a dataset is provided that analyzes the dataset as it currently exists in order to determine those portions that do not need to be recovered. In some embodiments, the method includes identifying a dataset stored on a set of storage devices and corresponding to a first point in time. A request to restore the dataset to a second point in time is received, and a subset of the dataset is identified that is different between the first point in time and the second point in time. Data associated with the subset is selectively retrieved that corresponds to the second point in time, and the retrieved data is merged with the dataset stored on the set of storage devices. The two points in time may have any relationship, and in various examples, the method performs a roll-back or a roll-forward of the dataset. | 1. A method comprising:
identifying a dataset stored on a set of storage devices and corresponding to a first point in time; receiving a request to restore the dataset to a second point in time; identifying a subset of the dataset for which the subset is different between the first point in time and the second point in time; selectively retrieving data associated with the subset and corresponding to the second point in time; and merging the selectively retrieved data with the dataset stored on the set of storage devices. 2. The method of claim 1, wherein the retrieved data is structured as at least one data object, and wherein the identifying of the subset includes:
comparing a first manifest recording a first set of data objects associated with a first recovery point to a second manifest recording a second set of data objects associated with a second recovery point to identify a data object that is different between the first set and the second set. 3. The method of claim 1, wherein the identifying of the subset includes tracing a chain of recovery points between the first point in time and the second point in time. 4. The method of claim 3, wherein the identifying of the subset further includes:
for each recovery point in the chain of recovery points, comparing a first manifest recording a first set of data objects associated with the recovery point to a second manifest recording a second set of data objects associated with at least one of: a preceding recovery point or a subsequent recovery point. 5. The method of claim 1, wherein the identifying of the subset includes:
analyzing a local write log recording data extents that have been modified since a previous recovery point. 6. The method of claim 1, wherein the second point in time is previous to the first point and wherein the merging of the selectively retrieved data performs a roll-back of the dataset. 7. The method of claim 1, wherein the second point in time is subsequent to the first point and wherein the merging of the selectively retrieved data performs a roll-forward of the dataset. 8. The method of claim 1 further comprising, in response to the request, creating and storing a set of recovery objects representing the copy of the dataset stored on a set of storage devices. 9. A non-transitory machine-readable medium having stored thereon instructions for performing a method of data recovery, comprising machine executable code which when executed by at least one machine, causes the machine to:
identify a dataset corresponding to a first point in time and a recovery point of the dataset to be restored; identify data within the dataset at the first point in time that is different from corresponding data associated with the recovery point, wherein the identifying includes comparing a first manifest of recovery objects to a second manifest of recovery objects to identify at least one recovery object that is different therebetween; and selectively recover the corresponding data associated with the recovery point by retrieving the at least one recovery object. 10. The non-transitory machine-readable medium of claim 9 having stored thereon further instructions that cause the machine to trace a chain of recovery points between the first point in time and the recovery point. 11. The non-transitory machine-readable medium of claim 9 wherein the instructions that cause the machine to identify the data that is different includes instructions that cause the machine to examine a write log that records data extents that have been modified since a previous recovery point. 12. The non-transitory machine-readable medium of claim 9,
wherein the recovery point is previous to the first point in time; and wherein the instructions that cause the machine to identify the data that is different and selectively recover the corresponding data includes instructions that cause the machine to perform a roll-back of the dataset. 13. The non-transitory machine-readable medium of claim 9,
wherein the recovery point is subsequent to the first point in time; and wherein the instructions that cause the machine to identify the data that is different and selectively recover the corresponding data includes instructions that cause the machine to perform a roll-forward of the dataset. 14. A computing device comprising:
a memory containing a machine-readable medium comprising machine executable code having stored thereon instructions for performing a method of data recovery; and a processor coupled to the memory, the processor configured to execute the machine executable code to:
identify a dataset and a recovery point of the dataset to be recovered using a set of data objects stored on a data recovery system;
identify a first subset of the set of data objects that have data that is different from a corresponding portion of the dataset; and
selectively merge the data of the first subset with the dataset without merging data of a second subset of the set of data objects based on the second subset containing data that is not different from the dataset. 15. The computing device of claim 14, wherein the processor is further configured to execute the machine executable code to compare a first recovery object manifest to a second recovery object manifest to identify the first subset. 16. The computing device of claim 14, wherein the processor is further configured to execute the machine executable code to trace a chain of recovery points to identify the first subset. 17. The computing device of claim 14, wherein the processor is further configured to execute the machine executable code to identify the first subset utilizing a write log recording a portion of the dataset that has been modified since a previous recovery point. 18. The computing device of claim 14, wherein the set of data objects corresponds to a first point in time that is before a second point of time associated with the dataset prior to the merge; and wherein the merge performs a roll-back of the dataset. 19. The computing device of claim 14, wherein the set of data objects corresponds to a first point in time that is after a second point of time associated with the dataset prior to the merge; and wherein the merge performs a roll-forward of the dataset. 20. The computing device of claim 14, wherein the processor is further configured to execute the machine executable code to store another set of data object on the data recovery system that represents an incremental backup of the copy of the dataset prior to the merge. | A system and method for recovering a dataset is provided that analyzes the dataset as it currently exists in order to determine those portions that do not need to be recovered. In some embodiments, the method includes identifying a dataset stored on a set of storage devices and corresponding to a first point in time. 
A request to restore the dataset to a second point in time is received, and a subset of the dataset is identified that is different between the first point in time and the second point in time. Data associated with the subset is selectively retrieved that corresponds to the second point in time, and the retrieved data is merged with the dataset stored on the set of storage devices. The two points in time may have any relationship, and in various examples, the method performs a roll-back or a roll-forward of the dataset.1. A method comprising:
identifying a dataset stored on a set of storage devices and corresponding to a first point in time; receiving a request to restore the dataset to a second point in time; identifying a subset of the dataset for which the subset is different between the first point in time and the second point in time; selectively retrieving data associated with the subset and corresponding to the second point in time; and merging the selectively retrieved data with the dataset stored on the set of storage devices. 2. The method of claim 1, wherein the retrieved data is structured as at least one data object, and wherein the identifying of the subset includes:
comparing a first manifest recording a first set of data objects associated with a first recovery point to a second manifest recording a second set of data objects associated with a second recovery point to identify a data object that is different between the first set and the second set. 3. The method of claim 1, wherein the identifying of the subset includes tracing a chain of recovery points between the first point in time and the second point in time. 4. The method of claim 3, wherein the identifying of the subset further includes:
for each recovery point in the chain of recovery points, comparing a first manifest recording a first set of data objects associated with the recovery point to a second manifest recording a second set of data objects associated with at least one of: a preceding recovery point or a subsequent recovery point. 5. The method of claim 1, wherein the identifying of the subset includes:
analyzing a local write log recording data extents that have been modified since a previous recovery point. 6. The method of claim 1, wherein the second point in time is previous to the first point and wherein the merging of the selectively retrieved data performs a roll-back of the dataset. 7. The method of claim 1, wherein the second point in time is subsequent to the first point and wherein the merging of the selectively retrieved data performs a roll-forward of the dataset. 8. The method of claim 1 further comprising, in response to the request, creating and storing a set of recovery objects representing the copy of the dataset stored on a set of storage devices. 9. A non-transitory machine-readable medium having stored thereon instructions for performing a method of data recovery, comprising machine executable code which when executed by at least one machine, causes the machine to:
identify a dataset corresponding to a first point in time and a recovery point of the dataset to be restored; identify data within the dataset at the first point in time that is different from corresponding data associated with the recovery point, wherein the identifying includes comparing a first manifest of recovery objects to a second manifest of recovery objects to identify at least one recovery object that is different therebetween; and selectively recover the corresponding data associated with the recovery point by retrieving the at least one recovery object. 10. The non-transitory machine-readable medium of claim 9 having stored thereon further instructions that cause the machine to trace a chain of recovery points between the first point in time and the recovery point. 11. The non-transitory machine-readable medium of claim 9 wherein the instructions that cause the machine to identify the data that is different includes instructions that cause the machine to examine a write log that records data extents that have been modified since a previous recovery point. 12. The non-transitory machine-readable medium of claim 9,
wherein the recovery point is previous to the first point in time; and wherein the instructions that cause the machine to identify the data that is different and selectively recover the corresponding data includes instructions that cause the machine to perform a roll-back of the dataset. 13. The non-transitory machine-readable medium of claim 9,
wherein the recovery point is subsequent to the first point in time; and wherein the instructions that cause the machine to identify the data that is different and selectively recover the corresponding data includes instructions that cause the machine to perform a roll-forward of the dataset. 14. A computing device comprising:
a memory containing a machine-readable medium comprising machine executable code having stored thereon instructions for performing a method of data recovery; and a processor coupled to the memory, the processor configured to execute the machine executable code to:
identify a dataset and a recovery point of the dataset to be recovered using a set of data objects stored on a data recovery system;
identify a first subset of the set of data objects that have data that is different from a corresponding portion of the dataset; and
selectively merge the data of the first subset with the dataset without merging data of a second subset of the set of data objects based on the second subset containing data that is not different from the dataset. 15. The computing device of claim 14, wherein the processor is further configured to execute the machine executable code to compare a first recovery object manifest to a second recovery object manifest to identify the first subset. 16. The computing device of claim 14, wherein the processor is further configured to execute the machine executable code to trace a chain of recovery points to identify the first subset. 17. The computing device of claim 14, wherein the processor is further configured to execute the machine executable code to identify the first subset utilizing a write log recording a portion of the dataset that has been modified since a previous recovery point. 18. The computing device of claim 14, wherein the set of data objects corresponds to a first point in time that is before a second point of time associated with the dataset prior to the merge; and wherein the merge performs a roll-back of the dataset. 19. The computing device of claim 14, wherein the set of data objects corresponds to a first point in time that is after a second point of time associated with the dataset prior to the merge; and wherein the merge performs a roll-forward of the dataset. 20. The computing device of claim 14, wherein the processor is further configured to execute the machine executable code to store another set of data object on the data recovery system that represents an incremental backup of the copy of the dataset prior to the merge. | 2,100 |
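The selective-restore idea running through claims 1, 9, and 14 of the data-recovery record (compare two recovery-point manifests, retrieve only the objects that differ, merge them over the current dataset) can be sketched as follows. The manifest layout, helper names, and the object store are illustrative assumptions for this sketch, not the patented on-disk format.

```python
def changed_objects(current_manifest, target_manifest):
    """Identify the subset that differs between the two points in time
    by comparing per-object fingerprints in the two manifests."""
    return {
        name for name in target_manifest
        if current_manifest.get(name) != target_manifest[name]
    }

def selective_restore(dataset, current_manifest, target_manifest, store):
    """Retrieve only the differing objects and merge them into the
    dataset; unchanged objects are never fetched."""
    for name in changed_objects(current_manifest, target_manifest):
        dataset[name] = store[name]  # selectively retrieved data
    return dataset

# Dataset at the first point in time (current state on the storage devices).
dataset = {"A": "a-v2", "B": "b-v1", "C": "c-v1"}
current = {"A": "hash(a-v2)", "B": "hash(b-v1)", "C": "hash(c-v1)"}

# Recovery point being restored to (second point in time): only A differs.
target = {"A": "hash(a-v1)", "B": "hash(b-v1)", "C": "hash(c-v1)"}
store = {"A": "a-v1"}  # only A needs to be fetched from the recovery system

print(changed_objects(current, target))  # {'A'}
selective_restore(dataset, current, target, store)
print(dataset)
```

Whether this performs a roll-back or a roll-forward depends only on whether the target recovery point precedes or follows the dataset's current point in time; the diff-and-merge mechanics are the same in both directions.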
6,649 | 6,649 | 12,818,639 | 2,178 | Within a communication system, a real time communication session comprising a set of participants and a session moderator can be defined. At least a portion of the participants can be geographically remote from each other and can be communicatively linked via networked computing devices, each enabling participation with the real-time communication session. It can be determined that one of the participants is not able to participate in the real time communication session. At least one delegate able to substitute for the one participant subject to approval of the moderator can be ascertained. The ascertained delegate can be substituted for the one participant for the real-time communication session. | 1. A method for improved moderator control of a real time communication session comprising:
within a communication system, defining a real time communication session comprising a plurality of participants and a session moderator, wherein at least a portion of the participants are geographically remote from each other and are communicatively linked via networked computing devices enabling participation with the real-time communication session; determining that one of the participants is not able to participate in the real time communication session; ascertaining at least one delegate able to substitute for the one participant subject to approval of the moderator; and substituting the ascertained delegate for the one participant for the real-time communication session. 2. The method of claim 1, further comprising:
indicating to the moderator via a user interface the one participant and the ascertained delegate; and receiving input from the moderator via the user interface that indicates that the ascertained delegate is approved to substitute for the one participant, wherein the substituting occurs responsive to receiving the input. 3. The method of claim 1, wherein the at least one delegate able to substitute for the one participant comprises a plurality of delegates, said method further comprising:
for each of the plurality of delegates, generating a suitability score indicating a level of suitability of the corresponding delegate to substitute for the one participant; and prioritizing the plurality of delegates by suitability score. 4. The method of claim 3, further comprising:
attempting to get each of the plurality of delegates to agree to substitute for the one participant by starting with the delegate having the highest suitability score and proceeding downward in the set of delegates based on decreasing suitability score until a delegate agrees to substitute for the participant. 5. The method of claim 1, further comprising:
presenting the moderator with contact information for the delegate, which the moderator is able to utilize to communicate with the delegate to get the delegate to substitute for the one participant. 6. The method of claim 1, further comprising:
presenting the one participant with a user interface for designating at least one delegate for the real time communication session; receiving from the one participant information entered via the user interface that designates the at least one delegate; storing the received information; and presenting the stored information to the moderator responsive to determining that the one participant is not able to participate in the real time communication session. 7. The method of claim 1, further comprising:
executing a set of rules to determine a suitable person to function as the one participant, wherein suitability is based on factors specific to the real time communication session; determining contact information for the suitable person; receiving confirmation from the suitable person that the person will substitute for the one participant; and responsive to receiving the confirmation, confirming the suitable person as the ascertained delegate and substituting the suitable person for the one participant for the real time communication session. 8. The method of claim 1, wherein the real time communication session is a conference call, wherein the substituting occurs within records of the communication system. 9. The method of claim 1, wherein the defining of the real-time communication session occurs within a calendaring system that maintains calendar entries for each of the participants and the session moderator. 10. The method of claim 1, further comprising:
automatically sending an invitation request via the communication system to the ascertained delegate, said invitation indicating that the delegate is requested to replace the one participant during the real time communication session; receiving a response to the invitation request, said response indicating an agreement to replace the one participant; and responsive to and contingent upon receiving the response, substituting the ascertained delegate for the one participant. 11. The method of claim 1, wherein the determining that the one participant is not able to participate in the real time communication session occurs when the one participant fails to join the real time communication session within a defined time boundary of the start of the real time communication session;
automatically notifying the moderator that the one participant has failed to join the real time communication session within the defined time boundary; receiving a substitution approval or request from the moderator; and responsive to receiving the substitution approval or request, substituting the ascertained delegate for the one participant. 12. The method of claim 1, wherein the one participant is part of the real time communication session when the real time communication session is initiated;
detecting that the one participant disconnects from the real time communication session during the real time communication session; and responsive to detecting that the one participant disconnects, ascertaining the at least one delegate and substituting the delegate, wherein the substituting of the delegate occurs during the real time communication session and before the real time communication session ends. 13. The method of claim 1, wherein the real time communication session is a conference call, wherein the determining that the one participant is not able to participate occurs during the conference call, said method further comprising:
calling a phone number associated with the delegate to establish a telephony connection with the delegate; the moderator requesting within the telephony connection that the delegate join the conference call; and when the delegate indicates an agreement to join the conference call, connecting the delegate to the conference call that is already in session. 14. A computer program product comprising a computer readable storage medium having computer usable program code embodied therewith, the computer usable program code comprising:
computer usable program code stored in a tangible storage medium operable to, within a communication system, define a real time communication session comprising a plurality of participants and a session moderator, wherein at least a portion of the participants are geographically remote from each other and are communicatively linked via networked computing devices enabling participation with the real-time communication session; computer usable program code stored in a tangible storage medium operable to determine that one of the participants is not able to participate in the real time communication session; computer usable program code stored in a tangible storage medium operable to ascertain at least one delegate able to substitute for the one participant subject to approval of the moderator; and computer usable program code stored in a tangible storage medium operable to substitute the ascertained delegate for the one participant for the real-time communication session. 15. The computer program product of claim 14, further comprising:
computer usable program code stored in a tangible storage medium operable to indicate to the moderator via a user interface the one participant and the ascertained delegate; and computer usable program code stored in a tangible storage medium operable to receive input from the moderator via the user interface that indicates that the ascertained delegate is approved to substitute for the one participant, wherein the substituting occurs responsive to receiving the input. 16. An apparatus including an interface for improved moderator control of delegates comprising:
a tangible memory storing at least one computer program product; a processor operable to execute the computer program product to cause the interface window to be displayed by the display hardware; and the computer program product when executed by the processor being operable to determine a predefined amount of time after commencement of an electronic communication session that a first participant to the session is absent from the session; the computer program product when executed by the processor being operable to programmatically identify a delegate for the first participant for the session; and the computer program product when executed by the processor being operable to contact a moderator of the electronic communication session and to receive confirmation from the moderator that the delegate is to be contacted; the computer program product when executed by the processor being operable to automatically contact the delegate to notify the delegate of the session; and the computer program product when executed by the processor being operable to substitute the delegate for the first participant during the electronic communication session. 17. The apparatus of claim 16, wherein the electronic communication session is a real time communication session, said apparatus further comprising:
display hardware within which an interface window of a graphical user interface is displayed to the moderator; wherein the computer program product presents a notification within the graphical user interface, said notification showing the first participant is not attending the electronic communication session, wherein the confirmation from the moderator is received via the graphical user interface; and wherein the substitution of the delegate for the first participant occurs responsive to a user command issued from the graphical user interface by the moderator. 18. The apparatus of claim 16, wherein the interface is associated with a telephony application, wherein the telephony application is a Voice over Internet Protocol (VoIP) software application. 19. The apparatus of claim 17, wherein the graphical user interface is an interface of an IBM LOTUS SAMETIME client application. 20. The apparatus of claim 17, wherein the graphical user interface is configured for displaying to a second participant in the session other than the first participant, a notification indicating the first participant is absent from the session; and
wherein the graphical user interface is configured for enabling the second participant to initiate the performance of the identifying and contacting of the delegate. | Within a communication system, a real time communication session comprising a set of participants and a session moderator can be defined. At least a portion of the participants can be geographically remote from each other and can be communicatively linked via networked computing devices, each enabling participation with the real-time communication session. It can be determined that one of the participants is not able to participate in the real time communication session. At least one delegate able to substitute for the one participant subject to approval of the moderator can be ascertained. The ascertained delegate can be substituted for the one participant for the real-time communication session. 1. A method for improved moderator control of a real time communication session comprising:
within a communication system, defining a real time communication session comprising a plurality of participants and a session moderator, wherein at least a portion of the participants are geographically remote from each other and are communicatively linked via networked computing devices enabling participation with the real-time communication session; determining that one of the participants is not able to participate in the real time communication session; ascertaining at least one delegate able to substitute for the one participant subject to approval of the moderator; and substituting the ascertained delegate for the one participant for the real-time communication session. 2. The method of claim 1, further comprising:
indicating to the moderator via a user interface the one participant and the ascertained delegate; and receiving input from the moderator via the user interface that indicates that the ascertained delegate is approved to substitute for the one participant, wherein the substituting occurs responsive to receiving the input. 3. The method of claim 1, wherein the at least one delegate able to substitute for the one participant comprises a plurality of delegates, said method further comprising:
for each of the plurality of delegates, generating a suitability score indicating a level of suitability of the corresponding delegate to substitute for the one participant; and prioritizing the plurality of delegates by suitability score. | 2,100
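The delegate-selection flow recited in the claims above (generate a suitability score for each delegate, prioritize by score, then work down the list until one agrees to substitute) can be sketched as follows. This is a minimal illustration, not an implementation from the patent: the `Delegate` fields, the `select_delegate` name, and the agreement callback are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Delegate:
    # Illustrative fields; the claims do not prescribe a delegate schema.
    name: str
    contact: str
    suitability_score: float

def select_delegate(delegates: List[Delegate],
                    agrees: Callable[[Delegate], bool]) -> Optional[Delegate]:
    """Prioritize delegates by suitability score, then ask each one in
    decreasing score order until a delegate agrees to substitute."""
    for delegate in sorted(delegates, key=lambda d: d.suitability_score,
                           reverse=True):
        if agrees(delegate):
            return delegate
    return None  # no delegate agreed; the moderator would be notified

# Example: the highest-scoring delegate declines, so the next one is chosen.
candidates = [
    Delegate("Ann", "ann@example.com", 0.9),
    Delegate("Bo", "bo@example.com", 0.7),
    Delegate("Cy", "cy@example.com", 0.4),
]
chosen = select_delegate(candidates, agrees=lambda d: d.name != "Ann")
print(chosen.name)  # Bo
```

In a real system the `agrees` callback would be replaced by the invitation request and response exchange of claim 10, with the moderator approval of claim 2 gating the final substitution.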
6,650 | 6,650 | 15,596,567 | 2,177 | A method may include receiving, by a virtual reality (VR) platform, role information identifying a particular role associated with a user device; and/or identifying, by the VR platform, a virtual reality scene to be provided to the user device based on the role information, the virtual reality scene including a plurality of objects with sets of objects that are associated with information identifying respective roles associated with the sets of objects. A role may be associated with a corresponding set of objects that are relevant to a person performing the role. The method may include identifying, by the VR platform, a particular set of objects, of the sets of objects, associated with the particular role to be provided to the user device as a part of the virtual reality scene; and/or providing, by the VR platform, the virtual reality scene including the particular set of objects. | 1. A method, comprising:
receiving, by one or more devices of a virtual reality (VR) platform, role information identifying a particular role associated with a user device; identifying, by the one or more devices of the VR platform, a virtual reality scene to be provided to the user device based on the role information,
the virtual reality scene including a plurality of objects,
sets of objects, of the plurality of objects, being associated with information identifying respective roles associated with the sets of objects,
a role, of the respective roles, being associated with a corresponding set of objects based on the corresponding set of objects being relevant to a person performing the role;
identifying, by the one or more devices of the VR platform, a particular set of objects, of the sets of objects, to be provided to the user device as a part of the virtual reality scene,
the particular set of objects being associated with the particular role; and
providing, by the one or more devices of the VR platform and to the user device, the virtual reality scene including the particular set of objects. 2. The method of claim 1, where providing the virtual reality scene comprises:
providing the virtual reality scene including the plurality of objects to permit the user device to filter objects other than the particular set of objects from the virtual reality scene. 3. The method of claim 1, where an object, of the plurality of objects, is associated with two or more different roles. 4. The method of claim 1, where providing the virtual reality scene including the particular set of objects comprises:
providing information identifying a physical identifier associated with a particular object of the particular set of objects,
the physical identifier to correspond to a physical object in a vicinity of the user device, and
the user device to provide the particular object in the virtual reality scene based on detecting the physical identifier. 5. The method of claim 1, further comprising:
receiving a modification to the virtual reality scene; determining that the modification relates to a particular object of the particular set of objects; and providing the particular object to the user device for inclusion in the virtual reality scene based on the modification. 6. The method of claim 5, where the particular object relates to a plurality of roles including the particular role; and
where the method further comprises:
identifying a plurality of user devices associated with one or more roles of the plurality of roles; and
providing the particular object to the plurality of user devices for inclusion in the virtual reality scene based on the plurality of user devices being associated with the one or more roles and based on the modification. 7. A device, comprising:
one or more processors to:
determine role information identifying a particular role associated with a user device;
identify a virtual reality scene to be provided to the user device based on the role information and/or the user device,
the virtual reality scene including a plurality of objects,
sets of objects, of the plurality of objects, being associated with information identifying respective roles associated with the sets of objects;
identify a particular set of objects, of the sets of objects, to be provided to the user device as a part of the virtual reality scene based on the particular role; and
provide, to the user device, the virtual reality scene including the particular set of objects. 8. The device of claim 7, where the one or more processors are further to:
provide a notification to the user device indicating that the user device is to receive the virtual reality scene including the particular set of objects. 9. The device of claim 7, where the particular set of objects is relevant to the particular role. 10. The device of claim 7, where the one or more processors, when providing the virtual reality scene including the particular set of objects, are to:
provide the virtual reality scene including the plurality of objects to permit the user device to filter objects other than the particular set of objects from the virtual reality scene. 11. The device of claim 7, where the one or more processors are further to:
receive information identifying one or more modified objects associated with the particular role; and provide the one or more modified objects to the user device for inclusion in the virtual reality scene. 12. The device of claim 7, where the one or more processors, when determining the role information, are to:
receive the role information from the user device. 13. The device of claim 7, where the one or more processors, when determining the role information, are to:
determine the role information based on information indicating that a user of the user device is to view the virtual reality scene according to a set of roles that includes the particular role. 14. The device of claim 7, where the one or more processors, when determining the role information, are to:
determine the role information based on a location of the user device. 15. A non-transitory computer-readable medium storing instructions, the instructions comprising:
one or more instructions that, when executed by one or more processors, cause the one or more processors to:
determine role information identifying a role associated with a user device,
the role relating to a virtual reality scene to be provided to a user associated with the user device;
identify the virtual reality scene to be provided to the user device based on the role information,
the virtual reality scene including a plurality of objects,
sets of objects, of the plurality of objects, being associated with information identifying respective roles associated with the sets of objects;
identify a particular set of objects, of the sets of objects, to be provided to the user device as a part of the virtual reality scene based on the role; and
provide, to the user device, the virtual reality scene including the particular set of objects to permit the user device to provide the virtual reality scene to the user. 16. The non-transitory computer-readable medium of claim 15, where the one or more instructions, that cause the one or more processors to provide the virtual reality scene including the particular set of objects, cause the one or more processors to:
provide information identifying a physical identifier associated with a particular object of the particular set of objects,
the physical identifier to correspond to a physical object in a vicinity of the user device, and
the user device to provide or modify the particular object in the virtual reality scene based on detecting the physical identifier. 17. The non-transitory computer-readable medium of claim 15, where the virtual reality scene relates to a change management operation. 18. The non-transitory computer-readable medium of claim 15, where the one or more instructions, when executed by the one or more processors, cause the one or more processors to:
determine whether the user device has permission to access the virtual reality scene associated with the role; and where the one or more instructions, that cause the one or more processors to provide the virtual reality scene, cause the one or more processors to:
selectively provide the virtual reality scene to the user device based on determining whether the user device has permission to access the virtual reality scene,
the virtual reality scene to be provided when the user device has permission to access the virtual reality scene, and
the virtual reality scene not to be provided when the user device does not have permission to access the virtual reality scene. 19. The non-transitory computer-readable medium of claim 15, where the one or more instructions, that cause the one or more processors to determine the role information, cause the one or more processors to:
determine the role information based on information identifying the user associated with the user device. 20. The non-transitory computer-readable medium of claim 15, where the one or more instructions, that cause the one or more processors to determine the role information, cause the one or more processors to:
identify multiple, different roles associated with the user device; and where the one or more instructions, that cause the one or more processors to provide the virtual reality scene, cause the one or more processors to:
provide, to the user device, multiple, different sets of objects associated with the multiple, different roles. | A method may include receiving, by a virtual reality (VR) platform, role information identifying a particular role associated with a user device; and/or identifying, by the VR platform, a virtual reality scene to be provided to the user device based on the role information, the virtual reality scene including a plurality of objects with sets of objects that are associated with information identifying respective roles associated with the sets of objects. A role may be associated with a corresponding set of objects that are relevant to a person performing the role. The method may include identifying, by the VR platform, a particular set of objects, of the sets of objects, associated with the particular role to be provided to the user device as a part of the virtual reality scene; and/or providing, by the VR platform the virtual reality scene including the particular set of objects.1. A method, comprising:
receiving, by one or more devices of a virtual reality (VR) platform, role information identifying a particular role associated with a user device; identifying, by the one or more devices of the VR platform, a virtual reality scene to be provided to the user device based on the role information,
the virtual reality scene including a plurality of objects,
sets of objects, of the plurality of objects, being associated with information identifying respective roles associated with the sets of objects,
a role, of the respective roles, being associated with a corresponding set of objects based on the corresponding set of objects being relevant to a person performing the role;
identifying, by the one or more devices of the VR platform, a particular set of objects, of the sets of objects, to be provided to the user device as a part of the virtual reality scene,
the particular set of objects being associated with the particular role; and
providing, by the one or more devices of the VR platform and to the user device, the virtual reality scene including the particular set of objects. 2. The method of claim 1, where providing the virtual reality scene comprises:
providing the virtual reality scene including the plurality of objects to permit the user device to filter objects other than the particular set of objects from the virtual reality scene. 3. The method of claim 1, where an object, of the plurality of objects, is associated with two or more different roles. 4. The method of claim 1, where providing the virtual reality scene including the particular set of objects comprises:
providing information identifying a physical identifier associated with a particular object of the particular set of objects,
the physical identifier to correspond to a physical object in a vicinity of the user device, and
the user device to provide the particular object in the virtual reality scene based on detecting the physical identifier. 5. The method of claim 1, further comprising:
receiving a modification to the virtual reality scene; determining that the modification relates to a particular object of the particular set of objects; and providing the particular object to the user device for inclusion in the virtual reality scene based on the modification. 6. The method of claim 5, where the particular object relates to a plurality of roles including the particular role; and
where the method further comprises:
identifying a plurality of user devices associated with one or more roles of the plurality of roles; and
providing the particular object to the plurality of user devices for inclusion in the virtual reality scene based on the plurality of user devices being associated with the one or more roles and based on the modification. 7. A device, comprising:
one or more processors to:
determine role information identifying a particular role associated with a user device;
identify a virtual reality scene to be provided to the user device based on the role information and/or the user device,
the virtual reality scene including a plurality of objects,
sets of objects, of the plurality of objects, being associated with information identifying respective roles associated with the sets of objects;
identify a particular set of objects, of the sets of objects, to be provided to the user device as a part of the virtual reality scene based on the particular role; and
provide, to the user device, the virtual reality scene including the particular set of objects. 8. The device of claim 7, where the one or more processors are further to:
provide a notification to the user device indicating that the user device is to receive the virtual reality scene including the particular set of objects. 9. The device of claim 7, where the particular set of objects is relevant to the particular role. 10. The device of claim 7, where the one or more processors, when providing the virtual reality scene including the particular set of objects, are to:
provide the virtual reality scene including the plurality of objects to permit the user device to filter objects other than the particular set of objects from the virtual reality scene. 11. The device of claim 7, where the one or more processors are further to:
receive information identifying one or more modified objects associated with the particular role; and provide the one or more modified objects to the user device for inclusion in the virtual reality scene. 12. The device of claim 7, where the one or more processors, when determining the role information, are to:
receive the role information from the user device. 13. The device of claim 7, where the one or more processors, when determining the role information, are to:
determine the role information based on information indicating that a user of the user device is to view the virtual reality scene according to a set of roles that includes the particular role. 14. The device of claim 7, where the one or more processors, when determining the role information, are to:
determine the role information based on a location of the user device. 15. A non-transitory computer-readable medium storing instructions, the instructions comprising:
one or more instructions that, when executed by one or more processors, cause the one or more processors to:
determine role information identifying a role associated with a user device,
the role relating to a virtual reality scene to be provided to a user associated with the user device;
identify the virtual reality scene to be provided to the user device based on the role information,
the virtual reality scene including a plurality of objects,
sets of objects, of the plurality of objects, being associated with information identifying respective roles associated with the sets of objects;
identify a particular set of objects, of the sets of objects, to be provided to the user device as a part of the virtual reality scene based on the role; and
provide, to the user device, the virtual reality scene including the particular set of objects to permit the user device to provide the virtual reality scene to the user. 16. The non-transitory computer-readable medium of claim 15, where the one or more instructions, that cause the one or more processors to provide the virtual reality scene including the particular set of objects, cause the one or more processors to:
provide information identifying a physical identifier associated with a particular object of the particular set of objects,
the physical identifier to correspond to a physical object in a vicinity of the user device, and
the user device to provide or modify the particular object in the virtual reality scene based on detecting the physical identifier. 17. The non-transitory computer-readable medium of claim 15, where the virtual reality scene relates to a change management operation. 18. The non-transitory computer-readable medium of claim 15, where the one or more instructions, when executed by the one or more processors, cause the one or more processors to:
determine whether the user device has permission to access the virtual reality scene associated with the role; and where the one or more instructions, that cause the one or more processors to provide the virtual reality scene, cause the one or more processors to:
selectively provide the virtual reality scene to the user device based on determining whether the user device has permission to access the virtual reality scene,
the virtual reality scene to be provided when the user device has permission to access the virtual reality scene, and
the virtual reality scene not to be provided when the user device does not have permission to access the virtual reality scene. 19. The non-transitory computer-readable medium of claim 15, where the one or more instructions, that cause the one or more processors to determine the role information, cause the one or more processors to:
determine the role information based on information identifying the user associated with the user device. 20. The non-transitory computer-readable medium of claim 15, where the one or more instructions, that cause the one or more processors to determine the role information, cause the one or more processors to:
identify multiple, different roles associated with the user device; and where the one or more instructions, that cause the one or more processors to provide the virtual reality scene, cause the one or more processors to:
provide, to the user device, multiple, different sets of objects associated with the multiple, different roles. | 2,100 |
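The role-based object selection recited in the device and media claims above (identify the sets of objects whose associated roles match the device's role or roles, and provide that subset as part of the scene) can be sketched as follows. This is an illustrative sketch, not the patent's implementation; all names and the data layout are assumptions.

```python
# Hypothetical sketch of role-based scene filtering (claims 7 and 20 above).
# scene_objects maps an object name to the set of roles associated with it;
# a device associated with multiple, different roles receives the union of
# the per-role object sets.

def select_objects(scene_objects, device_roles):
    """Return the objects whose associated role set overlaps the device's roles."""
    return [obj for obj, roles in scene_objects.items() if roles & device_roles]

scene = {
    "wiring_diagram": {"electrician"},
    "pipe_layout": {"plumber"},
    "floor_plan": {"electrician", "plumber"},
}

print(sorted(select_objects(scene, {"electrician"})))
# prints ['floor_plan', 'wiring_diagram']
```

A device reporting both roles would receive all three objects, matching claim 20's provision of multiple, different sets of objects for multiple, different roles.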
6,651 | 6,651 | 16,393,425 | 2,187 | Methods and systems for facilitating an equitable bandwidth distribution across downstream devices in asymmetrical switch topologies, and in particular asymmetrical PCIe switch topologies. The equitable distribution of bandwidth is achieved in asymmetrical topologies using virtual switch partitioning. An upstream switch that is connected to the root complex via an upstream port and that receives bandwidth B from the upstream port, is virtualized into two or more virtual switches. Each virtual switch equally shares the bandwidth. Each virtual switch is allocated to downstream devices that are connected to the upstream switch as well as to one or more downstream switches that are connected to the upstream switch. Each downstream switch may be connected to one or more additional downstream devices. | 1. A system, comprising:
an upstream switch connected to a root complex through a first upstream port that has a bandwidth B, wherein:
the upstream switch is virtualized into n virtual switches, wherein each of the n virtual switches receives B/n bandwidth from the first upstream port;
for a first virtual switch of the n virtual switches, the first virtual switch is connected to a first plurality of downstream devices through a first plurality of downstream ports in the upstream switch;
for each remaining virtual switch of the n virtual switches, the virtual switch is connected to a respective second upstream port of a respective downstream switch, wherein the respective second upstream port of the respective downstream switch is connected to the upstream switch through a respective downstream port in the upstream switch; and
for each respective downstream switch, the downstream switch is connected to a respective second plurality of downstream devices through a respective second plurality of downstream ports in the respective downstream switch. 2. The system of claim 1, wherein n=2. 3. The system of claim 2, wherein the total number of the first plurality of downstream ports is the same as the total number of the second plurality of downstream ports. 4. The system of claim 1, wherein:
the first plurality of downstream devices comprises m devices; and the respective second plurality of downstream devices comprises p devices, where m is not equal to p. 5. The system of claim 4, wherein:
each of the m devices receives B/(n*m) bandwidth; and each of the p devices receives B/(n*p) bandwidth. 6. The system of claim 1, wherein each of the first upstream port, the first plurality of downstream ports, and the respective second plurality of downstream ports has the same number of lanes. 7. The system of claim 1, wherein each switch is a PCIe switch. 8. A method comprising:
connecting an upstream switch to a root complex through a first upstream port in the upstream switch; virtualizing the upstream switch into n virtual switches, wherein each of the n virtual switches receives B/n bandwidth from the first upstream port; for a first virtual switch of the n virtual switches, connecting a first plurality of downstream devices through a first plurality of downstream ports in the upstream switch; for each remaining virtual switch of the n virtual switches, connecting a second upstream port of a respective downstream switch to the upstream switch through a respective downstream port in the upstream switch; and for each respective downstream switch, connecting a respective second plurality of downstream devices through a respective second plurality of downstream ports in the at least one respective downstream switch. 9. The method of claim 8, wherein n=2. 10. The method of claim 9, wherein the total number of the first plurality of downstream ports is the same as the total number of the second plurality of downstream ports. 11. The method of claim 8, wherein:
the first plurality of downstream devices comprises m devices; and the respective second plurality of downstream devices comprises p devices, where m is not equal to p. 12. The method of claim 11, wherein:
each of the m devices receives B/(n*m) bandwidth from the first upstream port; and each of the p devices receives B/(n*p) bandwidth from the first upstream port. 13. The method of claim 8, wherein each of the first upstream port, the first plurality of downstream ports, and the respective second plurality of downstream ports has the same number of lanes. 14. The method of claim 13, wherein each switch is a PCIe switch.
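The bandwidth arithmetic in these claims (bandwidth B split evenly across n virtual switches, so a device behind a virtual switch serving k devices receives B/(n*k)) can be sketched directly. This is an illustrative sketch under the claims' stated formulas, not the patent's implementation; the function name and example numbers are assumptions.

```python
# Hypothetical sketch of per-device bandwidth under virtual switch
# partitioning (claims 1-5 above). With total bandwidth B divided across
# n virtual switches, each device behind virtual switch i (which serves
# device_counts[i] devices) receives B / (n * device_counts[i]).

def per_device_bandwidth(total_bw, device_counts):
    """device_counts[i] = number of downstream devices behind virtual switch i."""
    n = len(device_counts)
    return [total_bw / (n * k) for k in device_counts]

# n = 2 virtual switches (claim 2): m = 4 devices attached directly to the
# upstream switch, p = 8 devices behind the downstream switch (m != p, claim 4).
print(per_device_bandwidth(64.0, [4, 8]))  # prints [8.0, 4.0]
```

Note how the asymmetric topology still gives each virtual switch an equal B/n share, so the less-populated side ends up with more bandwidth per device.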
6,652 | 6,652 | 15,605,331 | 2,123 | A method disclosed herein provides for escalation of machine learning content selection for content moderation use. The method includes requesting reaction feedback from users of an online social community platform in association with each of a number of user-provided content items appearing in the online social community platform. The reaction feedback is analyzed to identify a subset of the user-provided content items satisfying reaction consensus criteria, and content moderation logic is then trained based on the subset of content items identified from the analysis of the reaction feedback to facilitate selective implementation of content moderation actions based on the trained content moderation logic. | 1. A method for escalating machine-learning selection of content of moderated terms for content moderation, the method comprising:
requesting reaction feedback from users of an online social community platform in association with each of a number of user-provided content items appearing in the online social community platform; analyzing the reaction feedback from the users of the online social community platform with respect to each of the user-provided content items to identify a subset of the user-provided content items satisfying reaction consensus criteria; training content moderation logic based on the subset of content items identified from the analysis of the reaction feedback; and selectively performing a content moderation action based on the trained content moderation logic. 2. The method of claim 1, further comprising:
receiving a notification of potentially objectionable content in association with each one of the user-provided content items; and requesting the reaction feedback from the users responsive to the receipt of notification of the potentially objectionable content. 3. The method of claim 1, wherein each of the users of the online social community platform is granted access to content of the online social community platform responsive to authentication of a personal account credential and wherein soliciting the reaction feedback further comprises soliciting the reaction feedback from a user in association with the personal account credential of the user. 4. The method of claim 1, wherein training the content moderation logic further comprises:
updating a moderation data store to include an item and at least one associated usage context in which the item is identified as satisfying the reaction consensus criteria; and updating the content moderation logic to provide for performance of a content moderation action responsive to identification of an instance of the item appearing in a context matching the at least one associated usage context. 5. The method of claim 1, wherein analyzing the reaction feedback from users of the online social community platform further comprises identifying a geographic source of a subset of the reaction feedback, the subset of the reaction feedback satisfying the reaction consensus criteria for a select content item;
and wherein training the content moderation logic further comprises updating a moderation data store to associate the geographic source with the select content item. 6. The method of claim 5 wherein the method further comprises:
identifying an instance of the select content item within the online social community platform; and
selectively removing the instance of the select content item from accessible online space of a subset of the users residing in a geographic location corresponding to the geographic source while permitting the instance of the content item to remain within accessible online space of a subset of the users residing in other geographic locations. 7. The method of claim 1, further comprising:
periodically scanning content in the online social community platform to track usage frequency of the content items satisfying the reaction consensus criteria; detecting an increase in the usage frequency of a first content item of the content items satisfying the reaction consensus criteria, the increase in the usage frequency satisfying a threshold; and responsive to the detected increase in the usage frequency, training the content moderation logic to automatically perform a content moderation action on content including the first content item in the online social community platform. 8. The method of claim 1, wherein selectively performing a content moderation further comprises:
automatically flagging content for further review. 9. A content moderation system comprising:
a reaction feedback collection and analysis engine stored in memory and executable by a processor to:
solicit reaction feedback from users of an online social community platform in association with each of a number of user-provided content items appearing in the online social community platform;
analyze the reaction feedback from the users of the online social community platform with respect to each of the user-provided content items to identify a subset of the user-provided content items satisfying reaction consensus criteria;
train content moderation logic based on the subset of content items identified from the analysis of the reaction feedback; and
a content moderation engine stored in memory and executable by a processor to selectively perform a content moderation action based on the trained content moderation logic. 10. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to:
receive a notification of potentially objectionable content in association with each one of the user-provided content items; and request the reaction feedback from the users responsive to the receipt of notification of the potentially objectionable content. 11. The content moderation system of claim 9, wherein each of the users of the online social community platform is granted access to content in the online social community platform responsive to authentication of a personal account credential and wherein soliciting the reaction feedback further comprises soliciting the reaction feedback from a user in association with the personal account credential of the user. 12. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to:
update a data store to include a content item and at least one associated usage context in which the content item is identified as satisfying the reaction consensus criteria; and update the content moderation logic to provide for performance of a content moderation action responsive to identification of an instance of the content item appearing in a context matching the at least one associated usage context. 13. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to:
analyze the reaction feedback from users of the online social community platform by identifying a geographic source of a subset of the reaction feedback, the subset of the reaction feedback satisfying the reaction consensus criteria for a select content item; and train the content moderation logic by updating a moderation data store to associate the geographic source in memory with the select content item. 14. The content moderation system of claim 13, wherein the reaction feedback collection and analysis engine is further configured to:
identify an instance of the select content item within the online social community platform; and selectively remove the instance of the select content item from accessible online space of a subset of the users residing in a geographic location corresponding to the geographic source while permitting the instance of the select content item to remain within accessible online space of a subset of the users residing in other geographic locations. 15. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to:
scan content in the online social community platform to track a usage frequency of the content items identified as satisfying the reaction consensus criteria; detect an increase in the usage frequency of a first content item of the content items identified as satisfying the reaction consensus criteria, the increase in the usage frequency satisfying a threshold; and responsive to the detected increase in the usage frequency, train the content moderation logic to remove from the online social community platform content including the first content item. 16. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to selectively perform the content moderation action by automatically removing content from accessible online space of one or more users of the online social community platform. 17. One or more processor-readable storage media of a tangible article of manufacture encoding computer-executable instructions for executing on a computer system a computer process, the computer process comprising:
receiving reaction feedback from users of an online social community platform in association with each of a number of user-provided content items appearing in the online social community platform, the reaction feedback from each of the users associated with a personal access credential to a primary domain managing the online social community platform; analyzing the reaction feedback from the users of the online social community platform with respect to each of the user-provided content items to identify a subset of the user-provided content items satisfying reaction consensus criteria; training content moderation logic based on the subset of content items identified from the analysis of the reaction feedback; and selectively performing a content moderation action based on the trained content moderation logic. 18. The one or more processor-readable storage media of claim 17, wherein selectively performing the content moderation action further comprises
selectively performing the content moderation action responsive to identification of one or more content items of the identified subset within the online social community platform. 19. The one or more processor-readable storage media of claim 16, wherein the content moderation action includes removing an instance of one or more content items of the identified subset from accessible online space of at least one of the users of the online social community platform. 20. The one or more processor-readable storage media of claim 16, wherein the content moderation action includes an action directed toward a user responsible for uploading an instance of one or more content items of the identified subset to the online social community platform. | A method disclosed herein provides for escalation of machine learning content selection for content moderation use. The method includes requesting reaction feedback from users of an online social community platform in association with each of a number of user-provided content items appearing in the online social community platform. The reaction feedback is analyzed to identify a subset of the user-provided content items satisfying reaction consensus criteria, and content moderation logic is then trained based on the subset of content items identified from the analysis of the reaction feedback to facilitate selective implementation of content moderation actions based on the trained content moderation logic.1. A method for escalating machine-learning selection of content of moderated terms for content moderation, the method comprising:
requesting reaction feedback from users of an online social community platform in association with each of a number of user-provided content items appearing in the online social community platform; analyzing the reaction feedback from the users of the online social community platform with respect to each of the user-provided content items to identify a subset of the user-provided content items satisfying reaction consensus criteria; training content moderation logic based on the subset of content items identified from the analysis of the reaction feedback; and selectively performing a content moderation action based on the trained content moderation logic. 2. The method of claim 1, further comprising:
receiving a notification of potentially objectionable content in association with each one of the user-provided content items; and requesting the reaction feedback from the users responsive to the receipt of notification of the potentially objectionable content. 3. The method of claim 1, wherein each of the users of the online social community platform is granted access to content of the online social community platform responsive to authentication of a personal account credential and wherein soliciting the reaction feedback further comprises soliciting the reaction feedback from a user in association with the personal account credential of the user. 4. The method of claim 1, wherein training the content moderation logic further comprises:
updating a moderation data store to include an item and at least one associated usage context in which the item is identified as satisfying the reaction consensus criteria; and updating the content moderation logic to provide for performance of a content moderation action responsive to identification of an instance of the item appearing in a context matching the at least one associated usage context. 5. The method of claim 1, wherein analyzing the reaction feedback from users of the online social community platform further comprises identifying a geographic source of a subset of the reaction feedback, the subset of the reaction feedback satisfying the reaction consensus criteria for a select content item;
and wherein training the content moderation logic further comprises updating a moderation data store to associate the geographic source with the select content item. 6. The method of claim 5 wherein the method further comprises:
identifying an instance of the select content item within the online social community platform; and
selectively removing the instance of the select content item from accessible online space of a subset of the users residing in a geographic location corresponding to the geographic source while permitting the instance of the content item to remain within accessible online space of a subset of the users residing in other geographic locations. 7. The method of claim 1, further comprising:
periodically scanning content in the online social community platform to track usage frequency of the content items satisfying the reaction consensus criteria; detecting an increase in the usage frequency of a first content item of the content items satisfying the reaction consensus criteria, the increase in the usage frequency satisfying a threshold; and responsive to the detected increase in the usage frequency, training the content moderation logic to automatically perform a content moderation action on content including the first content item in the online social community platform. 8. The method of claim 1, wherein selectively performing a content moderation further comprises:
automatically flagging content for further review. 9. A content moderation system comprising:
a reaction feedback collection and analysis engine stored in memory and executable by a processor to:
solicit reaction feedback from users of an online social community platform in association with each of a number of user-provided content items appearing in the online social community platform;
analyze the reaction feedback from the users of the online social community platform with respect to each of the user-provided content items to identify a subset of the user-provided content items satisfying reaction consensus criteria;
train content moderation logic based on the subset of content items identified from the analysis of the reaction feedback; and
a content moderation engine stored in memory and executable by a processor to selectively perform a content moderation action based on the trained content moderation logic. 10. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to:
receive a notification of potentially objectionable content in association with each one of the user-provided content items; and request the reaction feedback from the users responsive to the receipt of notification of the potentially objectionable content. 11. The content moderation system of claim 9, wherein each of the users of the online social community platform is granted access to content in the online social community platform responsive to authentication of a personal account credential and wherein soliciting the reaction feedback further comprises soliciting the reaction feedback from a user in association with the personal account credential of the user. 12. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to:
update a data store to include a content item and at least one associated usage context in which the content item is identified as satisfying the reaction consensus criteria; and update the content moderation logic to provide for performance of a content moderation action responsive to identification of an instance of the content item appearing in a context matching the at least one associated usage context. 13. The content moderation system of claim 9, the reaction feedback collection and analysis engine is further configured to:
analyze the reaction feedback from users of the online social community platform by identifying a geographic source of a subset of the reaction feedback, the subset of the reaction feedback satisfying the reaction consensus criteria for a select content item; and train the content moderation logic by updating a moderation data store to associate the geographic source in memory with the select content item. 14. The content moderation system of claim 13, wherein the reaction feedback collection and analysis engine is further configured to:
identify an instance of the select content item within the online social community platform; and selectively remove the instance of the select content item from accessible online space of a subset of the users residing in a geographic location corresponding to the geographic source while permitting the instance of the select content item to remain within accessible online space of a subset of the users residing in other geographic locations. 15. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to:
scan content in the online social community platform to track a usage frequency of the content items identified as satisfying the reaction consensus criteria; detect an increase in the usage frequency of a first content item of the content items identified as satisfying the reaction consensus criteria, the increase in the usage frequency satisfying a threshold; and responsive to the detected increase in the usage frequency, train the content moderation logic to remove from the online social community platform content including the first content item. 16. The content moderation system of claim 9, wherein the reaction feedback collection and analysis engine is further configured to selectively perform the content moderation action by automatically removing content from accessible online space of one or more users of the online social community platform. 17. One or more processor-readable storage media of a tangible article of manufacture encoding computer-executable instructions for executing on a computer system a computer process, the computer process comprising:
receiving reaction feedback from users of an online social community platform in association with each of a number of user-provided content items appearing in the online social community platform, the reaction feedback from each of the users associated with a personal access credential to a primary domain managing the online social community platform; analyzing the reaction feedback from the users of the online social community platform with respect to each of the user-provided content items to identify a subset of the user-provided content items satisfying reaction consensus criteria; training content moderation logic based on the subset of content items identified from the analysis of the reaction feedback; and selectively performing a content moderation action based on the trained content moderation logic. 18. The one or more processor-readable storage media of claim 17, wherein selectively performing the content moderation action further comprises
selectively performing the content moderation action responsive to identification of one or more content items of the identified subset within the online social community platform. 19. The one or more processor-readable storage media of claim 17, wherein the content moderation action includes removing an instance of one or more content items of the identified subset from accessible online space of at least one of the users of the online social community platform. 20. The one or more processor-readable storage media of claim 17, wherein the content moderation action includes an action directed toward a user responsible for uploading an instance of one or more content items of the identified subset to the online social community platform. | 2,100 |
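Claims 9–20 above train content moderation logic from reaction feedback that satisfies "reaction consensus criteria," but leave the criteria themselves unspecified. The sketch below assumes a simple threshold rule (a minimum report count and a minimum negative-reaction ratio — both invented here, not taken from the patent) and models the "trained" moderation logic as a plain block-list.

```python
from collections import defaultdict

def find_consensus_items(feedback, min_reports=3, min_ratio=0.6):
    """Flag items whose feedback meets an assumed consensus rule:
    at least `min_reports` reactions, with a negative ratio >= `min_ratio`."""
    counts = defaultdict(lambda: [0, 0])  # item_id -> [negative, total]
    for item_id, is_negative in feedback:
        counts[item_id][1] += 1
        if is_negative:
            counts[item_id][0] += 1
    return {item_id for item_id, (neg, total) in counts.items()
            if total >= min_reports and neg / total >= min_ratio}

class ModerationLogic:
    """Minimal stand-in for the claimed content moderation logic:
    a block-list trained from consensus-flagged items."""
    def __init__(self):
        self.blocked = set()

    def train(self, flagged_items):
        self.blocked |= flagged_items

    def moderate(self, item_id):
        return "remove" if item_id in self.blocked else "allow"

# Hypothetical feedback stream: (item_id, reaction_was_negative)
feedback = [("meme1", True), ("meme1", True), ("meme1", True),
            ("meme1", False), ("meme2", False), ("meme2", True)]
logic = ModerationLogic()
logic.train(find_consensus_items(feedback))
print(logic.moderate("meme1"))  # → remove (3/4 negative, meets both thresholds)
print(logic.moderate("meme2"))  # → allow (too few reports)
```

A real system per claim 13 would additionally record usage context (e.g. geographic source) alongside each flagged item; the block-list here keys on the item alone for brevity.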
6,653 | 6,653 | 16,521,747 | 2,191 | Methods and apparatus provide for downloading application software from a server, including: downloading the application software from the server, where a first application software file contains only a portion of the application software, and a second application software file contains more than the portion of the application software; executing the application software and generating application images based thereon, where execution of the first application software file contains enough of the application software to execute a limited amount of the application software; and displaying the application images on a display screen based on the execution of the application software, where the acquisition unit begins downloading the second application software file in a background process after downloading the first application software file and at least partially during the execution of the first application software file. | 1. An information processing apparatus for downloading application software from a server, comprising:
an acquisition unit for downloading the application software from the server, where a first application software file contains only a portion of the application software, and a second application software file contains more than the portion of the application software; an execution unit for executing the application software and generating application images based thereon, where execution of the first application software file contains enough of the application software to execute a limited amount of the application software; and a display processing unit for displaying the application images on a display screen based on the execution of the application software, wherein the acquisition unit begins downloading the second application software file in a background process after downloading the first application software file and at least partially during the execution of the first application software file. 2. The information processing apparatus according to claim 1, wherein the display processing unit displays a progressive bar indicating a download status of the second application software file in a region of the display screen that does not interfere with the application images. 3. The information processing apparatus according to claim 1, wherein, even if the acquisition unit completes downloading of the second application software file, the execution unit continues to execute the application software based on the first application software file for at least some period of time without terminating the application software. 4. The information processing apparatus according to claim 1, further comprising an installation processing unit for installing the application software after downloading, wherein when the acquisition unit acquires the first application software, the installation processing unit installs the first application software file. 5. 
The information processing apparatus according to claim 4, wherein upon completion of downloading the first application software file, the acquisition unit instructs the installation processing unit to install the first application software file. 6. The information processing apparatus according to claim 4, wherein, when the acquisition unit completes downloading of the second application software file, the installation processing unit automatically installs the second application software file. 7. The information processing apparatus according to claim 1, wherein, when the acquisition unit completes downloading of the second application software file, the display processing unit notifies the user that a more complete download of the application software has been completed. 8. The information processing apparatus according to claim 1, wherein the second application software file is a complete amount of the application software. 9. A method for downloading application software from a server, comprising:
downloading the application software from the server, where a first application software file contains only a portion of the application software, and a second application software file contains more than the portion of the application software; executing the application software and generating application images based thereon, where execution of the first application software file contains enough of the application software to execute a limited amount of the application software; and displaying the application images on a display screen based on the execution of the application software, wherein the acquisition unit begins downloading the second application software file in a background process after downloading the first application software file and at least partially during the execution of the first application software file. 10. The method according to claim 9, further comprising displaying a progressive bar indicating a download status of the second application software file in a region of the display screen that does not interfere with the application images. 11. The method according to claim 9, wherein, even if the downloading completes downloading of the second application software file, the executing continues to execute the application software based on the first application software file for at least some period of time without terminating the application software. 12. The method according to claim 9, further comprising installing the application software after downloading, wherein when the first application software is downloaded, the first application software file is installed. 13. The method according to claim 12, wherein upon completion of downloading the first application software file, the installing automatically begins the installing of the first application software file. 14. 
The method according to claim 12, wherein, when the downloading of the second application software file is complete, installation of the second application software file is automatically started. 15. The method according to claim 9, wherein, when the downloading of the second application software file is complete, the method includes notifying the user that a more complete download of the application software has been completed. 16. The method according to claim 9, wherein the second application software file is a complete amount of the application software. 17. A non-transitory, computer readable storage medium containing a computer program, which when executed by a computer system, causes the computer system to carry out actions for downloading application software from a server, the actions comprising:
downloading the application software from the server, where a first application software file contains only a portion of the application software, and a second application software file contains more than the portion of the application software; executing the application software and generating application images based thereon, where execution of the first application software file contains enough of the application software to execute a limited amount of the application software; and displaying the application images on a display screen based on the execution of the application software, wherein the acquisition unit begins downloading the second application software file in a background process after downloading the first application software file and at least partially during the execution of the first application software file. | Methods and apparatus provide for downloading application software from a server, including: downloading the application software from the server, where a first application software file contains only a portion of the application software, and a second application software file contains more than the portion of the application software; executing the application software and generating application images based thereon, where execution of the first application software file contains enough of the application software to execute a limited amount of the application software; and displaying the application images on a display screen based on the execution of the application software, where the acquisition unit begins downloading the second application software file in a background process after downloading the first application software file and at least partially during the execution of the first application software file.1. An information processing apparatus for downloading application software from a server, comprising:
an acquisition unit for downloading the application software from the server, where a first application software file contains only a portion of the application software, and a second application software file contains more than the portion of the application software; an execution unit for executing the application software and generating application images based thereon, where execution of the first application software file contains enough of the application software to execute a limited amount of the application software; and a display processing unit for displaying the application images on a display screen based on the execution of the application software, wherein the acquisition unit begins downloading the second application software file in a background process after downloading the first application software file and at least partially during the execution of the first application software file. 2. The information processing apparatus according to claim 1, wherein the display processing unit displays a progressive bar indicating a download status of the second application software file in a region of the display screen that does not interfere with the application images. 3. The information processing apparatus according to claim 1, wherein, even if the acquisition unit completes downloading of the second application software file, the execution unit continues to execute the application software based on the first application software file for at least some period of time without terminating the application software. 4. The information processing apparatus according to claim 1, further comprising an installation processing unit for installing the application software after downloading, wherein when the acquisition unit acquires the first application software, the installation processing unit installs the first application software file. 5. 
The information processing apparatus according to claim 4, wherein upon completion of downloading the first application software file, the acquisition unit instructs the installation processing unit to install the first application software file. 6. The information processing apparatus according to claim 4, wherein, when the acquisition unit completes downloading of the second application software file, the installation processing unit automatically installs the second application software file. 7. The information processing apparatus according to claim 1, wherein, when the acquisition unit completes downloading of the second application software file, the display processing unit notifies the user that a more complete download of the application software has been completed. 8. The information processing apparatus according to claim 1, wherein the second application software file is a complete amount of the application software. 9. A method for downloading application software from a server, comprising:
downloading the application software from the server, where a first application software file contains only a portion of the application software, and a second application software file contains more than the portion of the application software; executing the application software and generating application images based thereon, where execution of the first application software file contains enough of the application software to execute a limited amount of the application software; and displaying the application images on a display screen based on the execution of the application software, wherein the acquisition unit begins downloading the second application software file in a background process after downloading the first application software file and at least partially during the execution of the first application software file. 10. The method according to claim 9, further comprising displaying a progressive bar indicating a download status of the second application software file in a region of the display screen that does not interfere with the application images. 11. The method according to claim 9, wherein, even if the downloading completes downloading of the second application software file, the executing continues to execute the application software based on the first application software file for at least some period of time without terminating the application software. 12. The method according to claim 9, further comprising installing the application software after downloading, wherein when the first application software is downloaded, the first application software file is installed. 13. The method according to claim 12, wherein upon completion of downloading the first application software file, the installing automatically begins the installing of the first application software file. 14. 
The method according to claim 12, wherein, when the downloading of the second application software file is complete, installation of the second application software file is automatically started. 15. The method according to claim 9, wherein, when the downloading of the second application software file is complete, the method includes notifying the user that a more complete download of the application software has been completed. 16. The method according to claim 9, wherein the second application software file is a complete amount of the application software. 17. A non-transitory, computer readable storage medium containing a computer program, which when executed by a computer system, causes the computer system to carry out actions for downloading application software from a server, the actions comprising:
downloading the application software from the server, where a first application software file contains only a portion of the application software, and a second application software file contains more than the portion of the application software; executing the application software and generating application images based thereon, where execution of the first application software file contains enough of the application software to execute a limited amount of the application software; and displaying the application images on a display screen based on the execution of the application software, wherein the acquisition unit begins downloading the second application software file in a background process after downloading the first application software file and at least partially during the execution of the first application software file. | 2,100 |
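The two-stage download in the claims above — execute a minimal first file immediately while the fuller second file downloads in a background process — can be sketched as below. The file names, simulated transfer times, and the thread-based "acquisition unit" are illustrative assumptions, not the patented implementation.

```python
import threading
import time

class Acquirer:
    """Toy model of the claimed acquisition unit: fetch a small first
    file synchronously, then fetch the second file in the background
    while the limited first file is already executing."""
    def __init__(self, server):
        self.server = server   # name -> simulated size
        self.files = {}        # downloaded files
        self.last_run = None

    def _fetch(self, name):
        time.sleep(self.server[name] * 0.001)  # simulate transfer time
        self.files[name] = True

    def execute(self, name):
        assert self.files.get(name), "cannot execute before download"
        self.last_run = name

    def start(self):
        self._fetch("app_part1")   # blocking: minimal executable portion
        bg = threading.Thread(target=self._fetch, args=("app_part2",))
        bg.start()                 # background: remainder of the app
        self.execute("app_part1")  # limited execution during the download
        bg.join()                  # second file completes in background

server = {"app_part1": 5, "app_part2": 50}
acq = Acquirer(server)
acq.start()
print(sorted(acq.files))  # → ['app_part1', 'app_part2']
```

Claims 2 and 10 would hang a progress bar off the background fetch; here the background thread simply joins once the second file arrives.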
6,654 | 6,654 | 15,624,719 | 2,176 | Technologies are provided for an instant preview of a locally unsupported file. A client application such as a file synchronization application may detect an intent to preview a locally stored file. Upon detection and determination that there is no local preview support is available, the client application may send the file to a cloud based service such as a cloud storage service. The cloud based service may search to determine if a cloud based previewer associated with the file is registered with the service. If no such previewer is found, the service may seek one from an external resource. Upon finding a previewer associated with the file, the cloud based service may generate a preview for the file and transmit to the client application to be rendered. The cloud based service may remove the cloud copy of the file upon completion in some examples. | 1. A method to provide an instant preview of a locally stored file, the method comprising:
detecting an intent to preview the locally stored file; confirming a lack of a local previewer associated with the locally stored file; requesting a preview from a server associated with the locally stored file by providing the locally stored file to the server, wherein the preview includes a presentation of the locally stored file using a previewer registered with the server; receiving the preview associated with the locally stored file from the server; and rendering the preview associated with the locally stored file. 2. The method of claim 1, wherein detecting the intent to preview the locally stored file comprises:
detecting one of a hover action, a click action, and a tap action on a representation of the locally stored file through a user interface of a local application. 3. The method of claim 2, wherein the representation of the locally stored file includes one of a thumbnail, an icon, a shortcut, an image, and a graphic. 4. The method of claim 1, further comprising:
informing the server about whether a user associated with the locally stored file is a subscriber or non-subscriber. 5. The method of claim 4, wherein the locally stored file provided to the server is removed from a cloud storage upon generation of the preview at the server if the user is a non-subscriber. 6. The method of claim 4, further comprising:
receiving an incentive for the user from the server to keep the provided locally stored file in a cloud storage if the user is a non-subscriber. 7. The method of claim 1, wherein rendering the preview associated with the locally stored file comprises:
rendering the preview that includes one or more of an image, an interactive element, and editable text associated with content of the locally stored file. 8. The method of claim 7, wherein the one or more of the image, the interactive element, and the editable text associated with content of the locally stored file are included in the preview based on a determination by the server. 9. The method of claim 1, wherein confirming the lack of the local previewer associated with the locally stored file comprises:
querying an operating system of a computing device that stores the locally stored file for the local previewer. 10. A method to provide an instant preview of a locally unsupported file, the method comprising:
receiving a request for a preview for a locally stored file and a copy of the locally stored file from a client application upon detection of an intent to preview the locally stored file and confirmation of a lack of a local previewer associated with the locally stored file by the client application; querying a previewer registry associated with a server receiving the file to determine a previewer associated with the received file; if a registered previewer associated with the file is found, generating the preview using the registered previewer; and providing the preview to the client application. 11. The method of claim 10, further comprising:
removing the copy of the locally stored file from a temporary cloud storage if a user associated with the file is a non-subscriber. 12. The method of claim 11, further comprising:
generating the preview with a predefined level of functionality based on a subscription status of the user. 13. The method of claim 12, further comprising:
providing one of an incentive and an offer to the user associated with the level of functionality of the preview. 14. The method of claim 12, wherein the level of functionality includes one of a read only functionality, a limited interactivity functionality, and an editability functionality. 15. The method of claim 12, further comprising:
providing the preview with a first level of functionality for a predefined number of preview requests; and providing the preview with a second level of functionality after the predefined number of preview requests, wherein the first level of functionality provides a higher level of functionality compared to the second level of functionality. 16. A server configured to provide an instant preview of a locally stored file, the server comprising:
a communication device configured to facilitate communication between the server, a cloud storage, a previewer provider, and a client device; a memory configured to store instructions; and one or more processors coupled to the memory and the communication device, wherein the one or more processors, in conjunction with the instructions stored in the memory, execute a preview management module of a hosted service, the preview management module configured to: receive a request for a preview for a locally stored file and a copy of the locally stored file from a client application executed on the client device upon detection of an intent to preview the locally stored file and confirmation of a lack of a local previewer associated with the locally stored file by the client application; query a previewer registry associated with the server; if a registered previewer associated with the file is found, generate the preview using the registered previewer; provide the preview to the client application; and remove the copy of the locally stored file from a temporary cloud storage if a user associated with the file is a non-subscriber. 17. The server of claim 16, wherein the preview management module is further configured to:
keep the copy of the locally stored file at a cloud storage if the user associated with the file is a subscriber of the hosted service. 18. The server of claim 16, wherein the preview management module is further configured to:
if the registered previewer associated with the file is not found, query the previewer provider; and generate the preview using a previewer received from the previewer provider. 19. The server of claim 16, wherein the preview management module is further configured to:
collect statistical information associated with preview requests from a plurality of users, the statistical information including a file type, a subscription status, a number of requests per user, and a number of requests per file type; and provide the statistical information to the previewer provider. 20. The server of claim 16, wherein the preview management module is further configured to:
anonymize user information associated with the locally stored file prior to generating the preview. | Technologies are provided for an instant preview of a locally unsupported file. A client application such as a file synchronization application may detect an intent to preview a locally stored file. Upon detection and determination that there is no local preview support is available, the client application may send the file to a cloud based service such as a cloud storage service. The cloud based service may search to determine if a cloud based previewer associated with the file is registered with the service. If no such previewer is found, the service may seek one from an external resource. Upon finding a previewer associated with the file, the cloud based service may generate a preview for the file and transmit to the client application to be rendered. The cloud based service may remove the cloud copy of the file upon completion in some examples.1. A method to provide an instant preview of a locally stored file, the method comprising:
detecting an intent to preview the locally stored file; confirming a lack of a local previewer associated with the locally stored file; requesting a preview from a server associated with the locally stored file by providing the locally stored file to the server, wherein the preview includes a presentation of the locally stored file using a previewer registered with the server; receiving the preview associated with the locally stored file from the server; and rendering the preview associated with the locally stored file. 2. The method of claim 1, wherein detecting the intent to preview the locally stored file comprises:
detecting one of a hover action, a click action, and a tap action on a representation of the locally stored file through a user interface of a local application. 3. The method of claim 2, wherein the representation of the locally stored file includes one of a thumbnail, an icon, a shortcut, an image, and a graphic. 4. The method of claim 1, further comprising:
informing the server about whether a user associated with the locally stored file is a subscriber or non-subscriber. 5. The method of claim 4, wherein the locally stored file provided to the server is removed from a cloud storage upon generation of the preview at the server if the user is a non-subscriber. 6. The method of claim 4, further comprising:
receiving an incentive for the user from the server to keep the provided locally stored file in a cloud storage if the user is a non-subscriber. 7. The method of claim 1, wherein rendering the preview associated with the locally stored file comprises:
rendering the preview that includes one or more of an image, an interactive element, and editable text associated with content of the locally stored file. 8. The method of claim 7, wherein the one or more of the image, the interactive element, and the editable text associated with content of the locally stored file are included in the preview based on a determination by the server. 9. The method of claim 1, wherein confirming the lack of the local previewer associated with the locally stored file comprises:
querying an operating system of a computing device that stores the locally stored file for the local previewer. 10. A method to provide an instant preview of a locally unsupported file, the method comprising:
receiving a request for a preview for a locally stored file and a copy of the locally stored file from a client application upon detection of an intent to preview the locally stored file and confirmation of a lack of a local previewer associated with the locally stored file by the client application; querying a previewer registry associated with a server receiving the file to determine a previewer associated with the received file; if a registered previewer associated with the file is found, generating the preview using the registered previewer; and providing the preview to the client application. 11. The method of claim 10, further comprising:
removing the copy of the locally stored file from a temporary cloud storage if a user associated with the file is a non-subscriber. 12. The method of claim 11, further comprising:
generating the preview with a predefined level of functionality based on a subscription status of the user. 13. The method of claim 12, further comprising:
providing one of an incentive and an offer to the user associated with the level of functionality of the preview. 14. The method of claim 12, wherein the level of functionality includes one of a read only functionality, a limited interactivity functionality, and an editability functionality. 15. The method of claim 12, further comprising:
providing the preview with a first level of functionality for a predefined number of preview requests; and providing the preview with a second level of functionality after the predefined number of preview requests, wherein the first level of functionality provides a higher level of functionality compared to the second level of functionality. 16. A server configured to provide an instant preview of a locally stored file, the server comprising:
a communication device configured to facilitate communication between the server, a cloud storage, a previewer provider, and a client device; a memory configured to store instructions; and one or more processors coupled to the memory and the communication device, wherein the one or more processors, in conjunction with the instructions stored in the memory, execute a preview management module of a hosted service, the preview management module configured to: receive a request for a preview for a locally stored file and a copy of the locally stored file from a client application executed on the client device upon detection of an intent to preview the locally stored file and confirmation of a lack of a local previewer associated with the locally stored file by the client application; query a previewer registry associated with the server; if a registered previewer associated with the file is found, generate the preview using the registered previewer; provide the preview to the client application; and remove the copy of the locally stored file from a temporary cloud storage if a user associated with the file is a non-subscriber. 17. The server of claim 16, wherein the preview management module is further configured to:
keep the copy of the locally stored file at a cloud storage if the user associated with the file is a subscriber of the hosted service. 18. The server of claim 16, wherein the preview management module is further configured to:
if the registered previewer associated with the file is not found, query the previewer provider; and generate the preview using a previewer received from the previewer provider. 19. The server of claim 16, wherein the preview management module is further configured to:
collect statistical information associated with preview requests from a plurality of users, the statistical information including a file type, a subscription status, a number of requests per user, and a number of requests per file type; and provide the statistical information to the previewer provider. 20. The server of claim 16, wherein the preview management module is further configured to:
anonymize user information associated with the locally stored file prior to generating the preview. | 2,100 |
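The server-side preview workflow recited in claims 16-19 above (query a previewer registry, fall back to a previewer provider, discard the uploaded copy for non-subscribers) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class, the dict-based registry, and the stub provider are all hypothetical names introduced here.

```python
# Hypothetical sketch of the preview-management flow of claims 16-19.
# PreviewManager, StubProvider, and the dict-based registry/storage are
# illustrative stand-ins, not the patent's actual components.

class PreviewManager:
    def __init__(self, previewer_registry, previewer_provider, temp_cloud_storage):
        self.registry = previewer_registry   # file type -> previewer callable
        self.provider = previewer_provider   # fallback previewer source (claim 18)
        self.storage = temp_cloud_storage    # temporary copies of local files

    def handle_request(self, file_id, file_type, is_subscriber):
        """Generate a preview for an uploaded file copy (claim 16), querying
        the provider when no previewer is registered (claim 18), and removing
        the copy for non-subscribers while keeping it for subscribers
        (claims 16 and 17)."""
        previewer = self.registry.get(file_type)
        if previewer is None:
            previewer = self.provider.fetch(file_type)  # claim 18 fallback
        preview = previewer(self.storage[file_id])
        if not is_subscriber:
            del self.storage[file_id]        # claim 16: remove the copy
        return preview


class StubProvider:
    def fetch(self, file_type):
        return lambda data: f"preview:{data}"


manager = PreviewManager({"txt": lambda data: f"txt-preview:{data}"},
                         StubProvider(), {"f1": "hello"})
print(manager.handle_request("f1", "txt", is_subscriber=False))  # txt-preview:hello
print("f1" in manager.storage)  # False: copy removed for non-subscriber
```

Keeping the registry lookup and the provider fallback as two separate steps mirrors the claim structure: claim 16 only requires the registry query, and claim 18 layers the provider query on top when no registered previewer is found.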
6,655 | 6,655 | 16,558,804 | 2,148 | An inspection device control device for an inspection device of a wind turbine having a turbine interface arranged for communication with a wind turbine control of the wind turbine, and a device interface arranged for communication with the inspection device. Automated resource planning is possible if a processor produces control information for the inspection device depending on turbine parameters of the wind turbine received via the turbine interface and outputs the control information via the device interface. Further improved resource planning and control is made possible if a processor generates control information for the wind turbine and outputs the control information via the turbine interface, depending on the device parameters of the inspection device received via the device interface. | 1. An inspection control device for an inspection device of a wind turbine, comprising:
a turbine interface arranged for communication with a wind turbine control system of the wind turbine; and a device interface arranged for communication with the inspection device; wherein a processor generates control information for the inspection device and outputs the control information via the turbine interface in dependence on turbine parameters of the wind turbine received via the turbine interface; and wherein the processor outputs control information for a flying platform with a lighting source depending on the turbine parameters received via the turbine interface. 2. The inspection device control device according to claim 1, wherein the turbine interface is arranged to receive turbine parameters from the wind turbine or a wind farm and the processor generates the control information depending on at least the turbine parameters. 3. The inspection device control device according to claim 1, wherein the device interface is arranged to receive operating parameters of the inspection device, and the processor generates the control information depending on at least the operating parameters. 4. The inspection device control device according to claim 1, wherein the inspection device control device is arranged to receive environmental data, and the processor generates the control information depending on at least the environmental data. 5. The inspection device control device according to claim 4, wherein the device interface is arranged to receive operating parameters of the inspection device, and the processor generates the control information depending on at least the operating parameters. 6. The inspection device control device according to claim 5, wherein the processor is arranged to output turbine parameters for the wind turbine via the turbine interface, the turbine parameters being dependent in particular on the environmental data and/or the operating parameters. 7. 
The inspection device control device according to claim 1, wherein the inspection device is a flying drone, in particular a multicopter or a VTOL aircraft with wings, or a tracker fixed or pivotably attached to the wind turbine. 8. The inspection device control device according to claim 1, wherein the turbine parameters contain geoinformation of the wind turbine and/or the operating parameters contain geoinformation of the inspection device, and the processor generates the control information as a function of at least the geoinformation. 9. The inspection device control device according to claim 1, wherein the processor is arranged to receive real-time data via the turbine interface and/or the device interface. 10. The inspection device control device according to claim 1, wherein the control information contains, in particular, flight routes and/or flight times. 11. A method of operating an inspection apparatus control device for a wind turbine inspection apparatus, comprising:
exchanging wind turbine parameters with a wind turbine control system, and exchanging control information with the inspection device, wherein the control information for the inspection device is generated and output depending on the turbine parameters and/or environmental parameters, and/or wherein the control information for the wind turbine is generated and output depending on the inspection device parameters and/or environmental parameters, and wherein control information for a flying platform with a lighting source is output depending on the turbine parameters. | 2,100 |
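The control-information generation described in the claims above (combining turbine parameters, inspection-device operating parameters, and environmental data, claims 1-6) can be sketched as a single decision function. All field names, the wind-speed limit, and the battery threshold below are assumptions made for illustration only.

```python
# Illustrative sketch of control-information generation per claims 1-6.
# Parameter names and numeric thresholds are invented for this example.

def generate_control_info(turbine_params, device_params, environment):
    """Return control information for the inspection device (claim 1),
    holding the flight in high wind (environmental data, claim 4) or on
    low battery (operating parameters, claim 3), and optionally emitting
    a turbine parameter request in return (claim 6)."""
    if environment["wind_speed_ms"] > 12.0 or device_params["battery_pct"] < 30:
        # Instead of flying, output turbine parameters for the wind
        # turbine, e.g. request a rotor stop (claim 6).
        return {"action": "hold", "turbine_request": "stop_rotor"}
    hub = turbine_params["hub_position"]   # geoinformation, claim 8
    return {
        "action": "inspect",
        # A trivial two-waypoint route: hub, then blade-tip height (claim 10).
        "route": [hub, (hub[0], hub[1], hub[2] + turbine_params["blade_length_m"])],
        # Switch on the lighting source of the flying platform at night (claim 1).
        "lighting": environment["daylight"] is False,
    }


info = generate_control_info(
    {"hub_position": (10.0, 20.0, 90.0), "blade_length_m": 60.0},
    {"battery_pct": 80},
    {"wind_speed_ms": 5.0, "daylight": True},
)
print(info["action"])  # inspect
```

The two-way exchange in method claim 11 corresponds to the two return shapes: control information flows to the inspection device in the normal case, and turbine parameters flow back to the wind turbine control system in the hold case.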
6,656 | 6,656 | 16,332,130 | 2,192 | An apparatus and method are provided for generating and processing a trace stream indicative of instruction execution by processing circuitry. An apparatus has an input interface for receiving instruction execution information from the processing circuitry indicative of a sequence of instructions executed by the processing circuitry, and trace generation circuitry for generating from the instruction execution information a trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of instruction flow changing instructions within the sequence. The sequence may include a branch behaviour setting instruction that indicates an identified instruction within the sequence, where execution of the branch behaviour setting instruction enables a branch behaviour to be associated with the identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in the sequence. The trace generation circuitry is further arranged to generate, from the instruction execution information, a trace element indicative of execution behaviour of the branch behaviour setting instruction, and a trace element to indicate that the branch behaviour has been triggered on encountering the identified instruction within the sequence. This enables a very efficient form of trace stream to be used even in situations where the instruction sequence executed by the processing circuitry includes such branch behaviour setting instructions. | 1. An apparatus comprising:
an input interface to receive instruction execution information from processing circuitry indicative of a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; and trace generation circuitry to generate from the instruction execution information a trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of instruction flow changing instructions within said sequence; and the trace generation circuitry further being arranged to generate, from the instruction execution information, a trace element indicative of execution behaviour of said branch behaviour setting instruction, and a trace element to indicate that said branch behaviour has been triggered on encountering said identified instruction within said sequence. 2. An apparatus as claimed in claim 1, wherein:
when execution of the branch behaviour setting instruction causes the branch behaviour to be associated with the identified instruction, the processing circuitry is arranged to store branch control information for that identified instruction within a branch control storage; and the trace generation circuitry is arranged to generate a trace element indicating that said branch behaviour has been triggered when the instruction execution information indicates that a branch has occurred due to the identified instruction being encountered in the sequence at a time where branch control information for that identified instruction is stored within the branch control storage. 3. An apparatus as claimed in claim 2, wherein:
the branch behaviour setting instruction is a loop-end instruction at a finish of a program loop body, the identified instruction is an immediately preceding instruction within said program loop body, and said target address is an address of an instruction at a start of the program loop body; and the trace generation unit is arranged to issue a trace element indicating that said branch behaviour has been triggered each time the instruction execution information indicates that a branch has occurred due to said immediately preceding instruction being encountered at a time where branch control information for said immediately preceding instruction is stored within the branch control storage. 4. An apparatus as claimed in claim 3, wherein the branch behaviour is triggered when at least one further iteration of the program loop body is required when said immediately preceding instruction is encountered whilst branch control information for said immediately preceding instruction is stored within the branch control storage. 5. An apparatus as claimed in claim 3, wherein:
when execution of the loop-end instruction causes the branch behaviour to be associated with said immediately preceding instruction, the processing circuitry is arranged to branch to said target address, and the trace generation circuitry is arranged to issue a trace element indicating a taken branch as the execution behaviour of said loop-end instruction; when no further iterations of the loop body are required, execution of the loop-end instruction will cause the processing circuitry to exit the program loop body, and the trace generation circuitry is arranged to issue a trace element indicating a not taken branch as the execution behaviour of said loop-end instruction. 6. An apparatus as claimed in claim 3, wherein:
when an event causes the branch control information for said immediately preceding instruction to be invalidated within the branch control storage whilst further iterations of the program loop body are still required, the branch behaviour will not be triggered on a next encountering of said immediately preceding instruction; the processing circuitry is responsive to a next execution of the loop-end instruction to cause the branch behaviour to be re-associated with said immediately preceding instruction, and a branch to be taken to said target address, thereby resuming processing of the further iterations of the program loop body; and the trace generation circuitry is arranged to issue a further trace element indicating a taken branch as the execution behaviour of said loop-end instruction. 7. An apparatus as claimed in claim 1, wherein:
the branch behaviour setting instruction is a branch-future instruction, and the identified instruction is an instruction following said branch-future instruction within said sequence; when execution of the branch-future instruction causes the branch behaviour to be associated with said identified instruction, the trace generation circuitry is arranged to issue a trace element indicating, as the execution behaviour of said branch-future instruction, that the branch behaviour has been associated; when execution of the branch-future instruction does not cause the branch behaviour to be associated with said identified instruction, the trace generation circuitry is arranged to issue a trace element indicating, as the execution behaviour of said branch-future instruction, that the branch behaviour has not been associated. 8. An apparatus as claimed in claim 7, wherein:
the trace generation circuitry is arranged to issue, as the trace element indicating that the branch behaviour has been associated, a same type of trace element as used to indicate a taken branch; and the trace generation circuitry is arranged to issue, as the trace element indicating that the branch behaviour has not been associated, a same type of trace element as used to indicate a not taken branch. 9. An apparatus as claimed in claim 7, wherein:
when execution of the branch behaviour setting instruction causes the branch behaviour to be associated with the identified instruction, the processing circuitry is arranged to store branch control information for that identified instruction within a branch control storage; the trace generation circuitry is arranged to generate a trace element indicating that said branch behaviour has been triggered when the instruction execution information indicates that a branch has occurred due to the identified instruction being encountered in the sequence at a time where branch control information for that identified instruction is stored within the branch control storage; and when execution of the branch-future instruction causes the branch behaviour to be associated with the identified instruction, and the branch control information for that identified instruction stored by the processing circuitry within the branch control storage overwrites active branch control information associated with the identified instruction of a previously executed branch-future instruction, the trace generation circuitry is arranged to issue a non-event trace element. 10. An apparatus as claimed in claim 7, wherein when tracing is enabled at a point in instruction execution between execution of the branch-future instruction and encountering of the identified instruction in said sequence, the trace generation circuitry is responsive to a branch being taken on encountering the identified instruction, to issue a trace element to identify both the identified instruction and the branch that has been taken on encountering that identified instruction. 11. An apparatus as claimed in claim 10, wherein:
when execution of the branch behaviour setting instruction causes the branch behaviour to be associated with the identified instruction, the processing circuitry is arranged to store branch control information for that identified instruction within a branch control storage; the trace generation circuitry is arranged to generate a trace element indicating that said branch behaviour has been triggered when the instruction execution information indicates that a branch has occurred due to the identified instruction being encountered in the sequence at a time where branch control information for that identified instruction is stored within the branch control storage; the trace generation circuitry is arranged to maintain a counter value in association with each entry in the branch control storage associated with an identified instruction of a branch-future instruction; and the trace generation circuitry is arranged to issue said trace element to identify both the identified instruction and the branch that has been taken on encountering that identified instruction, when the counter value for the relevant entry in the branch control storage has an unexpected value when the branch is taken on encountering the identified instruction. 12. An apparatus as claimed in claim 2, wherein:
when an event causes the branch control information within the branch control storage to be invalidated, the trace generation circuitry is arranged to issue an invalidation trace element. 13. An apparatus as claimed in claim 1, wherein the processing circuitry is arranged, when said branch behaviour is triggered on encountering said identified instruction within said sequence, to also execute said identified instruction. 14. An apparatus as claimed in claim 1, wherein the processing circuitry is arranged, when said branch behaviour is triggered on encountering said identified instruction within said sequence, to inhibit execution of said identified instruction. 15. An apparatus as claimed in claim 2, wherein said branch control information comprises at least branch point data providing an indication of said identified instruction and further data providing an indication of said target address. 16. An apparatus as claimed in claim 15, wherein said branch point data comprises one or more of:
address data indicative of an address of said identified instruction; end data indicative of an address of a last instruction that immediately precedes said identified instruction; offset data indicative of a distance between said branch behavior setting instruction and said identified instruction; a proper subset of bits indicative of a memory storage address of said identified instruction starting from a least significant bit end of bits of said memory storage address that distinguish between starting storage addresses of instructions; remaining size instruction data indicative of a number of instructions remaining to be processed before said identified instruction; and remaining size data indicative of a number of program storage locations remaining to be processed before said identified instruction is reached. 17. An apparatus, comprising:
an input interface to receive a trace stream comprising a plurality of trace elements indicative of execution by processing circuitry of predetermined instructions within a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; decompression circuitry, responsive to each trace element, to traverse a program image from a current instruction address until a next one of the predetermined instructions is detected within said program image, and to produce from the program image information indicative of the instructions between said current instruction address and said next one of the predetermined instructions; and a branch control storage associated with said decompression circuitry; the decompression circuitry being responsive to detecting at least one type of the branch behaviour setting instruction when traversing said program image in response to a current trace element of a predetermined type, to store within the branch control storage branch control information derived from the branch behaviour setting instruction; the decompression circuitry being arranged, when detecting with reference to the branch control information that the identified instruction has been reached during traversal of the program image, to treat that identified instruction as the next one of said predetermined instructions. 18. 
An apparatus as claimed in claim 17, wherein the decompression circuitry is arranged to store as the branch control information branch point data identified by the branch behaviour setting instruction and used to determine said identified instruction. 19. An apparatus as claimed in claim 18, wherein the decompression circuitry is further arranged to store as the branch control information the target address when that target address is directly derivable from an immediate value specified within said branch behaviour setting instruction. 20. An apparatus as claimed in claim 17, wherein:
the decompression circuitry is responsive to a non-event trace element in said trace stream to invalidate an entry in its associated branch control storage. 21. An apparatus as claimed in claim 17, wherein:
the decompression circuitry is responsive to an invalidation trace element in said trace stream to invalidate the contents of its associated branch control storage. 22. A method of generating a trace stream indicative of instruction execution by processing circuitry, comprising:
receiving instruction execution information from the processing circuitry indicative of a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; generating from the instruction execution information the trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of instruction flow changing instructions within said sequence; and generating, from the instruction execution information, a trace element indicative of execution behaviour of said branch behaviour setting instruction, and a trace element to indicate that said branch behaviour has been triggered on encountering said identified instruction within said sequence. 23. An apparatus comprising:
input interface means for receiving instruction execution information from processing circuitry indicative of a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; and trace generation means for generating from the instruction execution information a trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of instruction flow changing instructions within said sequence; and the trace generation means further for generating, from the instruction execution information, a trace element indicative of execution behaviour of said branch behaviour setting instruction, and a trace element to indicate that said branch behaviour has been triggered on encountering said identified instruction within said sequence. 24. A method of processing a trace stream generated to indicate instruction execution by processing circuitry, comprising:
receiving the trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of predetermined instructions within a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; traversing, responsive to each trace element, a program image from a current instruction address until a next one of the predetermined instructions is detected within said program image, and producing from the program image information indicative of the instructions between said current instruction address and said next one of the predetermined instructions; responsive to detecting at least one type of the branch behaviour setting instruction when traversing said program image in response to a current trace element of a predetermined type, storing within a branch control storage branch control information derived from the branch behaviour setting instruction; and when detecting with reference to the branch control information that the identified instruction has been reached during traversal of the program image, treating that identified instruction as the next one of said predetermined instructions. 25. An apparatus, comprising:
an input interface means for receiving a trace stream comprising a plurality of trace elements indicative of execution by processing circuitry of predetermined instructions within a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; decompression means for traversing, responsive to each trace element, a program image from a current instruction address until a next one of the predetermined instructions is detected within said program image, and for producing from the program image information indicative of the instructions between said current instruction address and said next one of the predetermined instructions; and a branch control storage means for association with said decompression means; the decompression means, responsive to detecting at least one type of the branch behaviour setting instruction when traversing said program image in response to a current trace element of a predetermined type, for storing within the branch control storage means branch control information derived from the branch behaviour setting instruction; the decompression means, when detecting with reference to the branch control information that the identified instruction has been reached during traversal of the program image, for treating that identified instruction as the next one of said predetermined instructions.
when an event causes the branch control information for said immediately preceding instruction to be invalidated within the branch control storage whilst further iterations of the program loop body are still required, the branch behaviour will not be triggered on a next encountering of said immediately preceding instruction; the processing circuitry is responsive to a next execution of the loop-end instruction to cause the branch behaviour to be re-associated with said immediately preceding instruction, and a branch to be taken to said target address, thereby resuming processing of the further iterations of the program loop body; and the trace generation circuitry is arranged to issue a further trace element indicating a taken branch as the execution behaviour of said loop-end instruction. 7. An apparatus as claimed in claim 1, wherein:
the branch behaviour setting instruction is a branch-future instruction, and the identified instruction is an instruction following said branch-future instruction within said sequence; when execution of the branch-future instruction causes the branch behaviour to be associated with said identified instruction, the trace generation circuitry is arranged to issue a trace element indicating, as the execution behaviour of said branch-future instruction, that the branch behaviour has been associated; when execution of the branch-future instruction does not cause the branch behaviour to be associated with said identified instruction, the trace generation circuitry is arranged to issue a trace element indicating, as the execution behaviour of said branch-future instruction, that the branch behaviour has not been associated. 8. An apparatus as claimed in claim 7, wherein:
the trace generation circuitry is arranged to issue, as the trace element indicating that the branch behaviour has been associated, a same type of trace element as used to indicate a taken branch; and the trace generation circuitry is arranged to issue, as the trace element indicating that the branch behaviour has not been associated, a same type of trace element as used to indicate a not taken branch. 9. An apparatus as claimed in claim 7, wherein:
when execution of the branch behaviour setting instruction causes the branch behaviour to be associated with the identified instruction, the processing circuitry is arranged to store branch control information for that identified instruction within a branch control storage; the trace generation circuitry is arranged to generate a trace element indicating that said branch behaviour has been triggered when the instruction execution information indicates that a branch has occurred due to the identified instruction being encountered in the sequence at a time where branch control information for that identified instruction is stored within the branch control storage; and when execution of the branch-future instruction causes the branch behaviour to be associated with the identified instruction, and the branch control information for that identified instruction stored by the processing circuitry within the branch control storage overwrites active branch control information associated with the identified instruction of a previously executed branch-future instruction, the trace generation circuitry is arranged to issue a non-event trace element. 10. An apparatus as claimed in claim 7, wherein when tracing is enabled at a point in instruction execution between execution of the branch-future instruction and encountering of the identified instruction in said sequence, the trace generation circuitry is responsive to a branch being taken on encountering the identified instruction, to issue a trace element to identify both the identified instruction and the branch that has been taken on encountering that identified instruction. 11. An apparatus as claimed in claim 10, wherein:
when execution of the branch behaviour setting instruction causes the branch behaviour to be associated with the identified instruction, the processing circuitry is arranged to store branch control information for that identified instruction within a branch control storage; the trace generation circuitry is arranged to generate a trace element indicating that said branch behaviour has been triggered when the instruction execution information indicates that a branch has occurred due to the identified instruction being encountered in the sequence at a time where branch control information for that identified instruction is stored within the branch control storage; the trace generation circuitry is arranged to maintain a counter value in association with each entry in the branch control storage associated with an identified instruction of a branch-future instruction; and the trace generation circuitry is arranged to issue said trace element to identify both the identified instruction and the branch that has been taken on encountering that identified instruction, when the counter value for the relevant entry in the branch control storage has an unexpected value when the branch is taken on encountering the identified instruction. 12. An apparatus as claimed in claim 2, wherein:
when an event causes the branch control information within the branch control storage to be invalidated, the trace generation circuitry is arranged to issue an invalidation trace element. 13. An apparatus as claimed in claim 1, wherein the processing circuitry is arranged, when said branch behaviour is triggered on encountering said identified instruction within said sequence, to also execute said identified instruction. 14. An apparatus as claimed in claim 1, wherein the processing circuitry is arranged, when said branch behaviour is triggered on encountering said identified instruction within said sequence, to inhibit execution of said identified instruction. 15. An apparatus as claimed in claim 2, wherein said branch control information comprises at least branch point data providing an indication of said identified instruction and further data providing an indication of said target address. 16. An apparatus as claimed in claim 15, wherein said branch point data comprises one or more of:
address data indicative of an address of said identified instruction; end data indicative of an address of a last instruction that immediately precedes said identified instruction; offset data indicative of a distance between said branch behavior setting instruction and said identified instruction; a proper subset of bits indicative of a memory storage address of said identified instruction starting from a least significant bit end of bits of said memory storage address that distinguish between starting storage addresses of instructions; remaining size instruction data indicative of a number of instructions remaining to be processed before said identified instruction; and remaining size data indicative of a number of program storage locations remaining to be processed before said identified instruction is reached. 17. An apparatus, comprising:
an input interface to receive a trace stream comprising a plurality of trace elements indicative of execution by processing circuitry of predetermined instructions within a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; decompression circuitry, responsive to each trace element, to traverse a program image from a current instruction address until a next one of the predetermined instructions is detected within said program image, and to produce from the program image information indicative of the instructions between said current instruction address and said next one of the predetermined instructions; and a branch control storage associated with said decompression circuitry; the decompression circuitry being responsive to detecting at least one type of the branch behaviour setting instruction when traversing said program image in response to a current trace element of a predetermined type, to store within the branch control storage branch control information derived from the branch behaviour setting instruction; the decompression circuitry being arranged, when detecting with reference to the branch control information that the identified instruction has been reached during traversal of the program image, to treat that identified instruction as the next one of said predetermined instructions. 18. 
An apparatus as claimed in claim 17, wherein the decompression circuitry is arranged to store as the branch control information branch point data identified by the branch behaviour setting instruction and used to determine said identified instruction. 19. An apparatus as claimed in claim 18, wherein the decompression circuitry is further arranged to store as the branch control information the target address when that target address is directly derivable from an immediate value specified within said branch behaviour setting instruction. 20. An apparatus as claimed in claim 17, wherein:
the decompression circuitry is responsive to a non-event trace element in said trace stream to invalidate an entry in its associated branch control storage. 21. An apparatus as claimed in claim 17, wherein:
the decompression circuitry is responsive to an invalidation trace element in said trace stream to invalidate the contents of its associated branch control storage. 22. A method of generating a trace stream indicative of instruction execution by processing circuitry, comprising:
receiving instruction execution information from the processing circuitry indicative of a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; generating from the instruction execution information the trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of instruction flow changing instructions within said sequence; and generating, from the instruction execution information, a trace element indicative of execution behaviour of said branch behaviour setting instruction, and a trace element to indicate that said branch behaviour has been triggered on encountering said identified instruction within said sequence. 23. An apparatus comprising:
input interface means for receiving instruction execution information from processing circuitry indicative of a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; and trace generation means for generating from the instruction execution information a trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of instruction flow changing instructions within said sequence; and the trace generation means further for generating, from the instruction execution information, a trace element indicative of execution behaviour of said branch behaviour setting instruction, and a trace element to indicate that said branch behaviour has been triggered on encountering said identified instruction within said sequence. 24. A method of processing a trace stream generated to indicate instruction execution by processing circuitry, comprising:
receiving the trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of predetermined instructions within a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; traversing, responsive to each trace element, a program image from a current instruction address until a next one of the predetermined instructions is detected within said program image, and producing from the program image information indicative of the instructions between said current instruction address and said next one of the predetermined instructions; responsive to detecting at least one type of the branch behaviour setting instruction when traversing said program image in response to a current trace element of a predetermined type, storing within a branch control storage branch control information derived from the branch behaviour setting instruction; and when detecting with reference to the branch control information that the identified instruction has been reached during traversal of the program image, treating that identified instruction as the next one of said predetermined instructions. 25. An apparatus, comprising:
an input interface means for receiving a trace stream comprising a plurality of trace elements indicative of execution by processing circuitry of predetermined instructions within a sequence of instructions executed by the processing circuitry, said sequence including a branch behaviour setting instruction that indicates an identified instruction within said sequence, execution of the branch behaviour setting instruction enabling a branch behaviour to be associated with said identified instruction that causes the processing circuitry to branch to a target address identified by the branch behaviour setting instruction when the identified instruction is encountered in said sequence; decompression means for traversing, responsive to each trace element, a program image from a current instruction address until a next one of the predetermined instructions is detected within said program image, and for producing from the program image information indicative of the instructions between said current instruction address and said next one of the predetermined instructions; and a branch control storage means for association with said decompression means; the decompression means, responsive to detecting at least one type of the branch behaviour setting instruction when traversing said program image in response to a current trace element of a predetermined type, for storing within the branch control storage means branch control information derived from the branch behaviour setting instruction; the decompression means, when detecting with reference to the branch control information that the identified instruction has been reached during traversal of the program image, for treating that identified instruction as the next one of said predetermined instructions | 2,100 |
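The trace-generation behaviour claimed above (claims 1-6: emit a trace element for the loop-end instruction's own taken/not-taken outcome, record branch control information for the identified preceding instruction, and emit a further element whenever that stored information triggers a branch) can be modelled in software. The sketch below is only an illustrative Python model under assumed simplifications (addresses as plain integers, the identified instruction fixed at `loop_end_addr - 1`); the patent describes hardware circuitry, and every name here (`TraceModel`, `loop_end`, `encounter`) is invented for the sketch.

```python
# Illustrative model of the claimed trace generation circuitry; all names
# and the integer-address scheme are assumptions of this sketch, not the
# patent's own interfaces.

class TraceModel:
    def __init__(self):
        self.branch_control = {}   # identified-instruction addr -> target addr
        self.trace = []            # emitted trace elements

    def loop_end(self, loop_end_addr, loop_start_addr, iterations_remain):
        """Execute a loop-end instruction (the branch behaviour setting
        instruction); the identified instruction is the immediately
        preceding instruction within the loop body (claim 3)."""
        if iterations_remain:
            # Associate the branch behaviour and branch back to the start
            # of the loop body; trace a taken branch (claim 5).
            self.branch_control[loop_end_addr - 1] = loop_start_addr
            self.trace.append(("E", "taken"))
            return loop_start_addr
        # No further iterations: exit the loop body, trace not-taken.
        self.trace.append(("N", "not-taken"))
        return loop_end_addr + 1

    def encounter(self, addr, iterations_remain):
        """Encounter an instruction; if branch control information for it
        is stored, the branch behaviour is triggered and traced (claim 2)."""
        if addr in self.branch_control and iterations_remain:
            self.trace.append(("E", "triggered"))
            return self.branch_control[addr]   # branch to the target address
        return addr + 1
```

In this model a two-iteration loop produces one taken-branch element from the loop-end instruction, one branch-triggered element from the identified instruction, and one not-taken element when the loop exits, matching the efficient trace stream the abstract describes.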
6,657 | 6,657 | 15,785,954 | 2,174 | A smart device is provided with a main remote control application that may be configured using information. The main remote control application may present images of original remote controls corresponding to devices which are controllable by the configured main remote control application. In connection with a presented image of an original remote control, the display may present icons that are representative of a subset of the buttons of the original remote control. The user interface also allows a user to select amongst the images of the original remote controls to change which appliances are to be controlled via the user interface. A pop-up remote control widget may also be provided which may be invoked without switching to the main remote control application provisioned on the smart device. | 1. A non-transitory, computer-readable media having stored thereon instructions which, when executed by a processing device of a smart device, cause the smart device to perform steps comprising:
invoking on the smart device in response to a first predetermined input being provided to the smart device a remote control application of the smart device wherein the invoked remote control application includes a main remote control user interface that is displayed in a display of the smart device, wherein the main remote control user interface displayed in the display of the smart device comprises a first plurality of user interface elements and wherein user interactions with the first plurality of user interface elements will cause the smart device to transmit commands for controlling functional operations of an intended target device; causing a secondary remote control user interface to be provided to the display of the smart device in response to a second predetermined input being provided to the smart device, wherein the secondary remote control user interface displayed in the display of the smart device comprises a second plurality of user interface elements and wherein user interactions with the second plurality of user interface elements will cause the smart device to transmit commands for controlling functional operations of the intended target device; and wherein the instructions cause the smart device to be responsive to the second predetermined user input to provide the secondary remote control user interface to the display of the smart device independent of the remote control application being invoked on the smart device. 2. The non-transitory, computer readable media as recited in claim 1, wherein the second predetermined input comprises a predetermined motion gesture made upon a surface of the display of the smart device. 3. The non-transitory, computer readable media as recited in claim 1, wherein the second predetermined input comprises a predetermined movement of the smart device. 4. 
The non-transitory, computer readable media as recited in claim 1, wherein the secondary remote control user interface is caused to be superimposed over content currently being displayed in the display of the smart device. 5. The non-transitory, computer readable media as recited in claim 1, wherein the secondary remote control user interface is caused to be temporarily displayed in the display of the smart device. 6. The non-transitory, computer readable media as recited in claim 1, wherein the second plurality of user interface elements comprises a subset of and less than all of the first plurality of user interface elements. 7. The non-transitory, computer readable media as recited in claim 1, wherein the instructions cause an infrared transmission system of the smart device to transmit commands for controlling functional operations of the intended target device. 8. The non-transitory, computer readable media as recited in claim 1, wherein the instructions cause a radio frequency transmission system of the smart device to transmit commands for controlling functional operations of the intended target device. 9. The non-transitory, computer readable media as recited in claim 1, wherein the smart device comprises a tablet computing device. 10. The non-transitory, computer readable media as recited in claim 1, wherein the smart device comprises a smart phone. 11. The non-transitory, computer readable media as recited in claim 1, wherein an entirety of the display of the smart device is used by the main remote control user interface and wherein less than the entirety of the display of the smart device is used by the secondary remote control user interface. 12. The computer-readable media as recited in claim 6, wherein the second plurality of user interface elements comprises one or more frequently selected ones of the first plurality of user interface elements. 13. 
The computer-readable media as recited in claim 17, wherein the instructions monitor selections of the first plurality of user interface elements to automatically determine the second plurality of user interface elements. 14. The computer-readable media as recited in claim 1, wherein the second plurality of user interface elements provide for control of a predetermined, controllable activity. 15. The computer-readable media as recited in claim 19, wherein the second plurality of user interface elements provide for control of volume operational functions and media transport functions. | A smart device is provided with a main remote control application that may be configured using information. The main remote control application may present images of original remote controls corresponding to devices which are controllable by the configured main remote control application. In connection with a presented image of an original remote control, the display may present icons that are representative of a subset of the buttons of the original remote control. The user interface also allows a user to select amongst the images of the original remote controls to change which appliances are to be controlled via the user interface. A pop-up remote control widget may also be provided which may be invoked without switching to the main remote control application provisioned on the smart device.1. A non-transitory, computer-readable media having stored thereon instructions which, when executed by a processing device of a smart device, cause the smart device to perform steps comprising:
invoking on the smart device in response to a first predetermined input being provided to the smart device a remote control application of the smart device wherein the invoked remote control application includes a main remote control user interface that is displayed in a display of the smart device, wherein the main remote control user interface displayed in the display of the smart device comprises a first plurality of user interface elements and wherein user interactions with the first plurality of user interface elements will cause the smart device to transmit commands for controlling functional operations of an intended target device; causing a secondary remote control user interface to be provided to the display of the smart device in response to a second predetermined input being provided to the smart device, wherein the secondary remote control user interface displayed in the display of the smart device comprises a second plurality of user interface elements and wherein user interactions with the second plurality of user interface elements will cause the smart device to transmit commands for controlling functional operations of the intended target device; and wherein the instructions cause the smart device to be responsive to the second predetermined user input to provide the secondary remote control user interface to the display of the smart device independent of the remote control application being invoked on the smart device. 2. The non-transitory, computer readable media as recited in claim 1, wherein the second predetermined input comprises a predetermined motion gesture made upon a surface of the display of the smart device. 3. The non-transitory, computer readable media as recited in claim 1, wherein the second predetermined input comprises a predetermined movement of the smart device. 4. 
The non-transitory, computer readable media as recited in claim 1, wherein the secondary remote control user interface is caused to be superimposed over content currently being displayed in the display of the smart device. 5. The non-transitory, computer readable media as recited in claim 1, wherein the secondary remote control user interface is caused to be temporarily displayed in the display of the smart device. 6. The non-transitory, computer readable media as recited in claim 1, wherein the second plurality of user interface elements comprises a subset of and less than all of the first plurality of user interface elements. 7. The non-transitory, computer readable media as recited in claim 1, wherein the instructions cause an infrared transmission system of the smart device to transmit commands for controlling functional operations of the intended target device. 8. The non-transitory, computer readable media as recited in claim 1, wherein the instructions cause a radio frequency transmission system of the smart device to transmit commands for controlling functional operations of the intended target device. 9. The non-transitory, computer readable media as recited in claim 1, wherein the smart device comprises a tablet computing device. 10. The non-transitory, computer readable media as recited in claim 1, wherein the smart device comprises a smart phone. 11. The non-transitory, computer readable media as recited in claim 1, wherein an entirety of the display of the smart device is used by the main remote control user interface and wherein less than the entirety of the display of the smart device is used by the secondary remote control user interface. 12. The computer-readable media as recited in claim 6, wherein the second plurality of user interface elements comprises one or more frequently selected ones of the first plurality of user interface elements. 13. 
The computer-readable media as recited in claim 17, wherein the instructions monitor selections of the first plurality of user interface elements to automatically determine the second plurality of user interface elements. 14. The computer-readable media as recited in claim 1, wherein the second plurality of user interface elements provide for control of a predetermined, controllable activity. 15. The computer-readable media as recited in claim 19, wherein the second plurality of user interface elements provide for control of volume operational functions and media transport functions. | 2,100 |
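The two-tier user interface claimed above (claims 1, 6, 11 and 15: a main remote control interface with a full set of elements, and a secondary pop-up widget presenting a subset of them, invocable by a gesture independent of the main application) can be sketched as a simple input dispatcher. This is a hypothetical Python illustration; the element names, the `SmartDevice` class, and the event strings are all assumptions of the sketch, not the patent's own API.

```python
# Hypothetical model of the claimed main/secondary remote control UI.

MAIN_ELEMENTS = ["power", "vol+", "vol-", "mute", "play", "pause",
                 "rewind", "ff", "guide", "menu", "digits"]
# Per claims 6 and 15: the widget is a proper subset, here limited to
# volume and media-transport functions.
WIDGET_ELEMENTS = ["vol+", "vol-", "mute", "play", "pause", "rewind", "ff"]

class SmartDevice:
    def __init__(self):
        self.main_app_running = False
        self.visible_elements = []

    def handle_input(self, event):
        if event == "launch_app":        # first predetermined input
            self.main_app_running = True
            self.visible_elements = MAIN_ELEMENTS      # uses full display
        elif event == "gesture":         # second predetermined input
            # Pop-up widget provided independent of the main application
            # being invoked (claim 1); occupies less than the full display
            # and is superimposed over current content (claims 4, 11).
            self.visible_elements = WIDGET_ELEMENTS
        return self.visible_elements

    def press(self, element):
        """Transmit a command for the element toward the intended target
        device (via IR or RF per claims 7-8; modelled as a tuple here)."""
        assert element in self.visible_elements
        return ("transmit", element)
```

The key behaviour is that a `"gesture"` event yields a working subset interface while `main_app_running` stays `False`, mirroring the claim that the secondary interface does not require the remote control application to be invoked.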
6,658 | 6,658 | 15,806,293 | 2,166 | A system includes reception of a database query, determination of result set output columns associated with the database query, and determination, for each of the determined result set output columns, of one or more data sources associated with the result set output column. Sensitivity information is determined for each of the one or more data sources based on metadata, and result set sensitivity information is determined based on the determined sensitivity information. A result set is determined based on the database query, and the result set and the result set sensitivity information are transmitted. | 1. A system comprising:
a data store storing one or more data sources and metadata associated with the one or more data sources; a memory device storing processor-executable process steps; and a processing unit to execute the processor-executable process steps to cause the system to:
determine output columns associated with a database query;
determine, for each of the determined output columns, one or more data sources associated with the determined output column;
determine, based on the metadata, sensitivity information for each of the determined one or more data sources;
determine result set sensitivity information based on the determined sensitivity information of the one or more data sources;
acquire a result set based on the database query; and
transmit the result set and the result set sensitivity information. 2. A system according to claim 1, the processing unit to further execute the processor-executable process steps to cause the system to:
receive the database query from a remote system, wherein transmission of the result set and the result set sensitivity information comprises transmission of the result set and the result set sensitivity information to the remote system. 3. A system according to claim 2, the processing unit to further execute the processor-executable process steps to cause the system to:
acquire result set metadata; and add the result set sensitivity information to the result set metadata, wherein transmission of the result set sensitivity information comprises transmission of the result set metadata to the remote system. 4. A system according to claim 1, wherein the metadata identifies an information type and a sensitivity level associated with one or more of the data sources. 5. A system according to claim 1, the processing unit to further execute the processor-executable process steps to cause the system to:
update an audit log based on the result set and the result set sensitivity information. 6. A system according to claim 5, the processing unit to further execute the processor-executable process steps to cause the system to:
restrict export of the result set based on the result set sensitivity information. 7. A system according to claim 1, wherein the one or more data sources comprise one or more table columns of one or more relational database tables. 8. A system according to claim 7, wherein determination of the one or more data sources comprises:
determination of a parse tree based on the database query; and determination of one or more table columns based on the parse tree. 9. A computer-implemented method comprising:
determining result set output columns associated with a database query; determining, for each of the determined result set output columns, one or more data sources associated with the result set output column; determining, based on metadata associated with one or more of the data sources, sensitivity information for each of the determined one or more data sources; determining result set sensitivity information based on the determined sensitivity information of the one or more data sources; determining a result set based on the database query; and transmitting the result set and the result set sensitivity information. 10. A computer-implemented method according to claim 9, further comprising:
receiving the database query from a remote system, wherein transmitting the result set sensitivity information comprises transmitting the result set and the result set sensitivity information to the remote system. 11. A computer-implemented method according to claim 10, further comprising:
determining result set metadata describing the result set; and adding the result set sensitivity information to the result set metadata, wherein transmission of the result set sensitivity information comprises transmission of the result set metadata to the remote system. 12. A computer-implemented method according to claim 9, wherein the metadata identifies an information type and a sensitivity level associated with one or more of the data sources. 13. A computer-implemented method according to claim 9, further comprising:
updating an audit log based on the result set and the result set sensitivity information. 14. A computer-implemented method according to claim 9, wherein the one or more data sources comprise one or more table columns of one or more relational database tables. 15. A computer-implemented method according to claim 14, wherein determining the one or more data sources comprises:
determining a parse tree based on the database query; and determining one or more table columns based on the parse tree. 16. A computer-readable medium storing processor-executable code, the code executable by a processing unit to cause a computing system to:
receive a database query from a requesting computing system; determine result set output columns associated with the database query; determine, for each of the determined result set output columns, one or more data sources storing values on which the result set output column is based; determine, based on metadata associated with the one or more data sources, sensitivity information for each of the determined one or more data sources; determine result set sensitivity information based on the determined sensitivity information of the one or more data sources; determine a result set based on the database query; and transmit the result set and the result set sensitivity information to the requesting computer system. 17. A medium according to claim 16, the code further executable by a processing unit to cause a computing system to:
determine result set metadata describing the result set; and add the result set sensitivity information to the result set metadata, wherein transmission of the result set sensitivity information comprises transmission of the result set metadata to the requesting computer system. 18. A medium according to claim 16, wherein the metadata identifies an information type and a sensitivity level associated with one or more of the one or more data sources. 19. A medium according to claim 16, wherein determination of the one or more table columns comprises:
determination of a parse tree based on the database query; and determination of the one or more data sources based on the parse tree. 20. A medium according to claim 16, wherein the one or more data sources comprise one or more table columns of one or more relational database tables. | A system includes reception of a database query, determination of result set output columns associated with the database query, and determination, for each of the determined result set output columns, of one or more data sources associated with the result set output column. Sensitivity information is determined for each of the one or more data sources based on metadata, and result set sensitivity information is determined based on the determined sensitivity information. A result set is determined based on the database query, and the result set and the result set sensitivity information are transmitted. | 2,100 |
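The data flow recited in the claims above (output columns → underlying data sources → per-source sensitivity metadata → result set sensitivity) can be sketched in a few lines. This is an illustrative toy, not the patented implementation: the catalog contents and every name (`SENSITIVITY_CATALOG`, `column_sensitivity`, `result_set_sensitivity`) are hypothetical, and the parse-tree step is assumed to have already mapped each output column to its `(table, column)` sources.

```python
# Hypothetical metadata catalog: (table, column) -> (information type, sensitivity level).
SENSITIVITY_CATALOG = {
    ("employees", "salary"): ("financial", 3),
    ("employees", "name"):   ("personal", 2),
    ("employees", "dept"):   ("organizational", 1),
}

def column_sensitivity(sources):
    """Sensitivity of one output column = max over its data sources."""
    return max(SENSITIVITY_CATALOG.get(src, ("unknown", 0))[1] for src in sources)

def result_set_sensitivity(output_columns):
    """output_columns maps each result set output column to the (table, column)
    data sources it is derived from (e.g. as determined from a parse tree)."""
    per_column = {name: column_sensitivity(srcs)
                  for name, srcs in output_columns.items()}
    return per_column, max(per_column.values(), default=0)

per_col, overall = result_set_sensitivity({
    "name":       [("employees", "name")],
    "total_comp": [("employees", "salary")],  # derived column inherits source sensitivity
})
print(per_col, overall)  # {'name': 2, 'total_comp': 3} 3
```

The overall value could then be attached to the result set metadata before transmission, and used to update an audit log or restrict export, as in claims 5 and 6.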
6,659 | 6,659 | 14,336,583 | 2,199 | In a computer-implemented method for viewing a snapshot of a virtual machine, during operation of a virtual machine in a first console, at least one snapshot of the virtual machine is presented for selection, wherein the snapshot includes a previous state of the virtual machine. Responsive to a selection of the snapshot, a second virtual machine of the selected snapshot is deployed in a second console, wherein the second virtual machine is deployed without closing the virtual machine in the first console. | 1. A computer-implemented method for viewing a snapshot of a virtual machine, the method comprising:
during operation of a virtual machine in a first console, presenting at least one snapshot of the virtual machine for selection, wherein the at least one snapshot comprises a previous state of the virtual machine; and responsive to a selection of the snapshot, deploying a second virtual machine of the selected snapshot in a second console, wherein the second virtual machine is deployed without closing the virtual machine in the first console. 2. The method of claim 1, wherein the second virtual machine is a clone of the virtual machine. 3. The method of claim 1, wherein the second virtual machine is a linked clone of the virtual machine. 4. The method of claim 1, further comprising:
deploying the virtual machine in the first console. 5. The method of claim 1, further comprising:
creating at least one snapshot of the virtual machine. 6. The method of claim 1, wherein the presenting the at least one snapshot of the virtual machine for selection comprises:
presenting a plurality of snapshots of the virtual machine for selection, wherein the plurality of snapshots comprise different states of the virtual machine. 7. The method of claim 6, further comprising:
responsive to a selection of a second snapshot, deploying a third virtual machine of the selected second snapshot in a third console, wherein the third virtual machine is deployed without closing the virtual machine in the first console. 8. The method of claim 1, wherein the at least one snapshot of the virtual machine for selection is presented at a menu of the virtual machine. 9. The method of claim 1, wherein the at least one snapshot of the virtual machine for selection is presented at a virtualization manager. 10. The method of claim 1, wherein the snapshot comprises a power state of the virtual machine. 11. The method of claim 1, wherein the snapshot comprises data and a memory state of an operating system of the virtual machine. 12. The method of claim 1, wherein the snapshot comprises settings and configuration data of the virtual machine. 13. A non-transitory computer readable storage medium having computer readable program code stored thereon for causing a computer system to perform a method for viewing a snapshot of a virtual machine, the method comprising:
during operation of a virtual machine in a first console, presenting a plurality of snapshots of the virtual machine for selection, wherein the plurality of snapshots comprise different states of the virtual machine; and responsive to a selection of a snapshot of the plurality of snapshots, deploying a second virtual machine of the selected snapshot in a second console, wherein the second virtual machine is deployed without closing the virtual machine in the first console. 14. The computer readable storage medium of claim 13, wherein the second virtual machine is a clone of the virtual machine. 15. The computer readable storage medium of claim 13, wherein the second virtual machine is a linked clone of the virtual machine. 16. The computer readable storage medium of claim 13, wherein the plurality of snapshots of the virtual machine for selection is presented at a menu of the virtual machine. 17. The computer readable storage medium of claim 13, wherein the plurality of snapshots of the virtual machine for selection is presented at a virtualization manager. 18. A computer-implemented method for viewing a snapshot of a virtual machine, the method comprising:
deploying the virtual machine in a first console; creating at least one snapshot of the virtual machine; during operation of the virtual machine in a first console, presenting at least one snapshot of the virtual machine for selection, wherein the at least one snapshot comprises a previous state of the virtual machine; and responsive to a selection of the snapshot, deploying a second virtual machine of the selected snapshot in a second console, wherein the second virtual machine is a linked clone of the virtual machine and is deployed without closing the virtual machine in the first console. 19. The method of claim 18, wherein the presenting the at least one snapshot of the virtual machine for selection comprises:
presenting a plurality of snapshots of the virtual machine for selection, wherein the plurality of snapshots comprise different states of the virtual machine. 20. The method of claim 19, further comprising:
responsive to a selection of a second snapshot, deploying a third virtual machine of the selected second snapshot in a third console, wherein the third virtual machine is deployed without closing the virtual machine in the first console. | 2,100 |
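The workflow claimed above can be modeled in miniature: take a snapshot, keep the original VM running and changing, then deploy a second VM restored to the selected snapshot without stopping the first. This is a toy model, not any real hypervisor API; the `VM` class and its method names are invented for illustration.

```python
import copy

class VM:
    def __init__(self, name, state):
        self.name, self.state, self.running = name, state, True
        self.snapshots = []  # list of (label, saved copy of state)

    def snapshot(self, label):
        self.snapshots.append((label, copy.deepcopy(self.state)))

    def deploy_snapshot_clone(self, label):
        """Return a second VM restored to the selected snapshot;
        the original VM is neither stopped nor modified."""
        saved = dict(self.snapshots)[label]
        return VM(f"{self.name}-clone-{label}", copy.deepcopy(saved))

vm = VM("dev", {"files": ["a.txt"]})
vm.snapshot("before-upgrade")
vm.state["files"].append("b.txt")        # VM keeps changing after the snapshot

clone = vm.deploy_snapshot_clone("before-upgrade")
print(vm.running, vm.state["files"])     # True ['a.txt', 'b.txt']
print(clone.state["files"])              # ['a.txt']
```

A linked clone, as in claims 3 and 15, would share unchanged disk blocks with the parent instead of deep-copying the whole state; the deep copy here just keeps the sketch self-contained.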
6,660 | 6,660 | 15,208,250 | 2,152 | A set of objects is defined from a plurality of objects. The objects are defined with a common structure including properties. The plurality of objects is to be clustered into clusters. A clustering criterion for determining the clusters is defined. The clusters are non-intersecting sets of objects from the set of objects. Object distance between a first object and a second object from the set of objects is computed. The computation of the object distance is based on computation of distances between property values defined for properties from the structure of the objects from the set. When the first object is a part of the cluster, the second objects is added to the cluster when the object distance complies with the clustering criterion. The clusters are determined in a number of iterations based on evaluations of the distances between objects from subsequently determined subsets of objects from the plurality. | 1. A computer implemented method to determine clusters in a plurality of objects, the method comprising:
defining a clustering criterion for determining a cluster; a processor, computing property distances between values for properties of the objects from a set of the plurality of objects; a processor, computing object distance between a first object and a second object from the set of objects based on the property distances; and when the first object is a part of the cluster, adding the second object to the cluster when the object distance complies with the clustering criterion. 2. The method of claim 1, further comprising:
the processor, iteratively determining the clusters based on a plurality of iterations for evaluations of distances between objects from the plurality of objects according to the clustering criterion, wherein a subsequent subset of objects from the plurality of objects is evaluated at a subsequent iteration, wherein the clusters are non-intersecting sets of objects from the plurality of objects. 3. The method of claim 2, further comprising:
determining the set of objects to be clustered; wherein objects from the set of objects are defined according to a structure corresponding to a type of the objects from the set, wherein the structure defines the properties associated with the type of the objects. 4. The method of claim 2, wherein the cluster from the clusters is associated with a representative object from the set of objects. 5. The method of claim 4, wherein the clustering criterion is associated with a definition for measuring the object distance between two objects from the set of objects, and wherein a cluster comprises one or more objects from the set of objects complying with the clustering criterion, the clustering criterion defining a threshold value for the distance between the representative object for the cluster and other objects within the cluster. 6. The method of claim 5, wherein iteratively determining the clusters based on the plurality of iterations for evaluations of the distances between the objects from the plurality of objects according to the clustering criterion further comprises:
the processor, determining a first cluster comprising a maximum number of objects from the set of objects that comply with the defined clustering criterion, wherein the first cluster is determined through evaluating the distances between objects from the set of objects; and the processor, iteratively determining the rest of the clusters based on evaluations of distances between objects from subsets of objects from the plurality of objects, wherein the subsequent subset of objects is determined based on one or more defined clusters at one or more preceding iterations. 7. The method of claim 6, wherein during a first iteration from the iterative determination of the clusters the first cluster is determined, wherein the first iteration is associated with the set of objects for evaluation, and wherein a subsequent subset of objects associated with a subsequent iteration is defined based on excluding objects from the plurality of objects, and wherein the excluded objects are objects which are included in one or more iteratively defined clusters during one or more preceding iterations. 8. The method of claim 6, wherein determining the first cluster further comprises:
defining an ordered list of objects associated with the first object based on computing distances between the first object and rest of objects from the plurality of objects; defining a set of spheres centered around the first object, wherein the set of spheres are defined with radiuses in an increasing order starting from the defined threshold value and increasing with a step equal to the defined threshold value; evaluating objects included in a first pair of spheres based on evaluations of distances between the objects, wherein the evaluated distances are defined between objects included in a first sphere and objects included in a subsequent sphere, where the first and the subsequent sphere are nested spheres; determining an enriched neighborhood of objects from the objects of the first pair of spheres that includes objects complying with the defined clustering criterion, and wherein the enriched neighborhood of objects comprises the maximum number of objects compared to other subsets of the objects from the first pair of spheres, other subsets complying with the defined clustering criterion; and defining the first cluster to include the objects from the enriched neighborhood. 9. A computer system to determine clusters in a set of objects, comprising:
a processor; a memory in association with the processor storing instructions related to:
define a clustering criterion for determining a cluster, wherein the clusters are non-intersecting sets of objects from the set of objects, wherein the clustering criterion is associated with a definition to measure a distance between two objects from the set of objects, and wherein the clustering criterion defines a threshold value for the distance between objects within the cluster;
compute property distances between values for properties of the objects from the set;
compute object distance between a first object and a second object from the set of objects based on the property distances; and
when the first object is a part of the cluster, add the second object to the cluster when the object distance complies with the clustering criterion. 10. The system of claim 9, wherein the memory further stores instructions related to:
iteratively determine the clusters based on a plurality of iterations for evaluations of distances between objects from the plurality of objects according to the clustering criterion, wherein a subsequent subset of objects from the plurality of objects is evaluated at a subsequent iteration, wherein a cluster from the clusters is associated with a representative object from the set of objects. 11. The system of claim 9, wherein the memory further stores instructions to:
determine the set of objects to be clustered; wherein objects from the set of objects are defined according to a structure corresponding to a type of the objects from the set, wherein the structure defines the properties associated with the type of the objects. 12. The system of claim 9, wherein the instructions related to iteratively determining the clusters based on the plurality of iterations for evaluations of the distances between the objects from the plurality of objects according to the clustering criterion further comprise instructions to:
determine a first cluster comprising a maximum number of objects from the set of objects that comply with the defined clustering criterion, wherein the first cluster is determined through evaluating the distances between objects from the set of objects; and iteratively determine the rest of the clusters based on evaluations of distances between objects from subsets of objects from the plurality of objects, wherein the subsequent subset of objects is determined based on one or more defined clusters at one or more preceding iterations. 13. The system of claim 12, wherein during a first iteration from the iterative determination of the clusters the first cluster is determined, wherein the first iteration is associated with the set of objects for evaluation, and wherein a subsequent subset of objects associated with a subsequent iteration is defined based on excluding objects from the plurality of objects, and wherein the excluded objects are objects which are included in one or more iteratively defined clusters during one or more preceding iterations. 14. The system of claim 12, wherein the instructions related to determining the first cluster further comprise instructions related to:
defining an ordered list of objects associated with the first object based on computing distances between the first object and rest of objects from the plurality of objects; defining a set of spheres centered around the first object, wherein the set of spheres are defined with radiuses in an increasing order starting from the defined threshold value and increasing with a step equal to the defined threshold value; evaluating objects included in a first pair of spheres based on evaluations of distances between the objects, wherein the evaluated distances are defined between objects included in a first sphere and objects included in a subsequent sphere, where the first and the subsequent sphere are nested spheres; determining an enriched neighborhood of objects from the objects of the first pair of spheres that includes objects complying with the defined clustering criterion, and wherein the enriched neighborhood of objects comprises the maximum number of objects compared to other subsets of the objects from the first pair of spheres, other subsets complying with the defined clustering criterion; and defining the first cluster to include the objects from the enriched neighborhood. 15. A non-transitory computer-readable medium storing instructions, which when executed cause a computer system to perform operations comprising:
defining a clustering criterion for determining a cluster, wherein the clusters are non-intersecting sets of objects from the set of objects, wherein the clustering criterion is associated with a definition to measure a distance between two objects from the set of objects, and wherein the clustering criterion defines a threshold value for the distance between objects within the cluster; computing property distances between values for properties of the objects from the set; computing object distance between a first object and a second object from the set of objects based on the property distances; and when the first object is a part of the cluster, adding the second object to the cluster when the object distance complies with the clustering criterion. 16. The computer-readable medium of claim 15, further comprising instructions to:
iteratively determine the clusters based on a plurality of iterations for evaluations of distances between objects from the plurality of objects according to the clustering criterion, wherein a subsequent subset of objects from the plurality of objects is evaluated at a subsequent iteration, wherein a cluster from the clusters is associated with a representative object from the set of objects. 17. The computer-readable medium of claim 15, further comprising instructions to:
determine the set of objects to be clustered; wherein objects from the set of objects are defined according to a structure corresponding to a type of the objects from the set, wherein the structure defines the properties associated with the type of the objects. 18. The computer-readable medium of claim 15, wherein the instructions related to iteratively determining the clusters based on the plurality of iterations for evaluations of the distances between the objects from the plurality of objects according to the clustering criterion further comprise instructions related to:
determining a first cluster comprising a maximum number of objects from the set of objects that comply with the defined clustering criterion, wherein the first cluster is determined through evaluating the distances between objects from the set of objects; and iteratively determining the rest of the clusters based on evaluations of distances between objects from subsets of objects from the plurality of objects, wherein the subsequent subset of objects is determined based on one or more defined clusters at one or more preceding iterations. 19. The computer-readable medium of claim 18, wherein during a first iteration from the iterative determination of the clusters the first cluster is determined, wherein the first iteration is associated with the set of objects for evaluation, and wherein a subsequent subset of objects associated with a subsequent iteration is defined based on excluding objects from the plurality of objects, and wherein the excluded objects are objects which are included in one or more iteratively defined clusters during one or more preceding iterations. 20. The computer-readable medium of claim 17, wherein the instructions related to determining the first cluster further comprise instructions related to:
defining an ordered list of objects associated with the first object based on computing distances between the first object and rest of objects from the plurality of objects; defining a set of spheres centered around the first object, wherein the set of spheres are defined with radiuses in an increasing order starting from the defined threshold value and increasing with a step equal to the defined threshold value; evaluating objects included in a first pair of spheres based on evaluations of distances between the objects, wherein the evaluated distances are defined between objects included in a first sphere and objects included in a subsequent sphere, where the first and the subsequent sphere are nested spheres; determining an enriched neighborhood of objects from the objects of the first pair of spheres that includes objects complying with the defined clustering criterion, and wherein the enriched neighborhood of objects comprises the maximum number of objects compared to other subsets of the objects from the first pair of spheres, other subsets complying with the defined clustering criterion; and defining the first cluster to include the objects from the enriched neighborhood. | A set of objects is defined from a plurality of objects. The objects are defined with a common structure including properties. The plurality of objects is to be clustered into clusters. A clustering criterion for determining the clusters is defined. The clusters are non-intersecting sets of objects from the set of objects. Object distance between a first object and a second object from the set of objects is computed. The computation of the object distance is based on computation of distances between property values defined for properties from the structure of the objects from the set. When the first object is a part of the cluster, the second object is added to the cluster when the object distance complies with the clustering criterion.
The clusters are determined in a number of iterations based on evaluations of the distances between objects from subsequently determined subsets of objects from the plurality. 1. A computer implemented method to determine clusters in a plurality of objects, the method comprising:
defining a clustering criterion for determining a cluster; a processor, computing property distances between values for properties of the objects from a set of the plurality of objects; a processor, computing object distance between a first object and a second object from the set of objects based on the property distances; and when the first object is a part of the cluster, adding the second object to the cluster when the object distance complies with the clustering criterion. 2. The method of claim 1, further comprising:
the processor, iteratively determining the clusters based on a plurality of iterations for evaluations of distances between objects from the plurality of objects according to the clustering criterion, wherein a subsequent subset of objects from the plurality of objects is evaluated at a subsequent iteration, wherein the clusters are non-intersecting sets of objects from the plurality of objects. 3. The method of claim 2, further comprising:
determining the set of objects to be clustered; wherein objects from the set of objects are defined according to a structure corresponding to a type of the objects from the set, wherein the structure defines the properties associated with the type of the objects. 4. The method of claim 2, wherein the cluster from the clusters is associated with a representative object from the set of objects. 5. The method of claim 4, wherein the clustering criterion is associated with a definition for measuring the object distance between two objects from the set of objects, and wherein a cluster comprises one or more objects from the set of objects complying with the clustering criterion, the clustering criterion defining a threshold value for the distance between the representative object for the cluster and other objects within the cluster. 6. The method of claim 5, wherein iteratively determining the clusters based on the plurality of iterations for evaluations of the distances between the objects from the plurality of objects according to the clustering criterion further comprises:
the processor, determining a first cluster comprising a maximum number of objects from the set of objects that comply with the defined clustering criterion, wherein the first cluster is determined through evaluating the distances between objects from the set of objects; and the processor, iteratively determining rest of the clusters based on evaluations of distances between objects from subsets of objects from the plurality of objects, wherein the subsequent subset of objects is determined based on one or more defined clusters at one or more preceding iterations. 7. The method of claim 6, wherein during a first iteration from the iterative determination of the clusters the first cluster is determined, wherein the first iteration is associated with the set of objects for evaluation, and wherein a subsequent subset of objects associated with a subsequent iteration is defined based on excluding objects from the plurality of objects, and wherein the excluded objects are objects which are included in one or more iteratively defined clusters during one or more preceding iterations. 8. The method of claim 6, wherein determining the first cluster further comprises:
defining an ordered list of objects associated with the first object based on computing distances between the first object and rest of objects from the plurality of objects; defining a set of spheres centered around the first object, wherein the set of spheres are defined with radiuses in an increasing order starting from the defined threshold value and increasing with a step equal to the defined threshold value; evaluating objects included in a first pair of spheres based on evaluations of distances between the objects, wherein the evaluated distances are defined between objects included in a first sphere and objects included in a subsequent sphere, where the first and the subsequent sphere are nested spheres; determining an enriched neighborhood of objects from the objects of the first pair of spheres that includes objects complying with the defined clustering criterion, and wherein the enriched neighborhood of objects comprises the maximum number of objects compared to other subsets of the objects from the first pair of spheres, other subsets complying with the defined clustering criterion; and defining the first cluster to include the objects from the enriched neighborhood. 9. A computer system to determine clusters in a set of objects, comprising:
a processor; a memory in association with the processor storing instructions related to:
define a clustering criterion for determining a cluster, wherein the clusters are non-intersecting sets of objects from the set of objects, wherein the clustering criterion is associated with a definition to measure a distance between two objects from the set of objects, and wherein the clustering criterion defining a threshold value for the distance between objects within the cluster;
compute property distances between values for properties of the objects from the set;
compute object distance between a first object and a second object from the set of objects based on the property distances; and
when the first object is a part of the cluster, add the second object to the cluster when the object distance complies with the clustering criterion. 10. The system of claim 9, wherein the memory further stores instructions related to:
iteratively determine the clusters based on a plurality of iterations for evaluations of distances between objects from the plurality of objects according to the clustering criterion, wherein a subsequent subset of objects from the plurality of objects is evaluated at a subsequent iteration, wherein a cluster from the clusters is associated with a representative object from the set of objects. 11. The system of claim 9, wherein the memory further stores instructions to:
determine the set of objects to be clustered; wherein objects from the set of objects are defined according to a structure corresponding to a type of the objects from the set, wherein the structure defines the properties associated with the type of the objects. 12. The system of claim 9, wherein the instructions related to iteratively determining the clusters based on the plurality of iterations for evaluations of the distances between the objects from the plurality of objects according to the clustering criterion further comprise instructions to:
determine a first cluster comprising a maximum number of objects from the set of objects that comply with the defined clustering criterion, wherein the first cluster is determined through evaluating the distances between objects from the set of objects; and iteratively determine the rest of the clusters based on evaluations of distances between objects from subsets of objects from the plurality of objects, wherein the subsequent subset of objects is determined based on one or more defined clusters at one or more preceding iterations. 13. The system of claim 12, wherein during a first iteration from the iterative determination of the clusters the first cluster is determined, wherein the first iteration is associated with the set of objects for evaluation, and wherein a subsequent subset of objects associated with a subsequent iteration is defined based on excluding objects from the plurality of objects, and wherein the excluded objects are objects which are included in one or more iteratively defined clusters during one or more preceding iterations. 14. The system of claim 12, wherein the instructions related to determining the first cluster further comprise instructions related to:
defining an ordered list of objects associated with the first object based on computing distances between the first object and rest of objects from the plurality of objects; defining a set of spheres centered around the first object, wherein the set of spheres are defined with radiuses in an increasing order starting from the defined threshold value and increasing with a step equal to the defined threshold value; evaluating objects included in a first pair of spheres based on evaluations of distances between the objects, wherein the evaluated distances are defined between objects included in a first sphere and objects included in a subsequent sphere, where the first and the subsequent sphere are nested spheres; determining an enriched neighborhood of objects from the objects of the first pair of spheres that includes objects complying with the defined clustering criterion, and wherein the enriched neighborhood of objects comprises the maximum number of objects compared to other subsets of the objects from the first pair of spheres, other subsets complying with the defined clustering criterion; and defining the first cluster to include the objects from the enriched neighborhood. 15. A non-transitory computer-readable medium storing instructions, which when executed cause a computer system to perform operations comprising:
defining a clustering criterion for determining a cluster, wherein the clusters are non-intersecting sets of objects from the set of objects, wherein the clustering criterion is associated with a definition to measure a distance between two objects from the set of objects, and wherein the clustering criterion defining a threshold value for the distance between objects within the cluster; computing property distances between values for properties of the objects from the set; computing object distance between a first object and a second object from the set of objects based on the property distances; and when the first object is a part of the cluster, adding the second object to the cluster when the object distance complies with the clustering criterion. 16. The computer-readable medium of claim 15, further comprising instructions to:
iteratively determine the clusters based on a plurality of iterations for evaluations of distances between objects from the plurality of objects according to the clustering criterion, wherein a subsequent subset of objects from the plurality of objects is evaluated at a subsequent iteration, wherein a cluster from the clusters is associated with a representative object from the set of objects. 17. The computer-readable medium of claim 15, further comprising instructions to:
determine the set of objects to be clustered; wherein objects from the set of objects are defined according to a structure corresponding to a type of the objects from the set, wherein the structure defines the properties associated with the type of the objects. 18. The computer-readable medium of claim 15, wherein the instructions related to iteratively determining the clusters based on the plurality of iterations for evaluations of the distances between the objects from the plurality of objects according to the clustering criterion further comprise instructions related to:
determining a first cluster comprising a maximum number of objects from the set of objects that comply with the defined clustering criterion, wherein the first cluster is determined through evaluating the distances between objects from the set of objects; and iteratively determining the rest of the clusters based on evaluations of distances between objects from subsets of objects from the plurality of objects, wherein the subsequent subset of objects is determined based on one or more defined clusters at one or more preceding iterations. 19. The computer-readable medium of claim 18, wherein during a first iteration from the iterative determination of the clusters the first cluster is determined, wherein the first iteration is associated with the set of objects for evaluation, and wherein a subsequent subset of objects associated with a subsequent iteration is defined based on excluding objects from the plurality of objects, and wherein the excluded objects are objects which are included in one or more iteratively defined clusters during one or more preceding iterations. 20. The computer-readable medium of claim 17, wherein the instructions related to determining the first cluster further comprise instructions related to:
defining an ordered list of objects associated with the first object based on computing distances between the first object and rest of objects from the plurality of objects; defining a set of spheres centered around the first object, wherein the set of spheres are defined with radiuses in an increasing order starting from the defined threshold value and increasing with a step equal to the defined threshold value; evaluating objects included in a first pair of spheres based on evaluations of distances between the objects, wherein the evaluated distances are defined between objects included in a first sphere and objects included in a subsequent sphere, where the first and the subsequent sphere are nested spheres; determining an enriched neighborhood of objects from the objects of the first pair of spheres that includes objects complying with the defined clustering criterion, and wherein the enriched neighborhood of objects comprises the maximum number of objects compared to other subsets of the objects from the first pair of spheres, other subsets complying with the defined clustering criterion; and defining the first cluster to include the objects from the enriched neighborhood. | 2,100 |
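The clustering flow recited in the claims above (a threshold-based criterion, object distances aggregated from per-property distances, and iterative determination of non-intersecting clusters where each iteration picks a maximum-membership first cluster and excludes its members) can be sketched in Python. This is an editor's illustration only: the function and variable names, the Euclidean aggregation of property distances, and the greedy search over candidate representatives are assumptions, not part of the patent text, and the claimed nested-sphere optimization for limiting distance evaluations is omitted for brevity.

```python
import math

def object_distance(a, b):
    # Aggregate per-property distances into one object distance
    # (Euclidean aggregation is an assumption, not claimed).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def greedy_threshold_clusters(objects, threshold):
    """Iteratively determine non-intersecting clusters: at each iteration,
    pick the representative whose threshold-neighborhood has the maximum
    number of objects, then exclude its members from later iterations."""
    remaining = list(range(len(objects)))
    clusters = []
    while remaining:
        best = None
        for rep in remaining:
            # Objects complying with the clustering criterion for this rep.
            members = [i for i in remaining
                       if object_distance(objects[rep], objects[i]) <= threshold]
            if best is None or len(members) > len(best[1]):
                best = (rep, members)
        rep, members = best
        clusters.append({"representative": rep, "members": members})
        # Subsequent subset excludes objects already placed in a cluster.
        remaining = [i for i in remaining if i not in members]
    return clusters

points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
result = greedy_threshold_clusters(points, threshold=0.5)
```

The nested-sphere evaluation in the claims appears to serve as a pruning step for this search: ordering objects by distance to a candidate and examining only adjacent spherical shells avoids computing distances to objects that cannot satisfy the threshold criterion.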
6,661 | 6,661 | 15,973,741 | 2,191 | A system includes one or more code development servers operable to monitor development of code files and one or more code execution servers operable to execute the code files. One or more code analysis tools of the system include instructions that when executed by at least one processing device result in collecting code development data associated with development of the code files on a per user basis and determining a predicted code execution performance score of one or more selected files of the code files based on the code development data. One or more resources of the one or more code execution servers associated with execution of the one or more selected files are predictively allocated based on a predicted code execution performance score. One or more code execution metrics are captured associated with executing the one or more selected files on the one or more code execution servers. | 1. A system, comprising:
one or more code development servers operable to monitor development of a plurality of code files; one or more code execution servers operable to execute the code files; and one or more code analysis tools executable by at least one processing device and comprising a plurality of instructions that when executed by the at least one processing device result in:
collecting a plurality of code development data associated with development of the code files on a per user basis;
determining a predicted code execution performance score of one or more selected files of the code files based on the code development data, the predicted code execution performance score indicative of a predicted likelihood of a code quality issue in the one or more selected files;
predictively allocating one or more resources of the one or more code execution servers associated with execution of the one or more selected files based on the predicted code execution performance score; and
capturing one or more code execution metrics associated with executing the one or more selected files on the one or more code execution servers. 2. The system of claim 1, further comprising instructions that when executed by the processing device result in:
accessing a plurality of infrastructure usage data comprising memory system utilization data, processing system utilization data, and database utilization data to determine a plurality of historical usage data associated with a plurality of users; and incorporating the infrastructure usage data into the predicted code execution performance score. 3. The system of claim 2, further comprising instructions that when executed by the processing device result in:
linking a plurality of personnel data sources with the infrastructure usage data; and analyzing the personnel data sources and the infrastructure usage data to identify one or more shared characteristics in the personnel data sources with one or more similar performance patterns in the infrastructure usage data. 4. The system of claim 3, wherein the one or more shared characteristics comprise one or more of: resource skill sets, resource locations, hiring data, and associated job descriptions. 5. The system of claim 1, wherein the code development data comprises one or more of: design review data, code review data, and quality assurance review data with statistics indicative of one or more errors identified in the code files. 6. The system of claim 1, wherein the code development data comprises a plurality of code quality statistics indicative of one or more defects identified with respect to one or more requirements associated with the code files or a portion of code in one or more of the code files. 7. The system of claim 1, wherein the code development data comprises a plurality of code complexity metrics indicative of components within the code files. 8. The system of claim 1, wherein the code development data comprises a plurality of code version metrics indicative of a number of versions of the code files. 9. The system of claim 1, further comprising instructions that when executed by the processing device result in:
interfacing the one or more code analysis tools with a plurality of execution performance monitoring agents operable to track process execution time, subcomponent execution time, input/output resource utilization, processing resource utilization, memory resource utilization, and storage resource utilization to identify greater resource consuming trends and correlations on a per user basis. 10. The system of claim 9, further comprising instructions that when executed by the processing device result in:
monitoring for the trends and correlations on a time of day basis. 11. The system of claim 9, further comprising instructions that when executed by the processing device result in:
adjusting the predicted code execution performance score based on one or more previously observed resource consumption patterns and a current level of resource consumption of the one or more code execution servers. 12. The system of claim 1, further comprising a data warehouse system operable to store a plurality of records indicative of resource allocation of the one or more code execution servers and the one or more code execution metrics. 13. The system of claim 12, further comprising instructions that when executed by the processing device result in:
extracting a plurality of data feeds from the data warehouse system indicative of the code development data, personnel data, and the one or more code execution metrics; and delivering the data feeds to the one or more code analysis tools. 14. The system of claim 13, further comprising instructions that when executed by the processing device result in:
establishing a plurality of correlations across a plurality of domains captured in the data warehouse system, the correlations comprising linking server performance with one or more of: process execution performance, code complexity, resource location, service-level agreement metrics, storage area network capacity, and license utilization. 15. The system of claim 14, further comprising instructions that when executed by the processing device result in:
outputting one or more visual depictions of the correlations to an interactive dashboard interface. 16. The system of claim 1, further comprising instructions that when executed by the processing device result in:
adjusting one or more data values used to determine the predicted code execution performance score based on detecting an error condition associated with a component included in or used by the one or more selected files. 17. The system of claim 1, further comprising instructions that when executed by the processing device result in:
adjusting one or more data values used to determine the predicted code execution performance score based on detecting performance exceeding a resource utilization limit associated with a component included in or used by the one or more selected files. 18. The system of claim 1, wherein the code files are stored in a code repository using a version management system, and further comprising instructions that when executed by the processing device result in:
determining an updated value of the code development data responsive to a check-in operation of a new file or a new version of a file into the code repository using the version management system. 19. The system of claim 1, wherein the predicted code execution performance score is weighted based at least in part on a defect history of a user, a level of seniority of the user, an assigned work location of the user, and a level of code complexity of the one or more selected files. 20. The system of claim 1, wherein predictively allocating one or more resources comprises adjusting one or more of a scheduled start time, an execution priority, setting a maximum processing resource threshold, setting a maximum network resource threshold, setting a maximum memory usage threshold, setting a maximum storage usage threshold, and setting a maximum execution time threshold. 21. A computer program product comprising a storage medium embodied with computer program instructions that when executed by a computer cause the computer to implement:
collecting a plurality of code development data associated with development of the code files on a per user basis; determining a predicted code execution performance score of one or more selected files of the code files based on the code development data, the predicted code execution performance score indicative of a predicted likelihood of a code quality issue in the one or more selected files; predictively allocating one or more resources of the one or more code execution servers associated with execution of the one or more selected files based on the predicted code execution performance score; and capturing one or more code execution metrics associated with executing the one or more selected files on the one or more code execution servers. 22. The computer program product of claim 21, further comprising computer program instructions that when executed by the computer cause the computer to implement:
accessing a plurality of infrastructure usage data comprising memory system utilization data, processing system utilization data, and database utilization data to determine a plurality of historical usage data associated with a plurality of users; and incorporating the infrastructure usage data into the predicted code execution performance score. 23. The computer program product of claim 21, further comprising computer program instructions that when executed by the computer cause the computer to implement:
linking a plurality of personnel data sources with the infrastructure usage data; and analyzing the personnel data sources and the infrastructure usage data to identify one or more shared characteristics in the personnel data sources with one or more similar performance patterns in the infrastructure usage data. 24. The computer program product of claim 21, further comprising computer program instructions that when executed by the computer cause the computer to implement:
interfacing the one or more code analysis tools with a plurality of execution performance monitoring agents operable to track process execution time, subcomponent execution time, input/output resource utilization, processing resource utilization, memory resource utilization, and storage resource utilization to identify greater resource consuming trends and correlations on a per user basis. 25. The computer program product of claim 21, further comprising computer program instructions that when executed by the computer cause the computer to implement:
monitoring for the trends and correlations on a time of day basis; and adjusting the predicted code execution performance score based on one or more previously observed resource consumption patterns and a current level of resource consumption of the one or more code execution servers. | A system includes one or more code development servers operable to monitor development of code files and one or more code execution servers operable to execute the code files. One or more code analysis tools of the system include instructions that when executed by at least one processing device result in collecting code development data associated with development of the code files on a per user basis and determining a predicted code execution performance score of one or more selected files of the code files based on the code development data. One or more resources of the one or more code execution servers associated with execution of the one or more selected files are predictively allocated based on a predicted code execution performance score. One or more code execution metrics are captured associated with executing the one or more selected files on the one or more code execution servers.1. A system, comprising:
one or more code development servers operable to monitor development of a plurality of code files; one or more code execution servers operable to execute the code files; and one or more code analysis tools executable by at least one processing device and comprising a plurality of instructions that when executed by the at least one processing device result in:
collecting a plurality of code development data associated with development of the code files on a per user basis;
determining a predicted code execution performance score of one or more selected files of the code files based on the code development data, the predicted code execution performance score indicative of a predicted likelihood of a code quality issue in the one or more selected files;
predictively allocating one or more resources of the one or more code execution servers associated with execution of the one or more selected files based on the predicted code execution performance score; and
capturing one or more code execution metrics associated with executing the one or more selected files on the one or more code execution servers. 2. The system of claim 1, further comprising instructions that when executed by the processing device result in:
accessing a plurality of infrastructure usage data comprising memory system utilization data, processing system utilization data, and database utilization data to determine a plurality of historical usage data associated with a plurality of users; and incorporating the infrastructure usage data into the predicted code execution performance score. 3. The system of claim 2, further comprising instructions that when executed by the processing device result in:
linking a plurality of personnel data sources with the infrastructure usage data; and analyzing the personnel data sources and the infrastructure usage data to identify one or more shared characteristics in the personnel data sources with one or more similar performance patterns in the infrastructure usage data. 4. The system of claim 3, wherein the one or more shared characteristics comprise one or more of: resource skill sets, resource locations, hiring data, and associated job descriptions. 5. The system of claim 1, wherein the code development data comprises one or more of: design review data, code review data, and quality assurance review data with statistics indicative of one or more errors identified in the code files. 6. The system of claim 1, wherein the code development data comprises a plurality of code quality statistics indicative of one or more defects identified with respect to one or more requirements associated with the code files or a portion of code in one or more of the code files. 7. The system of claim 1, wherein the code development data comprises a plurality of code complexity metrics indicative of components within the code files. 8. The system of claim 1, wherein the code development data comprises a plurality of code version metrics indicative of a number of versions of the code files. 9. The system of claim 1, further comprising instructions that when executed by the processing device result in:
interfacing the one or more code analysis tools with a plurality of execution performance monitoring agents operable to track process execution time, subcomponent execution time, input/output resource utilization, processing resource utilization, memory resource utilization, and storage resource utilization to identify greater resource consuming trends and correlations on a per user basis. 10. The system of claim 9, further comprising instructions that when executed by the processing device result in:
monitoring for the trends and correlations on a time of day basis. 11. The system of claim 9, further comprising instructions that when executed by the processing device result in:
adjusting the predicted code execution performance score based on one or more previously observed resource consumption patterns and a current level of resource consumption of the one or more code execution servers. 12. The system of claim 1, further comprising a data warehouse system operable to store a plurality of records indicative of resource allocation of the one or more code execution servers and the one or more code execution metrics. 13. The system of claim 12, further comprising instructions that when executed by the processing device result in:
extracting a plurality of data feeds from the data warehouse system indicative of the code development data, personnel data, and the one or more code execution metrics; and delivering the data feeds to the one or more code analysis tools. 14. The system of claim 13, further comprising instructions that when executed by the processing device result in:
establishing a plurality of correlations across a plurality of domains captured in the data warehouse system, the correlations comprising linking server performance with one or more of: process execution performance, code complexity, resource location, service-level agreement metrics, storage area network capacity, and license utilization. 15. The system of claim 14, further comprising instructions that when executed by the processing device result in:
outputting one or more visual depictions of the correlations to an interactive dashboard interface. 16. The system of claim 1, further comprising instructions that when executed by the processing device result in:
adjusting one or more data values used to determine the predicted code execution performance score based on detecting an error condition associated with a component included in or used by the one or more selected files. 17. The system of claim 1, further comprising instructions that when executed by the processing device result in:
adjusting one or more data values used to determine the predicted code execution performance score based on detecting performance exceeding a resource utilization limit associated with a component included in or used by the one or more selected files. 18. The system of claim 1, wherein the code files are stored in a code repository using a version management system, and further comprising instructions that when executed by the processing device result in:
determining an updated value of the code development data responsive to a check-in operation of a new file or a new version of a file into the code repository using the version management system. 19. The system of claim 1, wherein the predicted code execution performance score is weighted based at least in part on a defect history of a user, a level of seniority of the user, an assigned work location of the user, and a level of code complexity of the one or more selected files. 20. The system of claim 1, wherein predictively allocating one or more resources comprises adjusting one or more of a scheduled start time, an execution priority, setting a maximum processing resource threshold, setting a maximum network resource threshold, setting a maximum memory usage threshold, setting a maximum storage usage threshold, and setting a maximum execution time threshold. 21. A computer program product comprising a storage medium embodied with computer program instructions that when executed by a computer cause the computer to implement:
collecting a plurality of code development data associated with development of the code files on a per user basis; determining a predicted code execution performance score of one or more selected files of the code files based on the code development data, the predicted code execution performance score indicative of a predicted likelihood of a code quality issue in the one or more selected files; predictively allocating one or more resources of the one or more code execution servers associated with execution of the one or more selected files based on the predicted code execution performance score; and capturing one or more code execution metrics associated with executing the one or more selected files on the one or more code execution servers. 22. The computer program product of claim 21, further comprising computer program instructions that when executed by the computer cause the computer to implement:
accessing a plurality of infrastructure usage data comprising memory system utilization data, processing system utilization data, and database utilization data to determine a plurality of historical usage data associated with a plurality of users; and incorporating the infrastructure usage data into the predicted code execution performance score. 23. The computer program product of claim 21, further comprising computer program instructions that when executed by the computer cause the computer to implement:
linking a plurality of personnel data sources with the infrastructure usage data; and analyzing the personnel data sources and the infrastructure usage data to identify one or more shared characteristics in the personnel data sources with one or more similar performance patterns in the infrastructure usage data. 24. The computer program product of claim 21, further comprising computer program instructions that when executed by the computer cause the computer to implement:
interfacing the one or more code analysis tools with a plurality of execution performance monitoring agents operable to track process execution time, subcomponent execution time, input/output resource utilization, processing resource utilization, memory resource utilization, and storage resource utilization to identify greater resource consuming trends and correlations on a per user basis. 25. The computer program product of claim 21, further comprising computer program instructions that when executed by the computer cause the computer to implement:
monitoring for the trends and correlations on a time of day basis; and adjusting the predicted code execution performance score based on one or more previously observed resource consumption patterns and a current level of resource consumption of the one or more code execution servers. | 2,100 |
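The record above recites weighting a predicted code execution performance score by a user's defect history, seniority, work location, and code complexity (claim 19), then mapping the score to scheduling and resource ceilings (claim 20). A toy sketch of that scoring-and-throttling flow follows; all weights, field names, and thresholds are invented for illustration and are not the patent's actual formula:

```python
from dataclasses import dataclass

@dataclass
class DeveloperProfile:
    defect_rate: float        # historical defects per unit of code, roughly 0..1
    seniority_years: float
    code_complexity: float    # e.g. mean cyclomatic complexity of the files

def predicted_score(p: DeveloperProfile) -> float:
    """Weighted risk score in [0, 1]; higher means a code quality
    issue is considered more likely (weights are invented)."""
    seniority_risk = 1.0 / (1.0 + p.seniority_years)  # more experience, less risk
    return (0.5 * min(p.defect_rate, 1.0)
            + 0.2 * seniority_risk
            + 0.3 * min(p.code_complexity / 50.0, 1.0))

def allocate(score: float) -> dict:
    """Map the score to claim-20 style throttles: execution priority plus
    processing and execution-time ceilings (thresholds are invented)."""
    if score > 0.6:
        return {"priority": "low", "max_cpu_pct": 25, "max_exec_minutes": 30}
    return {"priority": "normal", "max_cpu_pct": 75, "max_exec_minutes": 120}
```

A high-defect, junior profile working on complex code lands in the throttled branch, while a clean senior profile runs at normal priority.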
6,662 | 6,662 | 16,694,915 | 2,182 | A method and apparatus are provided for manufacturing integrated circuits performing invariant integer division x/d. A desired rounding mode is provided and an integer triple (a,b,k) for this rounding mode is derived. Furthermore, a set of conditions for the rounding mode is derived. An RTL representation is then derived using the integer triple. From this a hardware layout can be derived and an integrated circuit manufactured with the derived hardware layout. When the integer triple is derived a minimum value of k for the desired rounding mode and set of conditions is also derived. | 1. A computer-implemented method for generating a representation of an integrated circuit for performing integer division x/d where x is a variable integer and d is an invariant integer constant, the method comprising:
deriving, by a processing module, an integer triple (a,b,k) which satisfies Round(x/d)=(ax+b)/2^k for a desired rounding mode; and generating, by a processing module, the representation of the integrated circuit for manufacture of the integrated circuit, wherein the representation of the integrated circuit is generated using a representation of the logical operation (ax+b)/2^k in accordance with the derived integer triple (a,b,k). 2. The method of claim 1, wherein deriving the integer triple (a,b,k) comprises deriving an integer triple (a,b,k) which has a minimum value of k to satisfy Round(x/d)=(ax+b)/2^k for the desired rounding mode. 3. The method of claim 1, further comprising deriving, by a processing module, the representation of the logical operation (ax+b)/2^k using the derived integer triple (a,b,k). 4. The method of claim 3, wherein the representation of the logical operation (ax+b)/2^k is an RTL (Register Transfer Level) representation. 5. The method of claim 1, wherein the desired rounding mode is one of: (i) round towards zero, (ii) round to nearest, and (iii) faithful rounding. 6. The method of claim 1, wherein deriving the integer triple (a,b,k) comprises deriving a value for b from values of k and a in which the derived value of b has the smallest Hamming weight of the possible values of b satisfying Round(x/d)=(ax+b)/2^k. 7. The method of claim 1, further comprising manufacturing the integrated circuit from the representation of the integrated circuit. 8. The method of claim 1, wherein the derivation of k is different for different rounding modes. 9. An apparatus implemented in one or more processors and configured to generate a representation of an integrated circuit for performing integer division x/d where x is a variable integer and d is an invariant integer constant, the apparatus comprising:
a parameter creator configured to derive an integer triple (a,b,k) which satisfies Round(x/d)=(ax+b)/2^k for a desired rounding mode; and a synthesis tool configured to generate the representation of the integrated circuit from a representation of the logical operation (ax+b)/2^k in accordance with the derived integer triple (a,b,k) for manufacture of the integrated circuit. 10. The apparatus of claim 9, wherein the parameter creator is configured to derive an integer triple (a,b,k) which has a minimum value of k to satisfy Round(x/d)=(ax+b)/2^k for the desired rounding mode. 11. The apparatus of claim 9, wherein the parameter creator is configured to derive a value for b from values of a and k, the derived value of b having the smallest Hamming weight of the possible values of b satisfying Round(x/d)=(ax+b)/2^k. 12. The apparatus of claim 9, further comprising a representation generator configured to derive the representation of the logical operation (ax+b)/2^k using the derived integer triple (a,b,k). 13. The apparatus of claim 12, wherein the representation generator is an RTL (Register Transfer Level) generator, and the representation of the logical operation (ax+b)/2^k is an RTL representation. 14. The apparatus of claim 9, wherein the desired rounding mode is one of: (i) round towards zero, (ii) round to nearest, and (iii) faithful rounding. 15. The apparatus of claim 9, wherein the parameter creator is further configured to derive k differently for different rounding modes. 16. The apparatus of claim 9, wherein the parameter creator is further configured to receive n, d, and the desired rounding mode as inputs, wherein x is an n-bit number. 17. The apparatus of claim 9, wherein the parameter creator is configured to derive the integer triple (a,b,k) based on a set of conditions. 18. A non-transitory computer readable medium having stored thereon computer executable instructions that when executed cause at least one processor to:
derive an integer triple (a,b,k) which satisfies Round(x/d)=(ax+b)/2^k for a desired rounding mode where x is a variable integer and d is an invariant integer constant; and generate a representation of an integrated circuit from a representation of the logical operation (ax+b)/2^k in accordance with the derived integer triple (a,b,k) for manufacture of the integrated circuit. 19. The non-transitory computer readable medium of claim 18, wherein execution of the executable instructions cause the at least one processor to derive an integer triple (a,b,k) which has a minimum value of k to satisfy Round(x/d)=(ax+b)/2^k for the desired rounding mode. 20. A method of manufacturing, using an integrated circuit manufacturing system, an integrated circuit for performing integer division, the method comprising:
processing, using a layout processing system, a representation of an integrated circuit for performing integer division so as to generate a circuit layout description of an integrated circuit for performing integer division, the representation of the integrated circuit for performing integer division being generated in accordance with the method as set forth in claim 1; and manufacturing, using an integrated circuit generation system, the integrated circuit for performing integer division according to the circuit layout description. | 2,100 |
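The claims above recite deriving a triple (a,b,k) with Round(x/d)=(ax+b)/2^k and a minimum k, so that division by the invariant constant d becomes a multiply, an add, and a right shift. For the round-towards-zero mode this can be sketched by brute force; this is not the patent's derivation: only the two natural multipliers floor(2^k/d) and ceil(2^k/d) are tried, with exhaustive verification over every n-bit input:

```python
def exact_div_triple(d, n, k_max=20):
    """Find (a, b, k) with (a*x + b) >> k == x // d for every n-bit
    unsigned x, i.e. round-towards-zero division by the constant d."""
    for k in range(k_max + 1):
        # Candidate multipliers floor(2^k/d) and ceil(2^k/d); the k
        # returned is therefore minimal over those candidates only.
        for a in ((1 << k) // d, -(-(1 << k) // d)):
            for b in range(1 << k):
                # Exhaustive check over the whole n-bit input range.
                if all((a * x + b) >> k == x // d for x in range(1 << n)):
                    return a, b, k
    return None

print(exact_div_triple(3, 8))  # (85, 85, 8): x // 3 == (85*x + 85) >> 8 for 8-bit x
```

A production parameter creator would instead compute k from an error bound rather than verify exhaustively, but for small n the exhaustive check makes the correctness condition explicit.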
6,663 | 6,663 | 16,053,109 | 2,176 | An action associated with an event detected within a communication is provided. A communication service initiates operations to provide the action by processing a communication to detect an event related to a transaction between a recipient and vendor. An action template is located that matches an event type and the vendor. A vendor service is queried to find the action template. An action is generated by populating the action template with an attribute of the event such as a recipient identifier and/or a transaction identifier. The action is provided to the recipient to facilitate an interaction related to the event with the vendor service. | 1. A server to provide an action associated with an event detected within a communication, the server comprising:
a communication device; a memory configured to store instructions associated with a communication service; one or more processors coupled to the memory and the communication device, the one or more processors executing the communication service in conjunction with the instructions stored in the memory, wherein the one or more processors are configured to:
process the communication;
detect the event within the communication, wherein the event is related to a transaction between a recipient of the communication and a sender of the communication;
match at least one selected from a group consisting of an event type and a vendor to a plurality of action templates;
determine an action template corresponding to the transaction based on matching at least one selected from the group consisting of the event type and the vendor to the plurality of action templates; and
in response to a determination of the action template,
generate the action by selecting and populating the action template based on recipient information and information associated with the event included in the communication, and
provide, through the communication device, the action to the recipient as part of an interaction with the sender of the communication related to the event. 2. The server of claim 1, wherein the recipient information and the information associated with the event respectively include a recipient identifier and a transaction identifier. 3. The server of claim 2, wherein the one or more processors are further configured to:
insert the recipient identifier and the transaction identifier to corresponding positions in the action template. 4. The server of claim 1, wherein the action includes a hyperlink generated from a uniform resource locator (URL) of a webpage of the sender of the communication, and wherein the URL includes one or more of the event type, a recipient identifier, and a transaction identifier. 5. The server of claim 1, wherein the one or more processors are further configured to:
in response to a failure to determine the action template,
query a search service for a webpage of the sender of the communication,
locate the webpage of the sender of the communication based on the query,
analyze the webpage of the sender of the communication, and
identify the webpage as configured to accept an autofill input based on results of analyzing the webpage of the sender of the communication. 6. The server of claim 5, wherein the one or more processors are further configured to:
in response to querying the search service for the webpage of the sender of the communication, receive the webpage of the sender of the communication. 7. The server of claim 6, wherein the one or more processors are further configured to:
query the webpage of the sender of the communication for the action template; and receive the action template from the webpage of the sender of the communication. 8. The server of claim 5, wherein the one or more processors are further configured to:
provide, through the communication device, the webpage of the sender of the communication to a client application to be displayed to the recipient; and instruct, through the communication device, the client application for auto-fill of the webpage of the sender of the communication. 9. The server of claim 1, wherein the one or more processors are further configured to:
in response to a failure to determine the action template, query a search service for a webpage of the sender of the communication; receive the webpage of the sender of the communication; and analyze the webpage of the sender of the communication; identify the webpage as configured to reject an autofill input based on results of analyzing the webpage of the sender of the communication. 10. The server of claim 9, wherein the one or more processors are further configured to:
generate an event summary card, wherein the event summary card includes a description of the event, an attribute of the event, the event type, and the vendor; and provide, through the communication device, the webpage of the sender of the communication, the event summary card, and an alert to the recipient to a client application to be displayed to the recipient, wherein the alert includes instructions for filling in the webpage of the sender of the communication with one or more sections of the event summary card. 11. A method executed on a computing device to provide an action associated with an event detected within a communication, the method comprising:
during processing the communication, detecting the event within the communication, wherein the event is associated with one or more transactions; matching at least one selected from a group consisting of an event type and a vendor to a plurality of action templates; determining an action template corresponding to the one or more transactions based on matching at least one selected from the group consisting of the event type and the vendor to the plurality of action templates; and in response to a determination of the action template,
generating the action by selecting and populating the action template based on recipient information and information associated with the event included in the communication, and
providing the action to a recipient of the communication as part of an interaction with a sender of the communication related to the event. 12. The method of claim 11, wherein the action includes a status confirmation related to the one or more transactions or a status tracking related to the one or more transactions, the one or more transactions including a travel related purchase and a product related purchase. 13. The method of claim 11, further comprising:
identifying a time period to complete the event; detecting a start of the time period; and instructing a client application to display the action to the recipient at the start of the time period. 14. The method of claim 11, further comprising:
detecting a failure to determine the action template; querying a search service with the event type and the sender of the communication for a first webpage of the sender of the communication; and analyzing the first webpage located based on the querying; identifying a second webpage of the sender of the communication based on the analyzing; determining the action template at the second webpage. 15. The method of claim 14, further comprising:
mapping the recipient information and the information associated with the event included in the communication to the action template determined at the second webpage of the sender of the communication; automatically filling the action template determined at the second webpage with the recipient information and the information associated with the event included in the communication; and providing the second webpage to a client application to be displayed to the recipient of the communication. 16. The method of claim 11, further comprising
in response to a failure to determine the action template,
querying a search service for a webpage of the sender of the communication;
locating the webpage of the sender of the communication based on results of querying the search service;
analyzing the webpage of the sender of the communication; and
identifying the webpage as configured to accept an autofill input based on results of analyzing the webpage of the sender of the communication. 17. A computer-readable memory device with instructions stored thereon to provide an action associated with an event detected within a communication, the instructions comprising:
during processing the communication, detecting the event within the communication, wherein the event is associated with one or more transactions; matching at least one selected from a group consisting of an event type and a vendor to a plurality of action templates; determining an action template corresponding to the one or more transactions based on matching at least one selected from the group consisting of the event type and the vendor to the plurality of action templates; and in response to a determination of the action template,
generating the action by selecting and populating the action template based on recipient information and information associated with the event included in the communication, and
providing the action to a recipient of the communication as part of an interaction with a sender of the communication related to the event. 18. The computer-readable memory device of claim 17, wherein the instructions further comprise:
querying a search service for a webpage of the sender of the communication; identifying the webpage of the sender of the communication based on the querying of the search service; querying the webpage of the sender of the communication with the event type; and receiving the action template from the sender of the communication based on the querying of the webpage. 19. The computer-readable memory device of claim 17, wherein the instructions further comprise:
detecting a failure to determine the action template; and in response to detecting the failure to determine the action template,
querying a search service with the event type and the sender of the communication for a first webpage of the sender of the communication,
analyzing the first webpage located based on the querying,
identifying a second webpage of the sender of the communication, wherein the second webpage includes one or more components associated with the event,
mapping a recipient identifier and a transaction identifier to the one or more components,
automatically filling the one or more components with the recipient identifier and the transaction identifier, and
providing the second webpage with the one or more components as filled to a client application to be displayed to the recipient. 20. The computer-readable memory device of claim 17, wherein the instructions further comprise:
in response to a failure to determine the action template,
querying a search service for a webpage of the sender of the communication,
locating the webpage of the sender of the communication based on results of querying the search service,
analyzing the webpage of the sender of the communication, and
identifying the webpage as configured to accept an autofill input based on results of analyzing the webpage of the sender of the communication. | An action associated with an event detected within a communication is provided. A communication service initiates operations to provide the action by processing a communication to detect an event related to a transaction between a recipient and vendor. An action template is located that matches an event type and the vendor. A vendor service is queried to find the action template. An action is generated by populating the action template with an attribute of the event such as a recipient identifier and/or a transaction identifier. The action is provided to the recipient to facilitate an interaction related to the event with the vendor service.1. A server to provide an action associated with an event detected within a communication, the server comprising:
a communication device; a memory configured to store instructions associated with a communication service; one or more processors coupled to the memory and the communication device, the one or more processors executing the communication service in conjunction with the instructions stored in the memory, wherein the one or more processors are configured to:
process the communication;
detect the event within the communication, wherein the event is related to a transaction between a recipient of the communication and a sender of the communication;
match at least one selected from a group consisting of an event type and a vendor to a plurality of action templates;
determine an action template corresponding to the transaction based on matching at least one selected from the group consisting of the event type and the vendor to the plurality of action templates; and
in response to a determination of the action template,
generate the action by selecting and populating the action template based on recipient information and information associated with the event included in the communication, and
provide, through the communication device, the action to the recipient as part of an interaction with the sender of the communication related to the event. 2. The server of claim 1, wherein the recipient information and the information associated with the event respectively include a recipient identifier and a transaction identifier. 3. The server of claim 2, wherein the one or more processors are further configured to:
insert the recipient identifier and the transaction identifier to corresponding positions in the action template. 4. The server of claim 1, wherein the action includes a hyperlink generated from a uniform resource locator (URL) of a webpage of the sender of the communication, and wherein the URL includes one or more of the event type, a recipient identifier, and a transaction identifier. 5. The server of claim 1, wherein the one or more processors are further configured to:
in response to a failure to determine the action template,
query a search service for a webpage of the sender of the communication,
locate the webpage of the sender of the communication based on the query,
analyze the webpage of the sender of the communication, and
identify the webpage as configured to accept an autofill input based on results of analyzing the webpage of the sender of the communication. 6. The server of claim 5, wherein the one or more processors are further configured to:
in response to querying the search service for the webpage of the sender of the communication, receive the webpage of the sender of the communication. 7. The server of claim 6, wherein the one or more processors are further configured to:
query the webpage of the sender of the communication for the action template; and receive the action template from the webpage of the sender of the communication. 8. The server of claim 5, wherein the one or more processors are further configured to:
provide, through the communication device, the webpage of the sender of the communication to a client application to be displayed to the recipient; and instruct, through the communication device, the client application for auto-fill of the webpage of the sender of the communication. 9. The server of claim 1, wherein the one or more processors are further configured to:
in response to a failure to determine the action template, query a search service for a webpage of the sender of the communication; receive the webpage of the sender of the communication; analyze the webpage of the sender of the communication; and identify the webpage as configured to reject an autofill input based on results of analyzing the webpage of the sender of the communication. 10. The server of claim 9, wherein the one or more processors are further configured to:
generate an event summary card, wherein the event summary card includes a description of the event, an attribute of the event, the event type, and the vendor; and provide, through the communication device, the webpage of the sender of the communication, the event summary card, and an alert to the recipient to a client application to be displayed to the recipient, wherein the alert includes instructions for filling in the webpage of the sender of the communication with one or more sections of the event summary card. 11. A method executed on a computing device to provide an action associated with an event detected within a communication, the method comprising:
during processing the communication, detecting the event within the communication, wherein the event is associated with one or more transactions; matching at least one selected from a group consisting of an event type and a vendor to a plurality of action templates; determining an action template corresponding to the one or more transactions based on matching at least one selected from the group consisting of the event type and the vendor to the plurality of action templates; and in response to a determination of the action template,
generating the action by selecting and populating the action template based on recipient information and information associated with the event included in the communication, and
providing the action to a recipient of the communication as part of an interaction with a sender of the communication related to the event. 12. The method of claim 11, wherein the action includes a status confirmation related to the one or more transactions or a status tracking related to the one or more transactions, the one or more transactions including a travel related purchase and a product related purchase. 13. The method of claim 11, further comprising:
identifying a time period to complete the event; detecting a start of the time period; and instructing a client application to display the action to the recipient at the start of the time period. 14. The method of claim 11, further comprising:
detecting a failure to determine the action template; querying a search service with the event type and the sender of the communication for a first webpage of the sender of the communication; analyzing the first webpage located based on the querying; identifying a second webpage of the sender of the communication based on the analyzing; and determining the action template at the second webpage. 15. The method of claim 14, further comprising:
mapping the recipient information and the information associated with the event included in the communication to the action template determined at the second webpage of the sender of the communication; automatically filling the action template determined at the second webpage with the recipient information and the information associated with the event included in the communication; and providing the second webpage to a client application to be displayed to the recipient of the communication. 16. The method of claim 11, further comprising:
in response to a failure to determine the action template,
querying a search service for a webpage of the sender of the communication;
locating the webpage of the sender of the communication based on results of querying the search service;
analyzing the webpage of the sender of the communication; and
identifying the webpage as configured to accept an autofill input based on results of analyzing the webpage of the sender of the communication. 17. A computer-readable memory device with instructions stored thereon to provide an action associated with an event detected within a communication, the instructions comprising:
during processing the communication, detecting the event within the communication, wherein the event is associated with one or more transactions; matching at least one selected from a group consisting of an event type and a vendor to a plurality of action templates; determining an action template corresponding to the one or more transactions based on matching at least one selected from the group consisting of the event type and the vendor to the plurality of action templates; and in response to a determination of the action template,
generating the action by selecting and populating the action template based on recipient information and information associated with the event included in the communication, and
providing the action to a recipient of the communication as part of an interaction with a sender of the communication related to the event. 18. The computer-readable memory device of claim 17, wherein the instructions further comprise:
querying a search service for a webpage of the sender of the communication; identifying the webpage of the sender of the communication based on the querying of the search service; querying the webpage of the sender of the communication with the event type; and receiving the action template from the sender of the communication based on the querying of the webpage. 19. The computer-readable memory device of claim 17, wherein the instructions further comprise:
detecting a failure to determine the action template; and in response to detecting the failure to determine the action template,
querying a search service with the event type and the sender of the communication for a first webpage of the sender of the communication,
analyzing the first webpage located based on the querying,
identifying a second webpage of the sender of the communication, wherein the second webpage includes one or more components associated with the event,
mapping a recipient identifier and a transaction identifier to the one or more components,
automatically filling the one or more components with the recipient identifier and the transaction identifier, and
providing the second webpage with the one or more components as filled to a client application to be displayed to the recipient. 20. The computer-readable memory device of claim 17, wherein the instructions further comprise:
in response to a failure to determine the action template,
querying a search service for a webpage of the sender of the communication,
locating the webpage of the sender of the communication based on results of querying the search service,
analyzing the webpage of the sender of the communication, and
identifying the webpage as configured to accept an autofill input based on results of analyzing the webpage of the sender of the communication. | 2,100 |
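The template-matching flow recited in claims 1 and 11 above can be sketched in a few lines: match an event's type and vendor against stored action templates, then populate the match with a recipient identifier and a transaction identifier (claims 2-4). This is a hypothetical illustration; the template store, the URL format, and all names are invented, not taken from the patent.

```python
# Hypothetical sketch of the claimed flow: look up an action template by
# (event type, vendor), then populate it with recipient and transaction
# identifiers. Template contents and identifiers here are illustrative.

ACTION_TEMPLATES = {
    ("shipment", "ExampleVendor"):
        "https://vendor.example/track?rid={recipient_id}&txn={transaction_id}",
}

def generate_action(event, recipient_id):
    """Return a populated action, or None when no template matches
    (the failure case that claims 5 and 16 handle via a search service)."""
    template = ACTION_TEMPLATES.get((event["type"], event["vendor"]))
    if template is None:
        return None
    return template.format(recipient_id=recipient_id,
                           transaction_id=event["transaction_id"])

action = generate_action(
    {"type": "shipment", "vendor": "ExampleVendor", "transaction_id": "T42"},
    recipient_id="R7",
)
```

A hyperlink built this way matches claim 4: the URL carries the event type, recipient identifier, and transaction identifier as query parameters.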
6,664 | 6,664 | 16,240,956 | 2,192 | A method for allocating memory includes an operation that determines whether a prototype of a callee function is within a scope of a caller. The caller is a module containing a function call to the callee function. In addition, the method includes determining whether the function call includes one or more unnamed parameters when a prototype of the callee function is within the scope of the caller. Further, the method may include inserting instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters. | 1. A computer-implemented method for allocating memory, comprising:
determining that a prototype of a callee function is within a scope of a caller, the caller being a module containing a function call to the callee function; determining, in response to determining that the prototype of the callee function is within the scope of the caller, that the function call includes one or more parameters that cannot be passed in registers; and inserting instructions in the caller to allocate a register save area in a memory for the one or more parameters that cannot be passed in registers. 2. The method of claim 1, wherein the method further comprises inserting instructions in the caller to allocate a parameter overflow area in the memory. 3. The method of claim 2, wherein the inserting instructions is performed in response to determining the function call includes the one or more parameters that cannot be passed in registers. 4. The method of claim 1, wherein the memory is a stack frame of the caller. 5. The method of claim 1, wherein the one or more parameters that cannot be passed in registers are unnamed parameters. 6. The method of claim 5, wherein determining that the function call includes one or more parameters that cannot be passed in registers includes determining that the function call includes the unnamed parameters. 7. The method of claim 1, wherein the method is performed by a compiler. 8. A system for allocating memory, comprising:
a processor; and a memory to store a compiler and one or more modules, the compiler being comprised of instructions that when executed by the processor, cause the processor to perform a method comprising: determining that a prototype of a callee function is within a scope of a caller, the caller containing a function call to the callee function; determining, in response to determining that the prototype of the callee function is within the scope of the caller, that the function call includes one or more parameters that cannot be passed in registers; and inserting instructions in the caller to allocate a register save area in the memory for the one or more parameters that cannot be passed in registers. 9. The system of claim 8, wherein the method further comprises inserting instructions in the caller to allocate a parameter overflow area in the memory. 10. The system of claim 9, wherein the inserting instructions is performed in response to determining the function call includes the one or more parameters that cannot be passed in registers. 11. The system of claim 8, wherein the memory is a stack frame of the caller. 12. The system of claim 8, wherein the one or more parameters that cannot be passed in registers are unnamed parameters. 13. The system of claim 12, wherein determining that the function call includes one or more parameters that cannot be passed in registers includes determining that the function call includes the unnamed parameters. 14. A computer program product for allocating memory, comprising a computer readable storage medium having program instructions embodied therewith, the program instructions readable by a processor to cause the processor to:
determining that a prototype of a callee function is within a scope of a caller, the caller containing a function call to the callee function; determining, in response to determining that the prototype of the callee function is within the scope of the caller, that the function call includes one or more parameters that cannot be passed in registers; and inserting instructions in the caller to allocate a register save area in the memory for the one or more parameters that cannot be passed in registers. 15. The computer program product of claim 14, wherein the method further comprises inserting instructions in the caller to allocate a parameter overflow area in the memory. 16. The computer program product of claim 15, wherein the inserting instructions is performed in response to determining the function call includes the one or more parameters that cannot be passed in registers. 17. The computer program product of claim 14, wherein the memory is a stack frame of the caller. 18. The computer program product of claim 14, wherein the one or more parameters that cannot be passed in registers are unnamed parameters. 19. The computer program product of claim 18, wherein determining that the function call includes one or more parameters that cannot be passed in registers includes determining that the function call includes the unnamed parameters. | A method for allocating memory includes an operation that determines whether a prototype of a callee function is within a scope of a caller. The caller is a module containing a function call to the callee function. In addition, the method includes determining whether the function call includes one or more unnamed parameters when a prototype of the callee function is within the scope of the caller. Further, the method may include inserting instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters.1. 
A computer-implemented method for allocating memory, comprising:
determining that a prototype of a callee function is within a scope of a caller, the caller being a module containing a function call to the callee function; determining, in response to determining that the prototype of the callee function is within the scope of the caller, that the function call includes one or more parameters that cannot be passed in registers; and inserting instructions in the caller to allocate a register save area in a memory for the one or more parameters that cannot be passed in registers. 2. The method of claim 1, wherein the method further comprises inserting instructions in the caller to allocate a parameter overflow area in the memory. 3. The method of claim 2, wherein the inserting instructions is performed in response to determining the function call includes the one or more parameters that cannot be passed in registers. 4. The method of claim 1, wherein the memory is a stack frame of the caller. 5. The method of claim 1, wherein the one or more parameters that cannot be passed in registers are unnamed parameters. 6. The method of claim 5, wherein determining that the function call includes one or more parameters that cannot be passed in registers includes determining that the function call includes the unnamed parameters. 7. The method of claim 1, wherein the method is performed by a compiler. 8. A system for allocating memory, comprising:
a processor; and a memory to store a compiler and one or more modules, the compiler being comprised of instructions that when executed by the processor, cause the processor to perform a method comprising: determining that a prototype of a callee function is within a scope of a caller, the caller containing a function call to the callee function; determining, in response to determining that the prototype of the callee function is within the scope of the caller, that the function call includes one or more parameters that cannot be passed in registers; and inserting instructions in the caller to allocate a register save area in the memory for the one or more parameters that cannot be passed in registers. 9. The system of claim 8, wherein the method further comprises inserting instructions in the caller to allocate a parameter overflow area in the memory. 10. The system of claim 9, wherein the inserting instructions is performed in response to determining the function call includes the one or more parameters that cannot be passed in registers. 11. The system of claim 8, wherein the memory is a stack frame of the caller. 12. The system of claim 8, wherein the one or more parameters that cannot be passed in registers are unnamed parameters. 13. The system of claim 12, wherein determining that the function call includes one or more parameters that cannot be passed in registers includes determining that the function call includes the unnamed parameters. 14. A computer program product for allocating memory, comprising a computer readable storage medium having program instructions embodied therewith, the program instructions readable by a processor to cause the processor to:
determining that a prototype of a callee function is within a scope of a caller, the caller containing a function call to the callee function; determining, in response to determining that the prototype of the callee function is within the scope of the caller, that the function call includes one or more parameters that cannot be passed in registers; and inserting instructions in the caller to allocate a register save area in the memory for the one or more parameters that cannot be passed in registers. 15. The computer program product of claim 14, wherein the method further comprises inserting instructions in the caller to allocate a parameter overflow area in the memory. 16. The computer program product of claim 15, wherein the inserting instructions is performed in response to determining the function call includes the one or more parameters that cannot be passed in registers. 17. The computer program product of claim 14, wherein the memory is a stack frame of the caller. 18. The computer program product of claim 14, wherein the one or more parameters that cannot be passed in registers are unnamed parameters. 19. The computer program product of claim 18, wherein determining that the function call includes one or more parameters that cannot be passed in registers includes determining that the function call includes the unnamed parameters. | 2,100 |
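The decision recited in claims 1, 8, and 14 above, allocating a register save area only when the callee's prototype is in scope and the call passes parameters that cannot go in registers, can be modeled with a toy lowering pass. This is a hedged sketch, not the patented compiler: the pseudo-instruction names and the variadic test (argument count exceeding the named-parameter count) are invented for illustration.

```python
# Toy model of the claimed compiler decision. A real implementation would
# operate on compiler IR; here one call site is lowered to a list of
# pseudo-instruction strings. All names are illustrative assumptions.

def lower_call(prototype_in_scope, named_param_count, arg_count):
    """Emit pseudo-instructions for one call site."""
    code = []
    if prototype_in_scope and arg_count > named_param_count:
        # Unnamed (variadic) arguments are present: reserve a register save
        # area in the caller's stack frame (claims 1 and 4) and a parameter
        # overflow area (claim 2) before the call.
        code.append("alloc_register_save_area")
        code.append("alloc_parameter_overflow_area")
    code.append("call")
    return code

lower_call(True, 1, 3)   # variadic call: areas allocated before the call
lower_call(True, 2, 2)   # all parameters named: just the call
```

Skipping the allocation when no unnamed parameters are present is the point of the claims: the save area is only paid for by call sites that need it.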
6,665 | 6,665 | 15,812,932 | 2,136 | Systems and methods for intra-sector re-ordered wear leveling include: detecting, in a memory device, a high wear sub-sector having a high wear level, the sub-sector residing in a first sector; determining a second sector of the memory device having a low wear level; swapping the first sector with the second sector; and re-ordering a position of at least one sub-sector of the first sector, the second sector, or both. | 1. A memory device, comprising:
a memory array of sectors having one or more sub-sectors; wear-leveling logic configured to:
identify a target sector of the array of sectors having high wear, as determined based upon sub-sector information of one or more sub-sectors of the target sector; and
swap the target sector with a min sector of the memory device having a low wear level, as determined based upon sub-sector information of one or more sub-sectors of the min sector. 2. The memory device of claim 1, wherein the wear-leveling logic is configured to: re-order a position of at least one sub-sector of the target sector, the min sector, or both. 3. The memory device of claim 2, wherein the wear-leveling logic is configured to:
re-order the position of the at least one sub-sector of the target sector prior to swapping the target sector with the min sector; re-order the position of the at least one sub-sector of the min sector after swapping the target sector with the min sector; or both. 4. The memory device of claim 2, wherein the wear-leveling logic is configured to:
re-order the position of a high wear sub-sector by swapping the position of the high wear sub-sector. 5. The memory device of claim 1, wherein:
the sub-sector information of one or more sub-sectors of the target sector comprises a highest cycle count of a sub-sector of the target sector, a sum of sub-sector cycle counts for all sub-sectors of the target sector, an average of sub-sector cycle counts for all the sub-sectors of the target sector, or any combination thereof; the sub-sector information of one or more sub-sectors of the min sector comprises a lowest cycle count of a sub-sector of the min sector, a sum of sub-sector cycle counts for all sub-sectors of the min sector, an average of sub-sector cycle counts for all the sub-sectors of the min sector, or any combination thereof; or both. 6. The memory device of claim 1, wherein the wear-leveling logic is configured to:
swap a position of a first bundle of two or more sub-sectors containing a high wear sub-sector with a position of a second bundle of two or more sub-sectors containing a low wear sub-sector. 7. The memory device of claim 1, wherein the memory device comprises a NAND type flash memory device, a NOR type flash memory device, or both. 8. The memory device of claim 1, comprising:
rescrambling logic configured to modify one or more intra-sector addresses of the memory array to re-order the position of the at least one sub-sector of the target sector, the min sector, or both. 9. The memory device of claim 8, comprising:
a block mapping unit configured to re-map one or more sectors, one or more sub-sectors, or both, of the memory array, wherein the block mapping unit is configured to provide configuration bits to the rescrambling logic to enable the modification of the one or more intra-sector addresses. 10. The memory device of claim 9, wherein the rescrambling logic comprises a multiplexer (MUX);
wherein the configuration bits comprise a plurality of sets of bits, each set of bits representing an address modification; and wherein the MUX is configured to select one of the plurality of sets of bits, the selection resulting in an address modification represented by the selected one of the plurality of sets of bits. 11. The memory device of claim 9, wherein the rescrambling logic comprises combinational logic;
wherein the configuration bits comprise a single set of configuration bits; and wherein the combinational logic effects a modification to an intra-sector address using the single set of configuration bits. 12. A method, comprising:
identifying a target sector of an array of sectors of a memory device that has high wear, as determined based upon sub-sector information of one or more sub-sectors of the target sector; and swapping the target sector with a min sector of the memory device having a low wear level, as determined based upon the sub-sector information of the one or more sub-sectors of the min sector. 13. The method of claim 12, wherein:
the sub-sector information of one or more sub-sectors of the target sector comprises a highest cycle count of a sub-sector of the target sector, a sum of sub-sector cycle counts for all sub-sectors of the target sector, an average of sub-sector cycle counts for all the sub-sectors of the target sector, or any combination thereof; the sub-sector information of the one or more sub-sectors of the min sector comprises a lowest cycle count of a sub-sector of the min sector, a sum of sub-sector cycle counts for all sub-sectors of the min sector, an average of sub-sector cycle counts for all the sub-sectors of the min sector, or any combination thereof; or both. 14. The method of claim 12, comprising:
determining a minimum sub-sector of the min sector having a minimum wear level; and re-ordering a position of a high wear sub-sector of the target sector by swapping the position of the high wear sub-sector with a position of the minimum sub-sector. 15. The method of claim 12, comprising:
swapping a position of a first bundle of two or more sub-sectors containing a high wear sub-sector with a position of a second bundle of two or more sub-sectors containing a low wear sub-sector. 16. The method of claim 12, comprising:
modifying one or more intra-sector addresses of the memory array to re-order a position of at least one sub-sector of the target sector, the min sector, or both. 17. The method of claim 16, comprising modifying the one or more intra-sector addresses of the memory array using:
a multiplexer; combinational logic comprising summation logic, exclusive or (XOR) logic, or both; or both. 18. A memory controller, comprising:
circuitry configured to:
identify a target sector of an array of sectors of a memory device having high wear, as determined based upon sub-sector information of one or more sub-sectors of the target sector; and
swap the target sector with a min sector of the memory device having a low wear level, as determined based upon sub-sector information of one or more sub-sectors of the min sector. 19. The memory controller of claim 18, wherein the circuitry is configured to re-order a position of at least one sub-sector while swapping the target sector with the min sector. 20. The memory controller of claim 18, wherein:
the sub-sector information of one or more sub-sectors of the target sector comprises a highest cycle count of a sub-sector of the target sector, a sum of sub-sector cycle counts for all sub-sectors of the target sector, an average of sub-sector cycle counts for all the sub-sectors of the target sector, or any combination thereof; the sub-sector information of one or more sub-sectors of the min sector comprises a lowest cycle count of a sub-sector of the min sector, a sum of sub-sector cycle counts for all sub-sectors of the min sector, an average of sub-sector cycle counts for all the sub-sectors of the min sector, or any combination thereof; or both. | Systems and methods for intra-sector re-ordered wear leveling include: detecting, in a memory device, a high wear sub-sector having a high wear level, the sub-sector residing in a first sector; determining a second sector of the memory device having a low wear level; swapping the first sector with the second sector; and re-ordering a position of at least one sub-sector of the first sector, the second sector, or both.1. A memory device, comprising:
a memory array of sectors having one or more sub-sectors; wear-leveling logic configured to:
identify a target sector of the array of sectors having high wear, as determined based upon sub-sector information of one or more sub-sectors of the target sector; and
swap the target sector with a min sector of the memory device having a low wear level, as determined based upon sub-sector information of one or more sub-sectors of the min sector. 2. The memory device of claim 1, wherein the wear-leveling logic is configured to: re-order a position of at least one sub-sector of the target sector, the min sector, or both. 3. The memory device of claim 2, wherein the wear-leveling logic is configured to:
re-order the position of the at least one sub-sector of the target sector prior to swapping the target sector with the min sector; re-order the position of the at least one sub-sector of the min sector after swapping the target sector with the min sector; or both. 4. The memory device of claim 2, wherein the wear-leveling logic is configured to:
re-order the position of a high wear sub-sector by swapping the position of the high wear sub-sector. 5. The memory device of claim 1, wherein:
the sub-sector information of one or more sub-sectors of the target sector comprises a highest cycle count of a sub-sector of the target sector, a sum of sub-sector cycle counts for all sub-sectors of the target sector, an average of sub-sector cycle counts for all the sub-sectors of the target sector, or any combination thereof; the sub-sector information of one or more sub-sectors of the min sector comprises a lowest cycle count of a sub-sector of the min sector, a sum of sub-sector cycle counts for all sub-sectors of the min sector, an average of sub-sector cycle counts for all the sub-sectors of the min sector, or any combination thereof; or both. 6. The memory device of claim 1, wherein the wear-leveling logic is configured to:
swap a position of a first bundle of two or more sub-sectors containing a high wear sub-sector with a position of a second bundle of two or more sub-sectors containing a low wear sub-sector. 7. The memory device of claim 1, wherein the memory device comprises a NAND type flash memory device, a NOR type flash memory device, or both. 8. The memory device of claim 1, comprising:
rescrambling logic configured to modify one or more intra-sector addresses of the memory array to re-order the position of the at least one sub-sector of the target sector, the min sector, or both. 9. The memory device of claim 8, comprising:
a block mapping unit configured to re-map one or more sectors, one or more sub-sectors, or both, of the memory array, wherein the block mapping unit is configured to provide configuration bits to the rescrambling logic to enable the modification of the one or more intra-sector addresses. 10. The memory device of claim 9, wherein the rescrambling logic comprises a multiplexer (MUX);
wherein the configuration bits comprise a plurality of sets of bits, each set of bits representing an address modification; and wherein the MUX is configured to select one of the plurality of sets of bits, the selection resulting in an address modification represented by the selected one of the plurality of sets of bits. 11. The memory device of claim 9, wherein the rescrambling logic comprises combinational logic;
wherein the configuration bits comprise a single set of configuration bits; and wherein the combinational logic effects a modification to an intra-sector address using the single set of configuration bits. 12. A method, comprising:
identifying a target sector of an array of sectors of a memory device that has high wear, as determined based upon sub-sector information of one or more sub-sectors of the target sector; and swapping the target sector with a min sector of the memory device having a low wear level, as determined based upon the sub-sector information of the one or more sub-sectors of the min sector. 13. The method of claim 12, wherein:
the sub-sector information of one or more sub-sectors of the target sector comprises a highest cycle count of a sub-sector of the target sector, a sum of sub-sector cycle counts for all sub-sectors of the target sector, an average of sub-sector cycle counts for all the sub-sectors of the target sector, or any combination thereof; the sub-sector information of the one or more sub-sectors of the min sector comprises a lowest cycle count of a sub-sector of the min sector, a sum of sub-sector cycle counts for all sub-sectors of the min sector, an average of sub-sector cycle counts for all the sub-sectors of the min sector, or any combination thereof; or both. 14. The method of claim 12, comprising:
determining a minimum sub-sector of the min sector having a minimum wear level; and re-ordering a position of a high wear sub-sector of the target sector by swapping the position of the high wear sub-sector with a position of the minimum sub-sector. 15. The method of claim 12, comprising:
swapping a position of a first bundle of two or more sub-sectors containing a high wear sub-sector with a position of a second bundle of two or more sub-sectors containing a low wear sub-sector. 16. The method of claim 12, comprising:
modifying one or more intra-sector addresses of the memory array to re-order a position of at least one sub-sector of the target sector, the min sector, or both. 17. The method of claim 16, comprising modifying the one or more intra-sector addresses of the memory array using:
a multiplexer; combinational logic comprising summation logic, exclusive or (XOR) logic, or both; or both. 18. A memory controller, comprising:
circuitry configured to:
identify a target sector of an array of sectors of a memory device having high wear, as determined based upon sub-sector information of one or more sub-sectors of the target sector; and
swap the target sector with a min sector of the memory device having a low wear level, as determined based upon sub-sector information of one or more sub-sectors of the min sector. 19. The memory controller of claim 18, wherein the circuitry is configured to re-order a position of at least one sub-sector while swapping the target sector with the min sector. 20. The memory controller of claim 18, wherein:
the sub-sector information of one or more sub-sectors of the target sector comprises a highest cycle count of a sub-sector of the target sector, a sum of sub-sector cycle counts for all sub-sectors of the target sector, an average of sub-sector cycle counts for all the sub-sectors of the target sector, or any combination thereof; the sub-sector information of one or more sub-sectors of the min sector comprises a lowest cycle count of a sub-sector of the min sector, a sum of sub-sector cycle counts for all sub-sectors of the min sector, an average of sub-sector cycle counts for all the sub-sectors of the min sector, or any combination thereof; or both. | 2,100 |
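The wear-leveling claims above rank sectors by sub-sector cycle counts and re-order sub-sector positions through address modification. The sketch below is illustrative only, not the patented implementation: the sector identifiers, dictionary layout, and 4-bit intra-sector address width are assumptions, and a real controller would realize the address modification in a MUX or combinational logic rather than software.

```python
def sector_wear(cycle_counts):
    """Summarize sub-sector wear for one sector. The claims allow the
    highest cycle count, the sum, the average, or any combination of
    these as the ranking information."""
    return {"highest": max(cycle_counts),
            "sum": sum(cycle_counts),
            "average": sum(cycle_counts) / len(cycle_counts)}


def pick_swap_pair(sectors, key="highest"):
    """Identify the high-wear target sector and the low-wear min sector
    to be swapped, using one of the metrics above."""
    target = max(sectors, key=lambda s: sector_wear(sectors[s])[key])
    low = min(sectors, key=lambda s: sector_wear(sectors[s])[key])
    return target, low


def remap_intra_sector_address(addr, xor_bits, offset=0, addr_width=4):
    """Model of the combinational-logic option: XOR configuration bits
    re-order sub-sector positions (an involution, so applying the same
    mask twice restores the address), and an optional summation offset
    rotates them, modulo the number of sub-sectors per sector."""
    return ((addr ^ xor_bits) + offset) % (1 << addr_width)
```

Because XOR with a fixed mask is its own inverse, the remapping stays reversible, which is what lets the device swap sub-sector positions without losing track of the original layout.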
6,666 | 6,666 | 15,288,023 | 2,159 | An apparatus, computer-readable medium, and computer-implemented method for data subsetting, including receiving a request comprising a criterion indicating a criterion table in a plurality of tables of a database, a schema of the database corresponding to an entity graph, the entity graph comprising a plurality of entities corresponding to the plurality of tables and a plurality of directed edges connecting the plurality of entities, determining directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from a criterion entity corresponding to the criterion table, generating an ordered list of edges for the entity graph based on the directed edges that must be traversed in both directions and topological ordering, and generating a subset of data from the plurality of tables based on the ordered list of edges for the entity graph and the request. | 1. A method executed by one or more computing devices for extracting a subset of data from a database, the method comprising:
receiving, by at least one of the one or more computing devices, a request comprising at least one criterion indicating a criterion table in a plurality of tables of the database, wherein a schema of the database corresponds to an entity graph, the entity graph comprising a plurality of entities corresponding to the plurality of tables and a plurality of directed edges connecting the plurality of entities; determining, by at least one of the one or more computing devices, one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from a criterion entity corresponding to the criterion table; generating, by at least one of the one or more computing devices, an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph; and generating, by at least one of the one or more computing devices, the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request. 2. The method of claim 1, further comprising:
determining, by at least one of the one or more computing devices, whether the entity graph includes one or more cycles, wherein a cycle comprises a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; condensing, by at least one of the one or more computing devices, the entity graph to remove the one or more cycles based at least in part on a determination that the entity graph includes one or more cycles. 3. The method of claim 2, wherein condensing the entity graph to remove the cycle comprises, for each cycle in the one or more cycles:
combining all entities in the cyclical sequence of entities to generate a combined entity corresponding to the cycle; and for each entity in the cyclical sequence of entities, adding any directed edge connecting that entity to an entity outside the cyclical sequence of entities as a directed edge connecting the combined entity to the entity outside the cyclical sequence of entities, unless the directed edge connecting the combined entity to the entity outside the cyclical sequence of entities already exists. 4. The method of claim 1, wherein each directed edge in the plurality of directed edges runs from a child entity corresponding to a table in the plurality of tables to a parent entity corresponding to a table in the plurality of tables. 5. The method of claim 1, wherein the one or more directed edges correspond to a minimum quantity of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 6. The method of claim 1, wherein determining one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity comprises:
generating an expanded entity graph by adding a plurality of opposite directed edges to the entity graph, wherein the plurality of opposite directed edges correspond to the plurality of directed edges and wherein each opposite directed edge in the plurality of opposite directed edges runs in a direction opposite to that of a corresponding directed edge in the plurality of directed edges; assigning a first weight to each directed edge in the plurality of directed edges in the expanded entity graph and a second weight to each opposite directed edge in the plurality of opposite directed edges in the expanded entity graph; determining a minimum spanning arborescence for the expanded entity graph starting at the criterion entity; identifying one or more opposite directed edges in the plurality of opposite directed edges which are part of the minimum spanning arborescence; and designating, in the entity graph, one or more directed edges which correspond to the identified one or more opposite directed edges in the expanded entity graph as edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 7. The method of claim 6, wherein the second weight is greater than a combined weight assigned to all directed edges in the plurality of directed edges. 8. The method of claim 1, wherein generating an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph comprises:
adding the criterion entity to a list of discovered entities; starting from the criterion entity, traversing the entity graph over one or more iterations and adding traversed edges to the ordered list of edges for the entity graph until the one or more directed edges that must be traversed in both directions have been traversed in a parent to child direction, wherein each iteration in the one or more iterations comprises:
a first phase traversing, in a parent to child direction, at least one directed edge in the one or more directed edges that must be traversed in both directions, wherein the at least one directed edge has not already been traversed in a parent to child direction and wherein the at least one directed edge has a parent entity in the list of discovered entities; and
a second phase traversing, in a child to parent direction, any directed edges connected to any entities in the list of discovered entities, wherein the second phase adds directed edges to the ordered list of edges for the entity graph based at least in part on a topological ordering of the entities in the list of discovered entities. 9. The method of claim 8, wherein the first phase comprises:
iteratively traversing, in a parent to child direction, any directed edges in the one or more directed edges that must be traversed in both directions which have not already been traversed in a parent to child direction and which have a parent entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each child entity of the traversed edge to the list of discovered entities; and adding the traversed directed edges to the ordered list of edges in the order of traversal. 10. The method of claim 8, wherein the second phase comprises:
iteratively traversing, in a child to parent direction, any directed edges which have a child entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each parent entity of the traversed edge to the list of discovered entities; and adding any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities. 11. The method of claim 10, wherein adding any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities comprises:
adding any traversed directed edges which have a child entity with a lower topological sort position in the list of discovered entities prior to adding any traversed directed edges which have a child entity with a higher topological sort position. 12. The method of claim 10, wherein the second phase further comprises:
determining whether an entity in the list of discovered entities is a combined entity corresponding to a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and adding the cyclical sequence of one or more directed edges to the ordered list of edges for the entity graph along with a loop indicator based at least in part on a topological ordering of the entities in the list of discovered entities. 13. The method of claim 1, wherein generating the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request comprises:
executing a database command corresponding to the request on the database to mark one or more records in the criterion table for selection in the subset of data; generating a list of database commands corresponding to the ordered list of edges, wherein each database command in the list of database commands corresponds to an edge in the ordered list of edges and wherein each database command references two tables in the plurality of tables corresponding to two entities specified by each edge; and executing the list of database commands on the database to mark one or more additional records in the remaining tables of the plurality of tables for selection in the subset of data. 14. An apparatus for extracting a subset of data from a database, the apparatus comprising:
one or more processors; and one or more memories operatively coupled to at least one of the one or more processors and having instructions stored thereon that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to:
receive a request comprising at least one criterion indicating a criterion table in a plurality of tables of the database, wherein a schema of the database corresponds to an entity graph, the entity graph comprising a plurality of entities corresponding to the plurality of tables and a plurality of directed edges connecting the plurality of entities;
determine one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from a criterion entity corresponding to the criterion table;
generate an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph; and
generate the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request. 15. The apparatus of claim 14, wherein at least one of the one or more memories has further instructions stored thereon that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to:
determine whether the entity graph includes one or more cycles, wherein a cycle comprises a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and condense the entity graph to remove the one or more cycles based at least in part on a determination that the entity graph includes one or more cycles. 16. The apparatus of claim 15, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to condense the entity graph to remove the cycle further cause at least one of the one or more processors to:
combine all entities in the cyclical sequence of entities to generate a combined entity corresponding to the cycle; and for each entity in the cyclical sequence of entities, add any directed edge connecting that entity to an entity outside the cyclical sequence of entities as a directed edge connecting the combined entity to the entity outside the cyclical sequence of entities, unless the directed edge connecting the combined entity to the entity outside the cyclical sequence of entities already exists. 17. The apparatus of claim 14, wherein each directed edge in the plurality of directed edges runs from a child entity corresponding to a table in the plurality of tables to a parent entity corresponding to a table in the plurality of tables. 18. The apparatus of claim 14, wherein the one or more directed edges correspond to a minimum quantity of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 19. The apparatus of claim 14, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to determine one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity further cause at least one of the one or more processors to:
generate an expanded entity graph by adding a plurality of opposite directed edges to the entity graph, wherein the plurality of opposite directed edges correspond to the plurality of directed edges and wherein each opposite directed edge in the plurality of opposite directed edges runs in a direction opposite to that of a corresponding directed edge in the plurality of directed edges; assign a first weight to each directed edge in the plurality of directed edges in the expanded entity graph and a second weight to each opposite directed edge in the plurality of opposite directed edges in the expanded entity graph; determine a minimum spanning arborescence for the expanded entity graph starting at the criterion entity; identify one or more opposite directed edges in the plurality of opposite directed edges which are part of the minimum spanning arborescence; and designate, in the entity graph, one or more directed edges which correspond to the identified one or more opposite directed edges in the expanded entity graph as edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 20. The apparatus of claim 19, wherein the second weight is greater than a combined weight assigned to all directed edges in the plurality of directed edges. 21. The apparatus of claim 14, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to generate an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph further cause at least one of the one or more processors to:
add the criterion entity to a list of discovered entities; starting from the criterion entity, traverse the entity graph over one or more iterations and add traversed edges to the ordered list of edges for the entity graph until the one or more directed edges that must be traversed in both directions have been traversed in a parent to child direction, wherein each iteration in the one or more iterations comprises:
a first phase traversing, in a parent to child direction, at least one directed edge in the one or more directed edges that must be traversed in both directions, wherein the at least one directed edge has not already been traversed in a parent to child direction and wherein the at least one directed edge has a parent entity in the list of discovered entities; and
a second phase traversing, in a child to parent direction, any directed edges connected to any entities in the list of discovered entities, wherein the second phase adds directed edges to the ordered list of edges for the entity graph based at least in part on a topological ordering of the entities in the list of discovered entities. 22. The apparatus of claim 21, wherein the first phase comprises:
iteratively traversing, in a parent to child direction, any directed edges in the one or more directed edges that must be traversed in both directions which have not already been traversed in a parent to child direction and which have a parent entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each child entity of the traversed edge to the list of discovered entities; and adding the traversed directed edges to the ordered list of edges in the order of traversal. 23. The apparatus of claim 21, wherein the second phase comprises:
iteratively traversing, in a child to parent direction, any directed edges which have a child entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each parent entity of the traversed edge to the list of discovered entities; and adding any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities. 24. The apparatus of claim 23, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to add any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities further cause at least one of the one or more processors to:
add any traversed directed edges which have a child entity with a lower topological sort position in the list of discovered entities prior to adding any traversed directed edges which have a child entity with a higher topological sort position. 25. The apparatus of claim 23, wherein the second phase further comprises:
determining whether an entity in the list of discovered entities is a combined entity corresponding to a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and adding the cyclical sequence of one or more directed edges to the ordered list of edges for the entity graph along with a loop indicator based at least in part on a topological ordering of the entities in the list of discovered entities. 26. The apparatus of claim 14, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to generate the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request further cause at least one of the one or more processors to:
execute a database command corresponding to the request on the database to mark one or more records in the criterion table for selection in the subset of data; generate a list of database commands corresponding to the ordered list of edges, wherein each database command in the list of database commands corresponds to an edge in the ordered list of edges and wherein each database command references two tables in the plurality of tables corresponding to two entities specified by each edge; and execute the list of database commands on the database to mark one or more additional records in the remaining tables of the plurality of tables for selection in the subset of data. 27. At least one non-transitory computer-readable medium storing computer-readable instructions that, when executed by one or more computing devices, cause at least one of the one or more computing devices to:
receive a request comprising at least one criterion indicating a criterion table in a plurality of tables of the database, wherein a schema of the database corresponds to an entity graph, the entity graph comprising a plurality of entities corresponding to the plurality of tables and a plurality of directed edges connecting the plurality of entities; determine one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from a criterion entity corresponding to the criterion table; generate an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph; and generate the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request. 28. The at least one non-transitory computer-readable medium of claim 27, further storing computer-readable instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to:
determine whether the entity graph includes one or more cycles, wherein a cycle comprises a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and condense the entity graph to remove the one or more cycles based at least in part on a determination that the entity graph includes one or more cycles. 29. The at least one non-transitory computer-readable medium of claim 28, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to condense the entity graph to remove the cycle further cause at least one of the one or more computing devices to:
combine all entities in the cyclical sequence of entities to generate a combined entity corresponding to the cycle; and for each entity in the cyclical sequence of entities, add any directed edge connecting that entity to an entity outside the cyclical sequence of entities as a directed edge connecting the combined entity to the entity outside the cyclical sequence of entities, unless the directed edge connecting the combined entity to the entity outside the cyclical sequence of entities already exists. 30. The at least one non-transitory computer-readable medium of claim 27, wherein each directed edge in the plurality of directed edges runs from a child entity corresponding to a table in the plurality of tables to a parent entity corresponding to a table in the plurality of tables. 31. The at least one non-transitory computer-readable medium of claim 27, wherein the one or more directed edges correspond to a minimum quantity of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 32. The at least one non-transitory computer-readable medium of claim 27, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to determine one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity further cause at least one of the one or more computing devices to:
generate an expanded entity graph by adding a plurality of opposite directed edges to the entity graph, wherein the plurality of opposite directed edges correspond to the plurality of directed edges and wherein each opposite directed edge in the plurality of opposite directed edges runs in a direction opposite to that of a corresponding directed edge in the plurality of directed edges; assign a first weight to each directed edge in the plurality of directed edges in the expanded entity graph and a second weight to each opposite directed edge in the plurality of opposite directed edges in the expanded entity graph; determine a minimum spanning arborescence for the expanded entity graph starting at the criterion entity; identify one or more opposite directed edges in the plurality of opposite directed edges which are part of the minimum spanning arborescence; and designate, in the entity graph, one or more directed edges which correspond to the identified one or more opposite directed edges in the expanded entity graph as edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 33. The at least one non-transitory computer-readable medium of claim 32, wherein the second weight is greater than a combined weight assigned to all directed edges in the plurality of directed edges. 34. The at least one non-transitory computer-readable medium of claim 27, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to generate an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph further cause at least one of the one or more computing devices to:
add the criterion entity to a list of discovered entities; starting from the criterion entity, traverse the entity graph over one or more iterations and add traversed edges to the ordered list of edges for the entity graph until the one or more directed edges that must be traversed in both directions have been traversed in a parent to child direction, wherein each iteration in the one or more iterations comprises:
a first phase traversing, in a parent to child direction, at least one directed edge in the one or more directed edges that must be traversed in both directions, wherein the at least one directed edge has not already been traversed in a parent to child direction and wherein the at least one directed edge has a parent entity in the list of discovered entities; and
a second phase traversing, in a child to parent direction, any directed edges connected to any entities in the list of discovered entities, wherein the second phase adds directed edges to the ordered list of edges for the entity graph based at least in part on a topological ordering of the entities in the list of discovered entities. 35. The at least one non-transitory computer-readable medium of claim 34, wherein the first phase comprises:
iteratively traversing, in a parent to child direction, any directed edges in the one or more directed edges that must be traversed in both directions which have not already been traversed in a parent to child direction and which have a parent entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each child entity of the traversed edge to the list of discovered entities; and adding the traversed directed edges to the ordered list of edges in the order of traversal. 36. The at least one non-transitory computer-readable medium of claim 34, wherein the second phase comprises:
iteratively traversing, in a child to parent direction, any directed edges which have a child entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each parent entity of the traversed edge to the list of discovered entities; and adding any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities. 37. The at least one non-transitory computer-readable medium of claim 36, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to add any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities further cause at least one of the one or more computing devices to:
add any traversed directed edges which have a child entity with a lower topological sort position in the list of discovered entities prior to adding any traversed directed edges which have a child entity with a higher topological sort position. 38. The at least one non-transitory computer-readable medium of claim 36, wherein the second phase further comprises:
determining whether an entity in the list of discovered entities is a combined entity corresponding to a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and adding the cyclical sequence of one or more directed edges to the ordered list of edges for the entity graph along with a loop indicator based at least in part on a topological ordering of the entities in the list of discovered entities. 39. The at least one non-transitory computer-readable medium of claim 27, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to generate the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request further cause at least one of the one or more computing devices to:
execute a database command corresponding to the request on the database to mark one or more records in the criterion table for selection in the subset of data; generate a list of database commands corresponding to the ordered list of edges, wherein each database command in the list of database commands corresponds to an edge in the ordered list of edges and wherein each database command references two tables in the plurality of tables corresponding to two entities specified by each edge; and execute the list of database commands on the database to mark one or more additional records in the remaining tables of the plurality of tables for selection in the subset of data. | An apparatus, computer-readable medium, and computer-implemented method for data subsetting, including receiving a request comprising a criterion indicating a criterion table in a plurality of tables of a database, a schema of the database corresponding to an entity graph, the entity graph comprising a plurality of entities corresponding to the plurality of tables and a plurality of directed edges connecting the plurality of entities, determining directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from a criterion entity corresponding to the criterion table, generating an ordered list of edges for the entity graph based on the directed edges that must be traversed in both directions and topological ordering, and generating a subset of data from the plurality of tables based on the ordered list of edges for the entity graph and the request. 1. A method executed by one or more computing devices for extracting a subset of data from a database, the method comprising:
receiving, by at least one of the one or more computing devices, a request comprising at least one criterion indicating a criterion table in a plurality of tables of the database, wherein a schema of the database corresponds to an entity graph, the entity graph comprising a plurality of entities corresponding to the plurality of tables and a plurality of directed edges connecting the plurality of entities; determining, by at least one of the one or more computing devices, one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from a criterion entity corresponding to the criterion table; generating, by at least one of the one or more computing devices, an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph; and generating, by at least one of the one or more computing devices, the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request. 2. The method of claim 1, further comprising:
determining, by at least one of the one or more computing devices, whether the entity graph includes one or more cycles, wherein a cycle comprises a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and condensing, by at least one of the one or more computing devices, the entity graph to remove the one or more cycles based at least in part on a determination that the entity graph includes one or more cycles. 3. The method of claim 2, wherein condensing the entity graph to remove the cycle comprises, for each cycle in the one or more cycles:
combining all entities in the cyclical sequence of entities to generate a combined entity corresponding to the cycle; and for each entity in the cyclical sequence of entities, adding any directed edge connecting that entity to an entity outside the cyclical sequence of entities as a directed edge connecting the combined entity to the entity outside the cyclical sequence of entities, unless the directed edge connecting the combined entity to the entity outside the cyclical sequence of entities already exists. 4. The method of claim 1, wherein each directed edge in the plurality of directed edges runs from a child entity corresponding to a table in the plurality of tables to a parent entity corresponding to a table in the plurality of tables. 5. The method of claim 1, wherein the one or more directed edges correspond to a minimum quantity of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 6. The method of claim 1, wherein determining one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity comprises:
generating an expanded entity graph by adding a plurality of opposite directed edges to the entity graph, wherein the plurality of opposite directed edges correspond to the plurality of directed edges and wherein each opposite directed edge in the plurality of opposite directed edges runs in a direction opposite to that of a corresponding directed edge in the plurality of directed edges; assigning a first weight to each directed edge in the plurality of directed edges in the expanded entity graph and a second weight to each opposite directed edge in the plurality of opposite directed edges in the expanded entity graph; determining a minimum spanning arborescence for the expanded entity graph starting at the criterion entity; identifying one or more opposite directed edges in the plurality of opposite directed edges which are part of the minimum spanning arborescence; and designating, in the entity graph, one or more directed edges which correspond to the identified one or more opposite directed edges in the expanded entity graph as edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 7. The method of claim 6, wherein the second weight is greater than a combined weight assigned to all directed edges in the plurality of directed edges. 8. The method of claim 1, wherein generating an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph comprises:
adding the criterion entity to a list of discovered entities; starting from the criterion entity, traversing the entity graph over one or more iterations and adding traversed edges to the ordered list of edges for the entity graph until the one or more directed edges that must be traversed in both directions have been traversed in a parent to child direction, wherein each iteration in the one or more iterations comprises:
a first phase traversing, in a parent to child direction, at least one directed edge in the one or more directed edges that must be traversed in both directions, wherein the at least one directed edge has not already been traversed in a parent to child direction and wherein the at least one directed edge has a parent entity in the list of discovered entities; and
a second phase traversing, in a child to parent direction, any directed edges connected to any entities in the list of discovered entities, wherein the second phase adds directed edges to the ordered list of edges for the entity graph based at least in part on a topological ordering of the entities in the list of discovered entities. 9. The method of claim 8, wherein the first phase comprises:
iteratively traversing, in a parent to child direction, any directed edges in the one or more directed edges that must be traversed in both directions which have not already been traversed in a parent to child direction and which have a parent entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each child entity of the traversed edge to the list of discovered entities; and adding the traversed directed edges to the ordered list of edges in the order of traversal. 10. The method of claim 8, wherein the second phase comprises:
iteratively traversing, in a child to parent direction, any directed edges which have a child entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each parent entity of the traversed edge to the list of discovered entities; and adding any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities. 11. The method of claim 10, wherein adding any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities comprises:
adding any traversed directed edges which have a child entity with a lower topological sort position in the list of discovered entities prior to adding any traversed directed edges which have a child entity with a higher topological sort position. 12. The method of claim 10, wherein the second phase further comprises:
determining whether an entity in the list of discovered entities is a combined entity corresponding to a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and adding the cyclical sequence of one or more directed edges to the ordered list of edges for the entity graph along with a loop indicator based at least in part on a topological ordering of the entities in the list of discovered entities. 13. The method of claim 1, wherein generating the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request comprises:
executing a database command corresponding to the request on the database to mark one or more records in the criterion table for selection in the subset of data; generating a list of database commands corresponding to the ordered list of edges, wherein each database command in the list of database commands corresponds to an edge in the ordered list of edges and wherein each database command references two tables in the plurality of tables corresponding to two entities specified by each edge; and executing the list of database commands on the database to mark one or more additional records in the remaining tables of the plurality of tables for selection in the subset of data. 14. An apparatus for extracting a subset of data from a database, the apparatus comprising:
one or more processors; and one or more memories operatively coupled to at least one of the one or more processors and having instructions stored thereon that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to:
receive a request comprising at least one criterion indicating a criterion table in a plurality of tables of the database, wherein a schema of the database corresponds to an entity graph, the entity graph comprising a plurality of entities corresponding to the plurality of tables and a plurality of directed edges connecting the plurality of entities;
determine one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from a criterion entity corresponding to the criterion table;
generate an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph; and
generate the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request. 15. The apparatus of claim 14, wherein at least one of the one or more memories has further instructions stored thereon that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to:
determine whether the entity graph includes one or more cycles, wherein a cycle comprises a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and condense the entity graph to remove the one or more cycles based at least in part on a determination that the entity graph includes one or more cycles. 16. The apparatus of claim 15, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to condense the entity graph to remove the cycle further cause at least one of the one or more processors to:
combine all entities in the cyclical sequence of entities to generate a combined entity corresponding to the cycle; and for each entity in the cyclical sequence of entities, add any directed edge connecting that entity to an entity outside the cyclical sequence of entities as a directed edge connecting the combined entity to the entity outside the cyclical sequence of entities, unless the directed edge connecting the combined entity to the entity outside the cyclical sequence of entities already exists. 17. The apparatus of claim 14, wherein each directed edge in the plurality of directed edges runs from a child entity corresponding to a table in the plurality of tables to a parent entity corresponding to a table in the plurality of tables. 18. The apparatus of claim 14, wherein the one or more directed edges correspond to a minimum quantity of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 19. The apparatus of claim 14, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to determine one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity further cause at least one of the one or more processors to:
generate an expanded entity graph by adding a plurality of opposite directed edges to the entity graph, wherein the plurality of opposite directed edges correspond to the plurality of directed edges and wherein each opposite directed edge in the plurality of opposite directed edges runs in a direction opposite to that of a corresponding directed edge in the plurality of directed edges; assign a first weight to each directed edge in the plurality of directed edges in the expanded entity graph and a second weight to each opposite directed edge in the plurality of opposite directed edges in the expanded entity graph; determine a minimum spanning arborescence for the expanded entity graph starting at the criterion entity; identify one or more opposite directed edges in the plurality of opposite directed edges which are part of the minimum spanning arborescence; and designate, in the entity graph, one or more directed edges which correspond to the identified one or more opposite directed edges in the expanded entity graph as edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 20. The apparatus of claim 19, wherein the second weight is greater than a combined weight assigned to all directed edges in the plurality of directed edges. 21. The apparatus of claim 14, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to generate an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph further cause at least one of the one or more processors to:
add the criterion entity to a list of discovered entities; starting from the criterion entity, traverse the entity graph over one or more iterations and add traversed edges to the ordered list of edges for the entity graph until the one or more directed edges that must be traversed in both directions have been traversed in a parent to child direction, wherein each iteration in the one or more iterations comprises:
a first phase traversing, in a parent to child direction, at least one directed edge in the one or more directed edges that must be traversed in both directions, wherein the at least one directed edge has not already been traversed in a parent to child direction and wherein the at least one directed edge has a parent entity in the list of discovered entities; and
a second phase traversing, in a child to parent direction, any directed edges connected to any entities in the list of discovered entities, wherein the second phase adds directed edges to the ordered list of edges for the entity graph based at least in part on a topological ordering of the entities in the list of discovered entities. 22. The apparatus of claim 21, wherein the first phase comprises:
iteratively traversing, in a parent to child direction, any directed edges in the one or more directed edges that must be traversed in both directions which have not already been traversed in a parent to child direction and which have a parent entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each child entity of the traversed edge to the list of discovered entities; and adding the traversed directed edges to the ordered list of edges in the order of traversal. 23. The apparatus of claim 21, wherein the second phase comprises:
iteratively traversing, in a child to parent direction, any directed edges which have a child entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each parent entity of the traversed edge to the list of discovered entities; and adding any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities. 24. The apparatus of claim 23, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to add any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities further cause at least one of the one or more processors to:
add any traversed directed edges which have a child entity with a lower topological sort position in the list of discovered entities prior to adding any traversed directed edges which have a child entity with a higher topological sort position. 25. The apparatus of claim 23, wherein the second phase further comprises:
determining whether an entity in the list of discovered entities is a combined entity corresponding to a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and adding the cyclical sequence of one or more directed edges to the ordered list of edges for the entity graph along with a loop indicator based at least in part on a topological ordering of the entities in the list of discovered entities. 26. The apparatus of claim 14, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to generate the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request further cause at least one of the one or more processors to:
execute a database command corresponding to the request on the database to mark one or more records in the criterion table for selection in the subset of data; generate a list of database commands corresponding to the ordered list of edges, wherein each database command in the list of database commands corresponds to an edge in the ordered list of edges and wherein each database command references two tables in the plurality of tables corresponding to two entities specified by each edge; and execute the list of database commands on the database to mark one or more additional records in the remaining tables of the plurality of tables for selection in the subset of data. 27. At least one non-transitory computer-readable medium storing computer-readable instructions that, when executed by one or more computing devices, cause at least one of the one or more computing devices to:
receive a request comprising at least one criterion indicating a criterion table in a plurality of tables of the database, wherein a schema of the database corresponds to an entity graph, the entity graph comprising a plurality of entities corresponding to the plurality of tables and a plurality of directed edges connecting the plurality of entities; determine one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from a criterion entity corresponding to the criterion table; generate an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph; and generate the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request. 28. The at least one non-transitory computer-readable medium of claim 27, further storing computer-readable instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to:
determine whether the entity graph includes one or more cycles, wherein a cycle comprises a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and condense the entity graph to remove the one or more cycles based at least in part on a determination that the entity graph includes one or more cycles. 29. The at least one non-transitory computer-readable medium of claim 28, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to condense the entity graph to remove the cycle further cause at least one of the one or more computing devices to:
combine all entities in the cyclical sequence of entities to generate a combined entity corresponding to the cycle; and for each entity in the cyclical sequence of entities, add any directed edge connecting that entity to an entity outside the cyclical sequence of entities as a directed edge connecting the combined entity to the entity outside the cyclical sequence of entities, unless the directed edge connecting the combined entity to the entity outside the cyclical sequence of entities already exists. 30. The at least one non-transitory computer-readable medium of claim 27, wherein each directed edge in the plurality of directed edges runs from a child entity corresponding to a table in the plurality of tables to a parent entity corresponding to a table in the plurality of tables. 31. The at least one non-transitory computer-readable medium of claim 27, wherein the one or more directed edges correspond to a minimum quantity of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 32. The at least one non-transitory computer-readable medium of claim 27, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to determine one or more directed edges in the plurality of directed edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity further cause at least one of the one or more computing devices to:
generate an expanded entity graph by adding a plurality of opposite directed edges to the entity graph, wherein the plurality of opposite directed edges correspond to the plurality of directed edges and wherein each opposite directed edge in the plurality of opposite directed edges runs in a direction opposite to that of a corresponding directed edge in the plurality of directed edges; assign a first weight to each directed edge in the plurality of directed edges in the expanded entity graph and a second weight to each opposite directed edge in the plurality of opposite directed edges in the expanded entity graph; determine a minimum spanning arborescence for the expanded entity graph starting at the criterion entity; identify one or more opposite directed edges in the plurality of opposite directed edges which are part of the minimum spanning arborescence; and designate, in the entity graph, one or more directed edges which correspond to the identified one or more opposite directed edges in the expanded entity graph as edges that must be traversed in both directions in order to traverse all entities in the entity graph starting from the criterion entity. 33. The at least one non-transitory computer-readable medium of claim 32, wherein the second weight is greater than a combined weight assigned to all directed edges in the plurality of directed edges. 34. The at least one non-transitory computer-readable medium of claim 27, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to generate an ordered list of edges for the entity graph based at least in part on the one or more directed edges that must be traversed in both directions and a topological ordering of one or more entities in the entity graph further cause at least one of the one or more computing devices to:
add the criterion entity to a list of discovered entities; starting from the criterion entity, traverse the entity graph over one or more iterations and add traversed edges to the ordered list of edges for the entity graph until the one or more directed edges that must be traversed in both directions have been traversed in a parent to child direction, wherein each iteration in the one or more iterations comprises:
a first phase traversing, in a parent to child direction, at least one directed edge in the one or more directed edges that must be traversed in both directions, wherein the at least one directed edge has not already been traversed in a parent to child direction and wherein the at least one directed edge has a parent entity in the list of discovered entities; and
a second phase traversing, in a child to parent direction, any directed edges connected to any entities in the list of discovered entities, wherein the second phase adds directed edges to the ordered list of edges for the entity graph based at least in part on a topological ordering of the entities in the list of discovered entities. 35. The at least one non-transitory computer-readable medium of claim 34, wherein the first phase comprises:
iteratively traversing, in a parent to child direction, any directed edges in the one or more directed edges that must be traversed in both directions which have not already been traversed in a parent to child direction and which have a parent entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each child entity of the traversed edge to the list of discovered entities; and adding the traversed directed edges to the ordered list of edges in the order of traversal. 36. The at least one non-transitory computer-readable medium of claim 34, wherein the second phase comprises:
iteratively traversing, in a child to parent direction, any directed edges which have a child entity in the list of discovered entities, wherein the list of discovered entities is updated after each traversed edge to add each parent entity of the traversed edge to the list of discovered entities; and adding any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities. 37. The at least one non-transitory computer-readable medium of claim 36, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to add any traversed directed edges which are not already in the ordered list of edges to the ordered list of edges based at least in part on a topological ordering of the entities in the list of discovered entities further cause at least one of the one or more computing devices to:
add any traversed directed edges which have a child entity with a lower topological sort position in the list of discovered entities prior to adding any traversed directed edges which have a child entity with a higher topological sort position. 38. The at least one non-transitory computer-readable medium of claim 36, wherein the second phase further comprises:
determining whether an entity in the list of discovered entities is a combined entity corresponding to a cyclical sequence of entities in the plurality of entities which are connected by a cyclical sequence of one or more directed edges in the plurality of directed edges; and adding the cyclical sequence of one or more directed edges to the ordered list of edges for the entity graph along with a loop indicator based at least in part on a topological ordering of the entities in the list of discovered entities. 39. The at least one non-transitory computer-readable medium of claim 27, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to generate the subset of data from the plurality of tables based at least in part on the ordered list of edges for the entity graph and the request further cause at least one of the one or more computing devices to:
execute a database command corresponding to the request on the database to mark one or more records in the criterion table for selection in the subset of data; generate a list of database commands corresponding to the ordered list of edges, wherein each database command in the list of database commands corresponds to an edge in the ordered list of edges and wherein each database command references two tables in the plurality of tables corresponding to two entities specified by each edge; and execute the list of database commands on the database to mark one or more additional records in the remaining tables of the plurality of tables for selection in the subset of data. | 2,100 |
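The two-phase edge traversal recited in the claims above (parent-to-child discovery, then child-to-parent discovery over the remaining edges) can be sketched roughly as follows. This is an illustrative simplification, not the patented implementation: the function name, the `(parent, child)` tuple representation of directed edges, and the fixed-point loops are assumptions, and the sketch omits the topological tie-breaking of claim 37.

```python
def order_edges(edges, root):
    """Sketch of a two-phase traversal producing an ordered list of edges.

    edges: list of (parent, child) tuples for a directed entity graph.
    root:  the initially discovered entity (e.g. the criterion entity).
    """
    discovered = {root}
    ordered = []

    # Phase 1: repeatedly traverse parent-to-child edges whose parent is
    # already discovered, adding each child to the discovered set and the
    # edge to the ordered list, until no new edge can be traversed.
    changed = True
    while changed:
        changed = False
        for parent, child in edges:
            if parent in discovered and (parent, child) not in ordered:
                ordered.append((parent, child))
                discovered.add(child)
                changed = True

    # Phase 2: traverse remaining edges child-to-parent, pulling in parent
    # entities reachable only against the edge direction.
    changed = True
    while changed:
        changed = False
        for parent, child in edges:
            if child in discovered and (parent, child) not in ordered:
                ordered.append((parent, child))
                discovered.add(parent)
                changed = True

    return ordered
```

In the claimed system each ordered edge would then drive one database command referencing the two tables for the edge's entities; the sketch only shows how the ordering itself could be derived.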
6,667 | 6,667 | 14,709,402 | 2,164 | A backup system is described for prioritizing backup data in enterprise networks. Messages containing data to be backed up are received at a backup server from endpoint devices and a priority value is determined for each message based on numerous factors, such as the organizational role of the user of the endpoint, the time since last backup, average upload speed, frequency of backups, and other properties. The system prioritizes backing up of messages based on the priority value of the messages. | 1. A method, comprising:
receiving at a backup server messages containing backup data from a plurality of endpoint devices, wherein the endpoint devices send the messages to the backup server to backup data on a backup database; determining a priority value corresponding to each message based on properties related to the endpoint device from which the message is received; placing each message into a queue that temporarily stores messages before messages are written to the backup database; and processing the messages from the queue to the backup database according to the priority value corresponding to each message. 2. The method of claim 1, wherein processing the messages from the queue to the backup database according to the priority value corresponding to each message comprises at least one of:
writing messages in the queue with a higher corresponding priority value to the database before messages with a lower corresponding priority value; or allocating more resources to writing messages with a higher corresponding priority value to the database than to messages with a lower corresponding priority value. 3. The method of claim 1, wherein the priority value corresponding to a message is further determined based on an organizational role corresponding to the endpoint device from which the message is received. 4. The method of claim 1, wherein the priority value corresponding to a message is further determined based on how close the endpoint device from which the message is received is to completing backup. 5. The method of claim 1, wherein the priority value corresponding to a message is further determined based on an amount of time that passed since the endpoint device from which the message is received was backed up last. 6. The method of claim 1, wherein the priority value corresponding to a message is further determined based on at least one of:
a comparison of a current upload speed with an average upload speed of the endpoint device from which the message is received; or an average connected time to the backup server of the endpoint device from which the message is received. 7. The method of claim 1, wherein the priority value corresponding to a message is further determined based on a uniqueness of data that is to be backed up on the endpoint from which the message is received. 8. A computing device, comprising:
at least one processor; and memory including instructions that, when executed by the at least one processor, cause the computing device to: receive at a backup server messages containing backup data from a plurality of endpoint devices, wherein the endpoint devices send the messages to the backup server to backup data on a backup database; determine a priority value corresponding to each message based on properties related to the endpoint device from which the message is received; place each message into a queue that temporarily stores messages before messages are written to the backup database; and process the messages from the queue to the backup database according to the priority value corresponding to each message. 9. The computing device of claim 8, wherein processing the messages from the queue to the backup database according to the priority value corresponding to each message comprises at least one of:
writing messages in the queue with a higher corresponding priority value to the database before messages with a lower corresponding priority value; or allocating more resources to writing messages with a higher corresponding priority value to the database than to messages with a lower corresponding priority value. 10. The computing device of claim 8, wherein the priority value corresponding to a message is further determined based on an organizational role corresponding to the endpoint device from which the message is received. 11. The computing device of claim 8, wherein the priority value corresponding to a message is further determined based on how close the endpoint device from which the message is received is to completing backup. 12. The computing device of claim 8, wherein the priority value corresponding to a message is further determined based on an amount of time that passed since the endpoint device from which the message is received was backed up last. 13. The computing device of claim 8, wherein the priority value corresponding to a message is further determined based on at least one of:
a comparison of a current upload speed with an average upload speed of the endpoint device from which the message is received; or an average connected time to the backup server of the endpoint device from which the message is received. 14. The computing device of claim 8, wherein the priority value corresponding to a message is further determined based on a uniqueness of data that is to be backed up on the endpoint from which the message is received. 15. A non-transitory computer readable storage medium comprising one or more sequences of instructions, the instructions when executed by one or more processors causing the one or more processors to execute the operations of:
receiving at a backup server messages containing backup data from a plurality of endpoint devices, wherein the endpoint devices send the messages to the backup server to backup data on a backup database; determining a priority value corresponding to each message based on properties related to the endpoint device from which the message is received; placing each message into a queue that temporarily stores messages before messages are written to the backup database; and processing the messages from the queue to the backup database according to the priority value corresponding to each message. 16. The non-transitory computer readable storage medium of claim 15, wherein processing the messages from the queue to the backup database according to the priority value corresponding to each message comprises at least one of:
writing messages in the queue with a higher corresponding priority value to the database before messages with a lower corresponding priority value; or allocating more resources to writing messages with a higher corresponding priority value to the database than to messages with a lower corresponding priority value. 17. The non-transitory computer readable storage medium of claim 15, wherein the priority value corresponding to a message is further determined based on an organizational role corresponding to the endpoint device from which the message is received. 18. The non-transitory computer readable storage medium of claim 15, wherein the priority value corresponding to a message is further determined based on how close the endpoint device from which the message is received is to completing backup. 19. The non-transitory computer readable storage medium of claim 15, wherein the priority value corresponding to a message is further determined based on an amount of time that passed since the endpoint device from which the message is received was backed up last. 20. The non-transitory computer readable storage medium of claim 15, wherein the priority value corresponding to a message is further determined based on at least one of:
a comparison of a current upload speed with an average upload speed of the endpoint device from which the message is received; or an average connected time to the backup server of the endpoint device from which the message is received.
| 2,100 |
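The prioritized queue described in the backup-system claims above can be sketched with a max-behaving heap: messages wait in the queue and are written to the backup database highest priority first. The scoring factors below (a role weight plus hours since last backup) are illustrative assumptions standing in for the claimed properties, not values taken from the patent.

```python
import heapq
import itertools

class BackupQueue:
    """Sketch of a priority queue for backup messages (illustrative only)."""

    # Assumed organizational-role weights; the patent names the factor
    # but not any concrete values.
    ROLE_WEIGHT = {"executive": 50, "engineer": 20, "contractor": 5}

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker keeps FIFO among equals

    def priority(self, role, hours_since_last_backup):
        # Higher value -> written to the backup database sooner.
        return self.ROLE_WEIGHT.get(role, 0) + hours_since_last_backup

    def put(self, message, role, hours_since_last_backup):
        p = self.priority(role, hours_since_last_backup)
        # Negate: heapq is a min-heap, we want the largest priority first.
        heapq.heappush(self._heap, (-p, next(self._order), message))

    def drain(self):
        """Yield messages in the order they would be written to the database."""
        while self._heap:
            _, _, message = heapq.heappop(self._heap)
            yield message
```

A resource-allocation variant (claim 2's second branch) could instead pop several messages per scheduling round in proportion to their priority values; the heap ordering shown here covers the simpler "higher priority first" branch.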
6,668 | 6,668 | 15,846,748 | 2,192 | A computer-implemented method comprising: receiving, by a computing device, user input defining a workflow; receiving, by the computing device, information defining schemas at convergence points in the workflow; determining, by the computing device, a set of mapping parameters at outputs of nodes of the workflow based on the schemas; receiving, by the computing device, input values to the mapping parameters; storing, by the computing device, the input values to the mapping parameters in a structure corresponding to the schemas; and executing, by the computing device, the workflow based on the input values to the mapping parameters, wherein the executing includes invoking one or more applications residing on one or more application servers through application programming interface (API) calls. | 1. A computer-implemented method comprising:
receiving, by a computing device, user input defining a workflow; receiving, by the computing device, information defining schemas at convergence points in the workflow; determining, by the computing device, a set of mapping parameters at outputs of nodes of the workflow based on the schemas; receiving, by the computing device, input values to the mapping parameters; storing, by the computing device, the input values to the mapping parameters in a structure corresponding to the schemas; and executing, by the computing device, the workflow based on the input values to the mapping parameters, wherein the executing includes invoking one or more applications residing on one or more application servers through application programming interface (API) calls. 2. The method of claim 1, wherein a number of points in the workflow for defining the schemas and inputting the input values to the mapping parameters is less than a number of paths in the workflow. 3. The method of claim 1, wherein the workflow is associated with at least one selected from the group consisting of:
an e-mail or communications application; an e-commerce application; a banking or financial application; a gaming application; a social media application; a content streaming application; a data processing application; a data record generation and storage application; and a security application. 4. The method of claim 1, further comprising auto-populating a subset of the inputs values to the mapping parameters. 5. The method of claim 1, further comprising presenting a menu of available inputs to map a subset of the mapping parameters, wherein the available inputs include previously defined data mappings. 6. The method of claim 5, wherein the previously defined data mappings are structured in accordance with the schema. 7. The method of claim 5, wherein the available inputs include only inputs from an IF node in the workflow that have a schema defined. 8. The method of claim 1, wherein the workflow includes at least two IF nodes in a series. 9. The method of claim 1, wherein a service provider at least one of creates, maintains, deploys and supports the computing device. 10. The method of claim 1, wherein the receiving the user input defining the workflow, the receiving the information defining the schemas, the determining the set of mapping parameters, the receiving the inputs to the mapping parameters, the storing the mapping parameters, and the executing the workflow are provided by a service provider on a subscription, advertising, and/or fee basis. 11. The method of claim 1, wherein the computing device includes software provided as a service in a cloud environment. 12. 
The method of claim 1, further comprising deploying a system for simplifying data mapping in complex flows, comprising providing a computer infrastructure operable to perform the receiving the user input defining the workflow, the receiving the information defining the schemas, the determining the set of mapping parameters, the receiving the inputs to the mapping parameters, the storing the mapping parameters, and the executing the workflow. 13. A computer program product for data mapping in complex flows, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computing device to cause the computing device to:
receive, via a user interface of a workflow management application, user input for constructing a workflow; receive, via the user interface, user input that defines schemas at convergence points in the workflow; receive, via the user interface, input values to mapping parameters defining output data at nodes in the workflow; store the input values to the mapping parameters in a structure corresponding to the schemas at respective nodes; and execute the workflow based on the input values to the mapping parameters. 14. The computer program product of claim 13, wherein a number of points in the workflow for defining the schemas and inputting the input values to the mapping parameters is less than a number of paths in the workflow. 15. The computer program product of claim 13, wherein the program instructions further cause the computing device to display an error when input values to mapping parameters for non-optional mapping parameters are not received. 16. The computer program product of claim 13, wherein the program instructions further cause the computing device to auto-populate a subset of the input values to the mapping parameters. 17. The computer program product of claim 13, wherein the program instructions further cause the computing device to present a menu of available inputs to map a subset of the mapping parameters, wherein the available inputs include previously defined data mappings and include only inputs from an IF node in the workflow that have a schema defined. 18. A system comprising:
a CPU, a computer readable memory and a computer readable storage medium associated with a computing device; program instructions to present a workflow having a plurality of convergence points; program instructions to present a plurality of schema definition dialogue boxes, wherein each of the plurality of schema definition dialogue boxes receive user inputs for defining a schema at a respective convergence point; program instructions to present a plurality of data mapping dialogue boxes, wherein each of the plurality of data mapping dialogue boxes receive user inputs for defining data mapping values at a respective output of a node in the workflow, wherein the data mapping values are structured in accordance with the schema; and program instructions to execute the workflow based on the data mapping values, wherein the program instructions are stored on the computer readable storage medium for execution by the CPU via the computer readable memory. 19. The system of claim 18, wherein a number of points in the workflow for defining schemas in the workflow receiving user inputs for defining data mapping values at output nodes of the workflow is less than a number of paths in the workflow. 20. The system of claim 18, further comprising program instructions to auto-populate a subset of the data mapping values. 
| 2,100 |
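The workflow-mapping claims above hinge on checking input values to mapping parameters against a schema defined at a convergence point, and claim 15 recites displaying an error when a non-optional mapping parameter receives no value. A minimal validation sketch follows; the schema dictionary shape and the `optional` flag are assumptions made for illustration.

```python
def validate_mapping(schema, mapping_values):
    """Return a list of error strings for mapping values that do not
    conform to the schema at a convergence point (illustrative sketch).

    schema: dict of field name -> {"type": type, "optional": bool}.
    mapping_values: dict of field name -> supplied input value.
    """
    errors = []
    for field, spec in schema.items():
        if field not in mapping_values:
            # Mirrors claim 15: a missing non-optional parameter is an error.
            if not spec.get("optional", False):
                errors.append(f"missing non-optional mapping: {field}")
        elif not isinstance(mapping_values[field], spec["type"]):
            errors.append(f"wrong type for {field}")
    return errors
```

In the claimed system, values passing this check would be stored in a structure corresponding to the schema and the workflow then executed against them; the sketch covers only the conformance check.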
6,669 | 6,669 | 15,438,833 | 2,177 | Tools for optimizing complex processes or systems, such as flow process charts are provided. More specifically, processes are automatically graphically rendered and editable. A progression diagram is automatically generated from data stored in a data storage, and at least some directional lines of the progression diagram unintelligibly overlap. In response to a selection of a particular operation in the progression diagram a view is presented that includes a focus diagram. The focus diagram and progression diagram are editable to enable a change in a functional relationship between two nodes within the focus diagram and/or the progression diagram. | 1. A system comprising:
a server coupled to data storage; a tool in communication with the server to render a progression diagram from data stored in the data storage, wherein the progression diagram includes an origin, at least one originating operation, a directional line linking nodes representing the origin with the originating operation, and a second directional line linking a node representing the originating operation to nodes representing corresponding subsequent operations, wherein the directional lines represent outcomes of the originating operation, and wherein at least some of the directional lines overlap; and the tool to render a view comprising a focus diagram automatically generated based on a selection of a particular operation in the progression diagram, wherein the focus diagram is a separate progression diagram including only those directional lines that represent an outcome of the particular operation. 2. The system of claim 1, further comprising a directional line tool to draw a new directional line between two nodes shown in the progression diagram, each of the nodes representing two different operations in the focus diagram. 3. The system of claim 2, wherein the new directional line changes a functional relationship between the two nodes in the focus diagram. 4. The system of claim 3, wherein the data storage is updated with the changed functional relationship. 5. The system of claim 1, further comprising a directional line tool to delete at least one directional line between two nodes in the focus diagram, the deletion representing termination of a relationship between the two nodes. 6. The system of claim 5, wherein the directional line deletion changes a functional relationship between the two nodes in the progression diagram. 7. The system of claim 6, wherein the data storage is updated with the changed functional relationship. 8. 
A computer program product for rendering operations of a process, the computer program product comprising a computer readable storage device having program code embodied therewith, the program code executable by a processor to:
render a progression diagram from data stored in a data storage, wherein the progression diagram includes an origin, at least one originating operation, a directional line linking nodes representing the origin with the originating operation, and a second directional line linking a node representing the originating operation to nodes representing corresponding subsequent operations, wherein the directional lines represent outcomes of the originating operation, and wherein at least some of the directional lines overlap; and render a view comprising a focus diagram automatically generated based on a selection of a particular operation in the progression diagram, wherein the focus diagram is a separate progression diagram including only those directional lines that represent an outcome of the particular operation. 9. The computer program product of claim 8, further comprising a program code to draw a new directional line between two nodes shown in the progression diagram, each of the nodes representing two different operations in the focus diagram. 10. The computer program product of claim 9, wherein the new directional line changes a functional relationship between the two nodes in the focus diagram. 11. The computer program product of claim 10, further comprising program code to update the data storage with the changed functional relationship. 12. The computer program product of claim 8, further comprising program code to delete at least one directional line between two nodes in the focus diagram, the deletion representing termination of a relationship between the two nodes. 13. The computer program product of claim 12, wherein the directional line deletion changes a functional relationship between the two nodes in the progression diagram. 14. A method comprising:
rendering a progression diagram from data stored in the data storage, wherein the progression diagram includes an origin, at least one originating operation, a directional line linking nodes representing the origin with the originating operation, and a second directional line linking a node representing the originating operation to nodes representing corresponding subsequent operations, wherein the directional lines represent outcomes of the originating operation, and wherein at least some of the directional lines overlap; and rendering a view comprising a focus diagram automatically generated based on a selection of a particular operation in the progression diagram, wherein the focus diagram is a separate progression diagram including only those directional lines that represent an outcome of the particular operation. 15. The method of claim 14, further comprising drawing a new directional line between two nodes shown in the progression diagram, each of the nodes representing two different operations in the focus diagram. 16. The method of claim 15, wherein the new directional line changes a functional relationship between the two nodes in the focus diagram. 17. The method of claim 16, further comprising updating the data storage with the changed functional relationship. 18. The method of claim 14, further comprising deleting at least one directional line between two nodes in the focus diagram, the deletion representing termination of a relationship between the two nodes. 19. The method of claim 18, wherein the directional line deletion changes a functional relationship between the two nodes in the progression diagram. 20. The method of claim 19, further comprising updating the data storage with the changed functional relationship. | Tools for optimizing complex processes or systems, such as flow process charts are provided. More specifically, processes are automatically graphically rendered and editable. 
A progression diagram is automatically generated from data stored in a data storage, and at least some directional lines of the progression diagram unintelligibly overlap. In response to a selection of a particular operation in the progression diagram, a view is presented that includes a focus diagram. The focus diagram and progression diagram are editable to enable a change in a functional relationship between two nodes within the focus diagram and/or the progression diagram.1. A system comprising:
a server coupled to data storage; a tool in communication with the server to render a progression diagram from data stored in the data storage, wherein the progression diagram includes an origin, at least one originating operation, a directional line linking nodes representing the origin with the originating operation, and a second directional line linking a node representing the originating operation to nodes representing corresponding subsequent operations, wherein the directional lines represent outcomes of the originating operation, and wherein at least some of the directional lines overlap; and the tool to render a view comprising a focus diagram automatically generated based on a selection of a particular operation in the progression diagram, wherein the focus diagram is a separate progression diagram including only those directional lines that represent an outcome of the particular operation. 2. The system of claim 1, further comprising a directional line tool to draw a new directional line between two nodes shown in the progression diagram, each of the nodes representing two different operations in the focus diagram. 3. The system of claim 2, wherein the new directional line changes a functional relationship between the two nodes in the focus diagram. 4. The system of claim 3, wherein the data storage is updated with the changed functional relationship. 5. The system of claim 1, further comprising a directional line tool to delete at least one directional line between two nodes in the focus diagram, the deletion representing termination of a relationship between the two nodes. 6. The system of claim 5, wherein the directional line deletion changes a functional relationship between the two nodes in the progression diagram. 7. The system of claim 6, wherein the data storage is updated with the changed functional relationship. 8. 
A computer program product for rendering operations of a process, the computer program product comprising a computer readable storage device having program code embodied therewith, the program code executable by a processor to:
render a progression diagram from data stored in a data storage, wherein the progression diagram includes an origin, at least one originating operation, a directional line linking nodes representing the origin with the originating operation, and a second directional line linking a node representing the originating operation to nodes representing corresponding subsequent operations, wherein the directional lines represent outcomes of the originating operation, and wherein at least some of the directional lines overlap; and render a view comprising a focus diagram automatically generated based on a selection of a particular operation in the progression diagram, wherein the focus diagram is a separate progression diagram including only those directional lines that represent an outcome of the particular operation. 9. The computer program product of claim 8, further comprising a program code to draw a new directional line between two nodes shown in the progression diagram, each of the nodes representing two different operations in the focus diagram. 10. The computer program product of claim 9, wherein the new directional line changes a functional relationship between the two nodes in the focus diagram. 11. The computer program product of claim 10, further comprising program code to update the data storage with the changed functional relationship. 12. The computer program product of claim 8, further comprising program code to delete at least one directional line between two nodes in the focus diagram, the deletion representing termination of a relationship between the two nodes. 13. The computer program product of claim 12, wherein the directional line deletion changes a functional relationship between the two nodes in the progression diagram. 14. A method comprising:
rendering a progression diagram from data stored in the data storage, wherein the progression diagram includes an origin, at least one originating operation, a directional line linking nodes representing the origin with the originating operation, and a second directional line linking a node representing the originating operation to nodes representing corresponding subsequent operations, wherein the directional lines represent outcomes of the originating operation, and wherein at least some of the directional lines overlap; and rendering a view comprising a focus diagram automatically generated based on a selection of a particular operation in the progression diagram, wherein the focus diagram is a separate progression diagram including only those directional lines that represent an outcome of the particular operation. 15. The method of claim 14, further comprising drawing a new directional line between two nodes shown in the progression diagram, each of the nodes representing two different operations in the focus diagram. 16. The method of claim 15, wherein the new directional line changes a functional relationship between the two nodes in the focus diagram. 17. The method of claim 16, further comprising updating the data storage with the changed functional relationship. 18. The method of claim 14, further comprising deleting at least one directional line between two nodes in the focus diagram, the deletion representing termination of a relationship between the two nodes. 19. The method of claim 18, wherein the directional line deletion changes a functional relationship between the two nodes in the progression diagram. 20. The method of claim 19, further comprising updating the data storage with the changed functional relationship. | 2,100 |
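The focus-diagram behavior recited in claims 1, 8, and 14 of the record above (keep only the directional lines that represent outcomes of the selected operation) can be sketched as a simple edge filter. Node and outcome names below are assumed for illustration.

```python
# Focus diagram per claims 1/8/14: from a progression diagram's directional
# lines, keep only those originating at one selected operation.
def focus_diagram(lines, selected_op):
    """Keep only directional lines whose source is the selected operation."""
    return [(src, dst, outcome) for (src, dst, outcome) in lines if src == selected_op]

# (source node, target node, outcome label) triples of a small diagram
lines = [
    ("origin", "inspect", "start"),
    ("inspect", "repair", "fail"),
    ("inspect", "ship", "pass"),
    ("repair", "inspect", "rework"),
]
print(focus_diagram(lines, "inspect"))
# [('inspect', 'repair', 'fail'), ('inspect', 'ship', 'pass')]
```

Drawing or deleting a line (claims 2-7) would then be appending to or removing from `lines` and re-rendering both diagrams from the shared data storage.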
6,670 | 6,670 | 15,707,670 | 2,184 | A variable power bus cable, such as a USB Type-C cable, is validated for actual current capacity with respect to a specified power rating for the cable. The power cable validation is performed when the cable is connected to a power storage adapter and a portable information handling system. | 1. A method for power cable validation, the method comprising:
detecting that a variable power bus (VPB) cable is connected to a power storage adapter and to an information handling system; prior to negotiating a power delivery contract for electrical power to be supplied to the information handling system from the power storage adapter via the VPB cable, applying a first voltage to the VPB cable to identify a first indication of a current capacity of the VPB cable; and when the first indication confirms that the current capacity of the VPB cable corresponds to a specified power rating for the VPB cable, enabling the power delivery contract to be negotiated according to the specified power rating, otherwise blocking the power delivery contract using the VPB cable. 2. The method of claim 1, wherein the VPB cable is a universal serial bus (USB) Type-C cable and the specified power rating conforms to a USB Type-C specification. 3. The method of claim 2, wherein the first voltage is less than or equal to a minimum voltage specified for USB Type-C power delivery. 4. The method of claim 2, wherein the first indication is a measurement of a current flow across the USB Type-C cable, wherein the first voltage and the current flow are indicative of an impedance of the USB Type-C cable that determines the current capacity. 5. The method of claim 2, wherein the first indication is a temperature measurement of a polyfuse in a current path for the electrical power, wherein a given temperature rise of the polyfuse in response to the first voltage is indicative of the current capacity. 6. The method of claim 5, wherein the polyfuse is integrated within an electronic circuit included with the USB Type-C cable and coupled to the current path. 7. The method of claim 5, wherein the polyfuse is integrated with a port at one of the power storage adapter and the information handling system and coupled to the current path, and wherein a local temperature at the information handling system is used to offset the temperature measurement. 8. 
A power storage adapter enabled for power cable validation, the power storage adapter comprising:
a variable power bus (VPB) port; a processor; and memory media storing instructions executable by the processor for:
detecting that a VPB cable is connected to the VPB port and to an information handling system;
prior to negotiating a power delivery contract for electrical power to be supplied to the information handling system from the VPB port via the VPB cable, applying a first voltage to the VPB cable to identify a first indication of a current capacity of the VPB cable; and
when the first indication confirms that the current capacity of the VPB cable corresponds to a specified power rating for the VPB cable, enabling the power delivery contract to be negotiated according to the specified power rating, otherwise blocking the power delivery contract using the VPB cable. 9. The power storage adapter of claim 8, wherein the VPB cable is a universal serial bus (USB) Type-C cable and the specified power rating conforms to the USB Type-C specification. 10. The power storage adapter of claim 9, wherein the first voltage is less than or equal to a minimum voltage specified for USB Type-C power delivery. 11. The power storage adapter of claim 9, wherein the first indication is a measurement of a current flow across the USB Type-C cable, wherein the first voltage and the current flow are indicative of an impedance of the USB Type-C cable that determines the current capacity. 12. The power storage adapter of claim 9, wherein the first indication is a temperature measurement of a polyfuse in a current path for the electrical power, wherein a given temperature rise of the polyfuse in response to the first voltage is indicative of the current capacity. 13. The power storage adapter of claim 12, wherein the polyfuse is integrated within an electronic circuit included with the USB Type-C cable and coupled to the current path. 14. The power storage adapter of claim 12, further comprising:
a polyfuse coupled to the current path and integrated with the VPB port, wherein a local temperature at the VPB port is used to offset the temperature measurement. 15. An information handling system enabled for power cable validation, the information handling system comprising:
a variable power bus (VPB) port; an embedded controller (EC) further comprising a power control module and an EC processor; and memory media storing instructions executable by the EC processor to control the power control module for:
detecting that a VPB cable is connected to the VPB port and to a power delivery device;
prior to negotiating a power delivery contract for electrical power to be supplied to the information handling system from the power delivery device via the VPB cable, applying a first voltage to the VPB cable to identify a first indication of a current capacity of the VPB cable; and
when the first indication confirms that the current capacity of the VPB cable corresponds to a specified power rating for the VPB cable, enabling the power delivery contract to be negotiated according to the specified power rating, otherwise blocking the power delivery contract using the VPB cable. 16. The information handling system of claim 15, wherein the VPB cable is a universal serial bus (USB) Type-C cable and the specified power rating conforms to the USB Type-C specification. 17. The information handling system of claim 16, wherein the first voltage is less than or equal to a minimum voltage specified for USB Type-C power delivery. 18. The information handling system of claim 16, wherein the first indication is a measurement of a current flow across the USB Type-C cable, wherein the first voltage and the current flow are indicative of an impedance of the USB Type-C cable that determines the current capacity. 19. The information handling system of claim 16, wherein the first indication is a temperature measurement of a polyfuse in a current path for the electrical power, wherein a given temperature rise of the polyfuse in response to the first voltage is indicative of the current capacity. 20. The information handling system of claim 19, wherein the polyfuse is integrated within an electronic circuit included with the USB Type-C cable and coupled to the current path. 21. The information handling system of claim 19, further comprising:
a polyfuse coupled to the current path and integrated with the VPB port, wherein a local temperature at the information handling system is used to offset the temperature measurement. 22. The information handling system of claim 16, wherein the power delivery device is a power storage adapter. | A variable power bus cable, such as a USB Type-C cable, is validated for actual current capacity with respect to a specified power rating for the cable. The power cable validation is performed when the cable is connected to a power storage adapter and a portable information handling system.1. A method for power cable validation, the method comprising:
detecting that a variable power bus (VPB) cable is connected to a power storage adapter and to an information handling system; prior to negotiating a power delivery contract for electrical power to be supplied to the information handling system from the power storage adapter via the VPB cable, applying a first voltage to the VPB cable to identify a first indication of a current capacity of the VPB cable; and when the first indication confirms that the current capacity of the VPB cable corresponds to a specified power rating for the VPB cable, enabling the power delivery contract to be negotiated according to the specified power rating, otherwise blocking the power delivery contract using the VPB cable. 2. The method of claim 1, wherein the VPB cable is a universal serial bus (USB) Type-C cable and the specified power rating conforms to a USB Type-C specification. 3. The method of claim 2, wherein the first voltage is less than or equal to a minimum voltage specified for USB Type-C power delivery. 4. The method of claim 2, wherein the first indication is a measurement of a current flow across the USB Type-C cable, wherein the first voltage and the current flow are indicative of an impedance of the USB Type-C cable that determines the current capacity. 5. The method of claim 2, wherein the first indication is a temperature measurement of a polyfuse in a current path for the electrical power, wherein a given temperature rise of the polyfuse in response to the first voltage is indicative of the current capacity. 6. The method of claim 5, wherein the polyfuse is integrated within an electronic circuit included with the USB Type-C cable and coupled to the current path. 7. The method of claim 5, wherein the polyfuse is integrated with a port at one of the power storage adapter and the information handling system and coupled to the current path, and wherein a local temperature at the information handling system is used to offset the temperature measurement. 8. 
A power storage adapter enabled for power cable validation, the power storage adapter comprising:
a variable power bus (VPB) port; a processor; and memory media storing instructions executable by the processor for:
detecting that a VPB cable is connected to the VPB port and to an information handling system;
prior to negotiating a power delivery contract for electrical power to be supplied to the information handling system from the VPB port via the VPB cable, applying a first voltage to the VPB cable to identify a first indication of a current capacity of the VPB cable; and
when the first indication confirms that the current capacity of the VPB cable corresponds to a specified power rating for the VPB cable, enabling the power delivery contract to be negotiated according to the specified power rating, otherwise blocking the power delivery contract using the VPB cable. 9. The power storage adapter of claim 8, wherein the VPB cable is a universal serial bus (USB) Type-C cable and the specified power rating conforms to the USB Type-C specification. 10. The power storage adapter of claim 9, wherein the first voltage is less than or equal to a minimum voltage specified for USB Type-C power delivery. 11. The power storage adapter of claim 9, wherein the first indication is a measurement of a current flow across the USB Type-C cable, wherein the first voltage and the current flow are indicative of an impedance of the USB Type-C cable that determines the current capacity. 12. The power storage adapter of claim 9, wherein the first indication is a temperature measurement of a polyfuse in a current path for the electrical power, wherein a given temperature rise of the polyfuse in response to the first voltage is indicative of the current capacity. 13. The power storage adapter of claim 12, wherein the polyfuse is integrated within an electronic circuit included with the USB Type-C cable and coupled to the current path. 14. The power storage adapter of claim 12, further comprising:
a polyfuse coupled to the current path and integrated with the VPB port, wherein a local temperature at the VPB port is used to offset the temperature measurement. 15. An information handling system enabled for power cable validation, the information handling system comprising:
a variable power bus (VPB) port; an embedded controller (EC) further comprising a power control module and an EC processor; and memory media storing instructions executable by the EC processor to control the power control module for:
detecting that a VPB cable is connected to the VPB port and to a power delivery device;
prior to negotiating a power delivery contract for electrical power to be supplied to the information handling system from the power delivery device via the VPB cable, applying a first voltage to the VPB cable to identify a first indication of a current capacity of the VPB cable; and
when the first indication confirms that the current capacity of the VPB cable corresponds to a specified power rating for the VPB cable, enabling the power delivery contract to be negotiated according to the specified power rating, otherwise blocking the power delivery contract using the VPB cable. 16. The information handling system of claim 15, wherein the VPB cable is a universal serial bus (USB) Type-C cable and the specified power rating conforms to the USB Type-C specification. 17. The information handling system of claim 16, wherein the first voltage is less than or equal to a minimum voltage specified for USB Type-C power delivery. 18. The information handling system of claim 16, wherein the first indication is a measurement of a current flow across the USB Type-C cable, wherein the first voltage and the current flow are indicative of an impedance of the USB Type-C cable that determines the current capacity. 19. The information handling system of claim 16, wherein the first indication is a temperature measurement of a polyfuse in a current path for the electrical power, wherein a given temperature rise of the polyfuse in response to the first voltage is indicative of the current capacity. 20. The information handling system of claim 19, wherein the polyfuse is integrated within an electronic circuit included with the USB Type-C cable and coupled to the current path. 21. The information handling system of claim 19, further comprising:
a polyfuse coupled to the current path and integrated with the VPB port, wherein a local temperature at the information handling system is used to offset the temperature measurement. 22. The information handling system of claim 16, wherein the power delivery device is a power storage adapter. | 2,100 |
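The pre-contract check in claims 1-4 of the record above (apply a first voltage, infer cable impedance from the measured current, and gate the power-delivery negotiation on the implied current capacity) can be sketched as follows. The voltage-drop budget and probe values are assumptions for illustration, not figures from the USB Type-C specification.

```python
# Claims 1-4 sketch: probe voltage + measured current -> cable impedance;
# negotiation at the rated current is allowed only if the implied voltage
# drop stays within an assumed budget.
def cable_supports_rating(probe_voltage_v, measured_current_a,
                          rated_current_a, max_drop_v=0.25):
    """True if the cable's measured impedance supports its rated current."""
    if measured_current_a <= 0:
        return False  # open circuit / no cable detected
    impedance_ohms = probe_voltage_v / measured_current_a
    # Drop across the cable at its rated current must stay within budget.
    return impedance_ohms * rated_current_a <= max_drop_v

def negotiate(probe_voltage_v, measured_current_a, rated_current_a):
    """Enable the contract at the specified rating, otherwise block it."""
    if cable_supports_rating(probe_voltage_v, measured_current_a, rated_current_a):
        return f"contract at {rated_current_a} A"
    return "contract blocked"

print(negotiate(5.0, 100.0, 5.0))  # 0.05 ohm -> 0.25 V drop: allowed
print(negotiate(5.0, 10.0, 5.0))   # 0.5 ohm  -> 2.5 V drop: blocked
```

The polyfuse-temperature variant (claims 5-7) would replace the impedance test with a temperature-rise threshold, offset by the local ambient temperature.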
6,671 | 6,671 | 16,377,517 | 2,136 | A method includes determining a capacity model that configures computing resource capacity for a capacity container. The method also includes estimating an available capacity in a capacity container based on a capacity of host devices in the capacity container. The method also includes generating, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices. Implementations may include selecting an average virtual machine unit display or a raw units display and determining an average virtual machine based on averaging an attribute of one or more virtual machines. | 1. A method, comprising:
determining a capacity model that configures computing resource capacity for a capacity container; estimating an available capacity in a capacity container based at least in part on a capacity of a plurality of host devices in the capacity container, wherein at least one virtual machine is deployed on each of the plurality of host devices; and generating, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices and the forecast curve representing a forecast horizon based on the historical capacity usage. 2. The method of claim 1, wherein the selection comprises selecting an average virtual machine unit display, the method further comprising:
determining an average virtual machine based on averaging, over a time interval, an attribute of the at least one virtual machine executing on the host devices; fitting the average virtual machine into the available capacity; and wherein the forecast curve projects a number of average virtual machines that can be deployed in the capacity container based on the fitting. 3. The method of claim 2, wherein the attribute comprises CPU demand, memory consumption, or disk consumption. 4. The method of claim 1, wherein the selection comprises selecting a raw units display, the method further comprising:
determining a host capacity usage based on averaging, over a time interval, a CPU, memory, disk, or disk input/output attribute of the host devices in the capacity container; fitting the host capacity usage into the available capacity; and wherein the forecast curve further projects an amount of CPU, memory, disk, or disk input/output remaining in the capacity container based on the fitting. 5. The method of claim 1, further comprising determining the available capacity based on the capacity model and historical usage data of capacity based on a set of one or more virtual machines comprising the at least one virtual machine in the capacity container, wherein the available capacity comprises capacity available for deployment of new virtual machine units in the capacity container. 6. The method of claim 1, further comprising receiving a request to modify the capacity container by performing one or more of: adding a data store, removing a data store, or restoring a data store previously deleted in the capacity container. 7. The method of claim 1, further comprising receiving a request to modify the capacity container by performing one or more of: adding a host device, removing a host device, or restoring a host device previously deleted in the capacity container. 8. A system, comprising:
a capacity model that configures computing resource capacity for a capacity container;
a processor programmed to:
determine a capacity model that configures computing resource capacity for a capacity container;
estimate an available capacity in a capacity container based at least in part on a capacity of a plurality of host devices in the capacity container, wherein at least one virtual machine is deployed on each of the plurality of host devices; and
generate, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices and the forecast curve representing a forecast horizon based at least on the historical capacity usage. 9. The system of claim 8, wherein the selection comprises selecting an average virtual machine unit display, wherein the processor is further programmed to:
determine an average virtual machine based on averaging, over a time interval, an attribute of the at least one virtual machine executing on the host devices; fit the average virtual machine into the available capacity; and wherein the forecast curve projects a number of average virtual machines that can be deployed in the capacity container based on the fitting. 10. The system of claim 9, wherein the attribute comprises CPU demand, memory consumption, or disk consumption. 11. The system of claim 8, wherein the selection comprises selecting a raw units display, wherein the processor is further programmed to:
determine a host capacity usage based on averaging, over a time interval, a CPU, memory, disk, or disk input/output attribute of the host devices in the capacity container; fit the host capacity usage into the available capacity; and wherein the forecast curve further projects an amount of CPU, memory, disk, or disk input/output remaining in the capacity container based on the fitting. 12. The system of claim 8, wherein the processor is further programmed to determine the available capacity based on the capacity model and historical usage data of capacity based on a set of one or more virtual machines comprising the at least one virtual machine in the capacity container, wherein the available capacity comprises capacity available for deployment of new virtual machine units in the capacity container. 13. The system of claim 8, wherein the processor is further programmed to receive a request to modify the capacity container by performing one or more of: adding a data store, removing a data store, or restoring a data store previously deleted in the capacity container. 14. The system of claim 8, wherein the processor is further programmed to receive a request to modify the capacity container by performing one or more of: adding a host device, removing a host device, or restoring a host device previously deleted in the capacity container. 15. A non-transitory computer-readable medium comprising instructions that when executed by a processor, cause the processor to at least:
determine a capacity model that configures computing resource capacity for a capacity container; estimate an available capacity in a capacity container based at least in part on a capacity of a plurality of host devices in the capacity container, wherein at least one virtual machine is deployed on each of the plurality of host devices; and generate, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices and the forecast curve representing a forecast horizon based at least on the historical capacity usage. 16. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the processor to:
determine an average virtual machine based on averaging, over a time interval, an attribute of the at least one virtual machine executing on the host devices; fit the average virtual machine into the available capacity; and wherein the forecast curve projects a number of average virtual machines that can be deployed in the capacity container based on the fitting. 17. The non-transitory computer-readable medium of claim 16, wherein the attribute comprises CPU demand, memory consumption, or disk consumption. 18. The non-transitory computer-readable medium of claim 15, wherein the selection comprises selecting a raw units display, wherein the instructions further cause the processor to:
determine a host capacity usage based on averaging, over a time interval, a CPU, memory, disk, or disk input/output attribute of the host devices in the capacity container; fit the host capacity usage into the available capacity; and wherein the forecast curve further projects an amount of CPU, memory, disk, or disk input/output remaining in the capacity container based on the fitting. 19. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the processor to determine the available capacity based on the capacity model and historical usage data of capacity based on a set of one or more virtual machines comprising the at least one virtual machine in the capacity container, wherein the available capacity comprises capacity available for deployment of new virtual machine units in the capacity container. 20. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the processor to receive a request to modify the capacity container by performing one or more of: adding a data store, removing a data store, or restoring a data store previously deleted in the capacity container. | A method includes determining a capacity model that configures computing resource capacity for a capacity container. The method also includes estimating an available capacity in a capacity container based on a capacity of host devices in the capacity container. The method also includes generating, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices. Implementations may include selecting an average virtual machine unit display or a raw units display and determining an average virtual machine based on averaging an attribute of one or more virtual machines. 1. A method, comprising:
determining a capacity model that configures computing resource capacity for a capacity container; estimating an available capacity in a capacity container based at least in part on a capacity of a plurality of host devices in the capacity container, wherein at least one virtual machine is deployed on each of the plurality of host devices; and generating, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices and the forecast curve representing a forecast horizon based on the historical capacity usage. 2. The method of claim 1, wherein the selection comprises selecting an average virtual machine unit display, the method further comprising:
determining an average virtual machine based on averaging, over a time interval, an attribute of the at least one virtual machine executing on the host devices; fitting the average virtual machine into the available capacity; and wherein the forecast curve projects a number of average virtual machines that can be deployed in the capacity container based on the fitting. 3. The method of claim 2, wherein the attribute comprises CPU demand, memory consumption, or disk consumption. 4. The method of claim 1, wherein the selection comprises selecting a raw units display, the method further comprising:
determining a host capacity usage based on averaging, over a time interval, a CPU, memory, disk, or disk input/output attribute of the host devices in the capacity container; fitting the host capacity usage into the available capacity; and wherein the forecast curve further projects an amount of CPU, memory, disk, or disk input/output remaining in the capacity container based on the fitting. 5. The method of claim 1, further comprising determining the available capacity based on the capacity model and historical usage data of capacity based on a set of one or more virtual machines comprising the at least one virtual machine in the capacity container, wherein the available capacity comprises capacity available for deployment of new virtual machine units in the capacity container. 6. The method of claim 1, further comprising receiving a request to modify the capacity container by performing one or more of: adding a data store, removing a data store, or restoring a data store previously deleted in the capacity container. 7. The method of claim 1, further comprising receiving a request to modify the capacity container by performing one or more of: adding a host device, removing a host device, or restoring a host device previously deleted in the capacity container. 8. A system, comprising:
a capacity model that configures computing resource capacity for a capacity container;
a processor programmed to:
determine a capacity model that configures computing resource capacity for a capacity container;
estimate an available capacity in a capacity container based at least in part on a capacity of a plurality of host devices in the capacity container, wherein at least one virtual machine is deployed on each of the plurality of host devices; and
generate, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices and the forecast curve representing a forecast horizon based at least on the historical capacity usage. 9. The system of claim 8, wherein the selection comprises selecting an average virtual machine unit display, wherein the processor is further programmed to:
determine an average virtual machine based on averaging, over a time interval, an attribute of the at least one virtual machine executing on the host devices; fit the average virtual machine into the available capacity; and wherein the forecast curve projects a number of average virtual machines that can be deployed in the capacity container based on the fitting. 10. The system of claim 9, wherein the attribute comprises CPU demand, memory consumption, or disk consumption. 11. The system of claim 8, wherein the selection comprises selecting a raw units display, wherein the processor is further programmed to:
determine a host capacity usage based on averaging, over a time interval, a CPU, memory, disk, or disk input/output attribute of the host devices in the capacity container; fit the host capacity usage into the available capacity; and wherein the forecast curve further projects an amount of CPU, memory, disk, or disk input/output remaining in the capacity container based on the fitting. 12. The system of claim 8, wherein the processor is further programmed to determine the available capacity based on the capacity model and historical usage data of capacity based on a set of one or more virtual machines comprising the at least one virtual machine in the capacity container, wherein the available capacity comprises capacity available for deployment of new virtual machine units in the capacity container. 13. The system of claim 8, wherein the processor is further programmed to receive a request to modify the capacity container by performing one or more of: adding a data store, removing a data store, or restoring a data store previously deleted in the capacity container. 14. The system of claim 8, wherein the processor is further programmed to receive a request to modify the capacity container by performing one or more of: adding a host device, removing a host device, or restoring a host device previously deleted in the capacity container. 15. A non-transitory computer-readable medium comprising instructions that when executed by a processor, cause the processor to at least:
determine a capacity model that configures computing resource capacity for a capacity container; estimate an available capacity in a capacity container based at least in part on a capacity of a plurality of host devices in the capacity container, wherein at least one virtual machine is deployed on each of the plurality of host devices; and generate, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices and the forecast curve representing a forecast horizon based at least on the historical capacity usage. 16. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the processor to:
determine an average virtual machine based on averaging, over a time interval, an attribute of the at least one virtual machine executing on the host devices; fit the average virtual machine into the available capacity; and wherein the forecast curve projects a number of average virtual machines that can be deployed in the capacity container based on the fitting. 17. The non-transitory computer-readable medium of claim 16, wherein the attribute comprises CPU demand, memory consumption, or disk consumption. 18. The non-transitory computer-readable medium of claim 15, wherein the selection comprises selecting a raw units display, wherein the instructions further cause the processor to:
determine a host capacity usage based on averaging, over a time interval, a CPU, memory, disk, or disk input/output attribute of the host devices in the capacity container; fit the host capacity usage into the available capacity; and wherein the forecast curve further projects an amount of CPU, memory, disk, or disk input/output remaining in the capacity container based on the fitting. 19. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the processor to determine the available capacity based on the capacity model and historical usage data of capacity based on a set of one or more virtual machines comprising the at least one virtual machine in the capacity container, wherein the available capacity comprises capacity available for deployment of new virtual machine units in the capacity container. 20. The non-transitory computer-readable medium of claim 15, wherein the instructions further cause the processor to receive a request to modify the capacity container by performing one or more of: adding a data store, removing a data store, or restoring a data store previously deleted in the capacity container. | 2,100 |
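The "average virtual machine" fitting and the trend/forecast projection recited in the claims above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the claimed implementation: the function names, the least-squares trend, and the sample capacity figures are all hypothetical.

```python
# Hypothetical sketch: fit an "average VM" into a container's available
# capacity and project a usage trend. Names and numbers are illustrative only.

def average_vm_demand(samples):
    """Average one attribute (e.g. CPU demand in MHz) over a time interval."""
    return sum(samples) / len(samples)

def deployable_vms(total_capacity, used_capacity, vm_samples):
    """How many average VMs fit into the container's remaining capacity."""
    available = total_capacity - used_capacity
    return int(available // average_vm_demand(vm_samples))

def forecast_usage(history, horizon):
    """Project 'horizon' future points from a least-squares linear trend."""
    n = len(history)
    mean_x, mean_y = (n - 1) / 2, sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var
    return [mean_y + slope * (x - mean_x) for x in range(n, n + horizon)]

# Example: 30 GHz of host CPU, 18 GHz already used, VM demand samples in MHz.
print(deployable_vms(30_000, 18_000, [450, 520, 480, 550]))  # prints 24
print(forecast_usage([10, 12, 14, 16], 2))                   # prints [18.0, 20.0]
```

A "raw units" variant, as in the corresponding raw-units claims, would average host-level CPU, memory, disk, or disk I/O attributes instead of per-VM attributes and project the amount remaining rather than a VM count.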
6,672 | 6,672 | 16,235,504 | 2,138 | Unified hardware and software two-level memory mechanisms and associated methods, systems, and software. Data is stored on near and far memory devices, wherein an access latency for a near memory device is less than an access latency for a far memory device. The near memory devices store data in data units having addresses in a near memory virtual address space, while the far memory devices store data in data units having addresses in a far memory address space, with a portion of the data being stored on both near and far memory devices. In response to memory read access requests, a determination is made as to whether data corresponding to the request is located on a near memory device, and if so the data is read from the near memory device; otherwise, the data is read from a far memory device. Memory access patterns are observed, and portions of far memory that are frequently accessed are copied to near memory to reduce access latency for subsequent accesses. | 1. A method comprising:
implementing a two-level memory access mechanism for a compute platform installed in one of a first chassis, drawer, tray or sled that is communicatively coupled, via a fabric, to one of a second chassis, drawer, tray or sled in which one or more far memory devices are installed, the compute platform including a processor operatively coupled to one or more near memory devices and enabled to access the one or more far memory devices via the fabric, wherein an access latency for a near memory device is less than an access latency for a far memory device, at least a portion of memory in the one or more near memory devices storing data in data units having addresses in a near memory virtual address space and at least a portion of memory in the one or more far memory devices storing data in data units having addresses in a far memory virtual address space; storing data in data units having addresses in the near memory virtual address space and in data units having addresses in the far memory virtual address space, a portion of the data that is stored being stored in data units in both the near memory virtual address space and the far memory virtual address space; in response to a memory read access request including a virtual memory address corresponding to a data unit storing data to be accessed,
determining whether the data is stored in a near memory device, and, if so,
accessing the data from the near memory device; otherwise,
accessing the data from a far memory device via the fabric,
wherein the first chassis, drawer, tray or sled is installed in a first slot in a data center rack, and the second chassis, drawer, tray or sled is installed in a second slot in the data center rack, and wherein the one or more near memory devices and the one or more far memory devices store data in Dynamic Random Access Memory (DRAM). 2. The method of claim 1, wherein the first chassis, drawer, tray or sled is communicatively coupled to the second chassis, drawer, tray or sled via first and second fabric links coupled via a fabric switch, the first and second fabric links comprising wired cable or optical cable links. 3. The method of claim 1, wherein the far memory device from which data is accessed comprises a Dual In-line Memory Module (DIMM). 4. The method of claim 1, wherein the near memory devices are volatile memory devices and the far memory devices are non-volatile memory devices comprising Dual In-line Memory Modules (DIMMs). 5. The method of claim 4, wherein the non-volatile memory devices include three-dimensional crosspoint DIMM memory devices. 6. The method of claim 1, further comprising mapping, for each data unit in the near memory virtual address space, an address of the data unit in the near memory virtual address space to an address of the data unit in the far memory virtual address space. 7. The method of claim 1, further comprising accessing a far memory device over the fabric using a Non-volatile Memory Express over Fabric (NVMe-oF) protocol. 8. The method of claim 7, wherein the far memory device comprises a storage class memory device. 9. The method of claim 1, wherein the data units comprise memory pages. 10. The method of claim 1, wherein the one or more far memory devices comprise one or more block storage devices, and the data units comprise storage blocks. 11. A system, comprising:
one or more far memory devices including Dynamic Random Access Memory (DRAM), installed in one of a first chassis, drawer, tray or sled that is installed in a first slot in a data center rack and communicatively coupled to a fabric including a plurality of fabric links and at least one fabric switch; and a compute platform, installed in one of a second chassis, drawer, tray or sled that is installed in a second slot in the data center rack, including,
a processor having a memory controller;
one or more near memory devices, communicatively coupled to the memory controller;
a host fabric interface (HFI), communicatively coupled to the processor and communicatively coupled to the fabric,
wherein an access latency for a near memory device is less than an access latency for a far memory device, at least a portion of the memory in the one or more near memory devices configured to store data in data units having addresses in a near memory virtual address space and at least a portion of the memory in the one or more far memory devices configured to store data in data units having addresses in a far memory virtual address space, and wherein the system is configured to, store data in data units having addresses in the near memory virtual address space and in data units having addresses in the far memory virtual address space, a portion of the data that is stored being stored in data units in both the near memory virtual address space and the far memory virtual address space; in response to a memory read access request including a virtual memory address corresponding to a data unit storing data to be accessed,
determine whether the data is stored in a near memory device, and, if so,
access the data from the near memory device; otherwise,
access the data from a far memory device via the fabric. 12. The system of claim 11, wherein the far memory devices comprise Dual In-line Memory Modules (DIMMs). 13. The system of claim 12, wherein the near memory devices are volatile memory devices and the far memory devices are non-volatile DIMMs. 14. The system of claim 13, wherein the non-volatile DIMMs include three-dimensional crosspoint DIMMs. 15. The system of claim 11, wherein the plurality of fabric links comprises wired cables or optical cables. 16. The system of claim 11, wherein the system is further configured to map, for each data unit in the near memory virtual address space, an address of the data unit in the near memory virtual address space to an address of the data unit in the far memory virtual address space. 17. The system of claim 11, wherein the system is further configured to access the one or more far memory devices over the fabric using a Non-volatile Memory Express over Fabric (NVMe-oF) protocol. 18. The system of claim 11, wherein the data units comprise memory pages. 19. The system of claim 11, wherein the one or more far memory devices comprise one or more block storage devices, and the data units comprise storage blocks. 20. The system of claim 11, wherein the system includes one or more storage class memory (SCM) nodes communicatively coupled to the fabric, and wherein each SCM node includes one or more far memory devices. | Unified hardware and software two-level memory mechanisms and associated methods, systems, and software. Data is stored on near and far memory devices, wherein an access latency for a near memory device is less than an access latency for a far memory device. The near memory devices store data in data units having addresses in a near memory virtual address space, while the far memory devices store data in data units having addresses in a far memory address space, with a portion of the data being stored on both near and far memory devices.
In response to memory read access requests, a determination is made as to whether data corresponding to the request is located on a near memory device, and if so the data is read from the near memory device; otherwise, the data is read from a far memory device. Memory access patterns are observed, and portions of far memory that are frequently accessed are copied to near memory to reduce access latency for subsequent accesses. 1. A method comprising:
implementing a two-level memory access mechanism for a compute platform installed in one of a first chassis, drawer, tray or sled that is communicatively coupled, via a fabric, to one of a second chassis, drawer, tray or sled in which one or more far memory devices are installed, the compute platform including a processor operatively coupled to one or more near memory devices and enabled to access the one or more far memory devices via the fabric, wherein an access latency for a near memory device is less than an access latency for a far memory device, at least a portion of memory in the one or more near memory devices storing data in data units having addresses in a near memory virtual address space and at least a portion of memory in the one or more far memory devices storing data in data units having addresses in a far memory virtual address space; storing data in data units having addresses in the near memory virtual address space and in data units having addresses in the far memory virtual address space, a portion of the data that is stored being stored in data units in both the near memory virtual address space and the far memory virtual address space; in response to a memory read access request including a virtual memory address corresponding to a data unit storing data to be accessed,
determining whether the data is stored in a near memory device, and, if so,
accessing the data from the near memory device; otherwise,
accessing the data from a far memory device via the fabric,
wherein the first chassis, drawer, tray or sled is installed in a first slot in a data center rack, and the second chassis, drawer, tray or sled is installed in a second slot in the data center rack, and wherein the one or more near memory devices and the one or more far memory devices store data in Dynamic Random Access Memory (DRAM). 2. The method of claim 1, wherein the first chassis, drawer, tray or sled is communicatively coupled to the second chassis, drawer, tray or sled via first and second fabric links coupled via a fabric switch, the first and second fabric links comprising wired cable or optical cable links. 3. The method of claim 1, wherein the far memory device from which data is accessed comprises a Dual In-line Memory Module (DIMM). 4. The method of claim 1, wherein the near memory devices are volatile memory devices and the far memory devices are non-volatile memory devices comprising Dual In-line Memory Modules (DIMMs). 5. The method of claim 4, wherein the non-volatile memory devices include three-dimensional crosspoint DIMM memory devices. 6. The method of claim 1, further comprising mapping, for each data unit in the near memory virtual address space, an address of the data unit in the near memory virtual address space to an address of the data unit in the far memory virtual address space. 7. The method of claim 1, further comprising accessing a far memory device over the fabric using a Non-volatile Memory Express over Fabric (NVMe-oF) protocol. 8. The method of claim 7, wherein the far memory device comprises a storage class memory device. 9. The method of claim 1, wherein the data units comprise memory pages. 10. The method of claim 1, wherein the one or more far memory devices comprise one or more block storage devices, and the data units comprise storage blocks. 11. A system, comprising:
one or more far memory devices including Dynamic Random Access Memory (DRAM), installed in one of a first chassis, drawer, tray or sled that is installed in a first slot in a data center rack and communicatively coupled to a fabric including a plurality of fabric links and at least one fabric switch; and a compute platform, installed in one of a second chassis, drawer, tray or sled that is installed in a second slot in the data center rack, including,
a processor having a memory controller;
one or more near memory devices, communicatively coupled to the memory controller;
a host fabric interface (HFI), communicatively coupled to the processor and communicatively coupled to the fabric,
wherein an access latency for a near memory device is less than an access latency for a far memory device, at least a portion of the memory in the one or more near memory devices configured to store data in data units having addresses in a near memory virtual address space and at least a portion of the memory in the one or more far memory devices configured to store data in data units having addresses in a far memory virtual address space, and wherein the system is configured to, store data in data units having addresses in the near memory virtual address space and in data units having addresses in the far memory virtual address space, a portion of the data that is stored being stored in data units in both the near memory virtual address space and the far memory virtual address space; in response to a memory read access request including a virtual memory address corresponding to a data unit storing data to be accessed,
determine whether the data is stored in a near memory device, and, if so,
access the data from the near memory device; otherwise,
access the data from a far memory device via the fabric. 12. The system of claim 11, wherein the far memory devices comprise Dual In-line Memory Modules (DIMMs). 13. The system of claim 12, wherein the near memory devices are volatile memory devices and the far memory devices are non-volatile DIMMs. 14. The system of claim 13, wherein the non-volatile DIMMs include three-dimensional crosspoint DIMMs. 15. The system of claim 11, wherein the plurality of fabric links comprises wired cables or optical cables. 16. The system of claim 11, wherein the system is further configured to map, for each data unit in the near memory virtual address space, an address of the data unit in the near memory virtual address space to an address of the data unit in the far memory virtual address space. 17. The system of claim 11, wherein the system is further configured to access the one or more far memory devices over the fabric using a Non-volatile Memory Express over Fabric (NVMe-oF) protocol. 18. The system of claim 11, wherein the data units comprise memory pages. 19. The system of claim 11, wherein the one or more far memory devices comprise one or more block storage devices, and the data units comprise storage blocks. 20. The system of claim 11, wherein the system includes one or more storage class memory (SCM) nodes communicatively coupled to the fabric, and wherein each SCM node includes one or more far memory devices.
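The near/far read path described above (serve the read from near memory when the data unit is present there, otherwise read from far memory and promote frequently accessed pages) can be sketched briefly. The dict-based address maps, the promotion threshold, and the page values are illustrative assumptions, not the patented design.

```python
# Hypothetical two-level memory sketch: near hits are the low-latency path;
# far reads are counted and hot pages are copied into near memory.

class TwoLevelMemory:
    def __init__(self, far_pages, near_slots, promote_after=3):
        self.far = far_pages      # far virtual address -> page data
        self.near = {}            # near copies, keyed by far virtual address
        self.far_reads = {}       # per-address far-access counts
        self.near_slots = near_slots
        self.promote_after = promote_after

    def read(self, addr):
        if addr in self.near:                 # near hit: low-latency path
            return self.near[addr]
        data = self.far[addr]                 # far read (e.g. over the fabric)
        self.far_reads[addr] = self.far_reads.get(addr, 0) + 1
        hot = self.far_reads[addr] >= self.promote_after
        if hot and len(self.near) < self.near_slots:
            self.near[addr] = data            # copy the hot page to near memory
        return data

mem = TwoLevelMemory({0x1000: b"page-A", 0x2000: b"page-B"}, near_slots=1)
for _ in range(3):
    mem.read(0x1000)
print(0x1000 in mem.near)  # prints True: the frequently read page was promoted
```

A real system would track this per memory page or storage block in hardware or a driver; the sketch only shows the decision logic the claims describe.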
6,673 | 6,673 | 15,391,116 | 2,173 | The apparatus and method of the disclosure relate to data entry and menu selection. Applications include: (a) data entry for ideographic languages, including Chinese, Japanese and Korean; (b) fast food ordering; (c) correction of documents generated by optical character recognition; and (d) computer access and speech synthesis by persons temporarily or permanently lacking normal motor capabilities. In a preferred embodiment, each option of a menu is associated respectively with a selectable region displayed adjacent an edge of a display, forming a perimeter menu and leaving a region in the center of the perimeter menu for the output of an application program. Selectable regions may be on the display, outside the display, or both. A menu option may be selected by clicking on the associated selectable region, by dwelling on it for a selection threshold period or by a cursor path toward the selectable region, or by a combination thereof. Remaining dwell time required to select a selectable region is preferably indicated by the brightness of the selectable region. Submenus of a perimeter menu may also be perimeter menus and the location of a submenu option may be foretold by the appearance of its parent menu option. Menu options may be ideographs sharing a sound, a structure or another characteristic. Ideographs, which may be homophones of one another, may be associated with colored indicating regions and selection of an ideograph may be made by speaking the name of the associated color. | 1. An apparatus comprising:
(a) a touch screen for receiving a first location on the touch screen and a second location on the touch screen, the second location occurring at a time after the first location occurs; and (b) a processor, coupled to the touch screen, for:
(1) simultaneously displaying on the touch screen:
(i) a first region, the first region having a perimeter, the perimeter having each of a first side and a second side; and
(ii) a first selectable region and a second selectable region, each of the first and the second selectable regions outside the first region, the first selectable region adjacent the first side of the perimeter, the second selectable region adjacent the second side of the perimeter; the first selectable region not intersecting the second selectable region; and
(2) selecting the first selectable region responsive to:
(i) the first location intersecting the first selectable region;
(ii) the second location intersecting the first selectable region; and
(iii) the period between the time the first location occurs and the time the second location occurs equalling or exceeding a first predetermined period. 2. The apparatus of claim 1 wherein the processor is further operative to display on the touch screen each of a first and a second menu option, the first menu option associated with the first selectable region, the second menu option associated with the second selectable region, each of the menu options displayed intersecting its associated selectable region; and wherein the processor is still further operative, responsive to the selection of the first selectable region, to select the first menu option. 3. The apparatus of claim 2 wherein the perimeter has a third side; wherein the processor is further operative to display on the touch screen, simultaneously with the display of the first selectable region and the second selectable region, a third selectable region outside the first region and adjacent the third side of the perimeter, the third selectable region not intersecting either one of the first and the second selectable regions, the third selectable region associated with a third menu option; and wherein the processor is still further operative to display the third menu option on the touch screen intersecting the third selectable region. 4. The apparatus of claim 2 wherein the processor is further capable of coupling to a device; wherein the first menu option represents a control function for the device; and wherein the processor is further operative, responsive to the selection of the first selectable region, to initiate the control function for the device. 5. The apparatus of claim 4 wherein the device is a telephone. 6. The apparatus of claim 4 wherein the device is capable of playing either one of: (a) previously recorded music, and (b) previously recorded video; and further comprising the device. 7.
The apparatus of claim 2 further comprising a program for execution on the processor, the program for displaying pages of a book; and wherein the processor is further operative to display a page of the book in the first region. 8. The apparatus of claim 7 wherein the first menu option represents a next page function; and wherein the processor, responsive to the selection of the first selectable region, is further operative to display in the first region the page of the book following the displayed page. 9. The apparatus of claim 2 wherein the touch screen is further operative to receive a third location on the touch screen and a fourth location on the touch screen, the third location occurring at a time after the second location occurs, the fourth location occurring at a time after the third location occurs; and wherein the processor is further operative, responsive to the selection of the first selectable region, to:
(1) simultaneously display on the touch screen:
(i) a second region, the second region having a perimeter, the perimeter of the second region having each of a first side and a second side; and
(ii) a third selectable region and a fourth selectable region, each of the third and the fourth selectable regions outside the second region, the third selectable region adjacent the first side of the perimeter of the second region, the fourth selectable region adjacent the second side of the perimeter of the second region; the third selectable region not intersecting the fourth selectable region; and
(2) select the third selectable region responsive to:
(i) the third location intersecting the third selectable region;
(ii) the fourth location intersecting the third selectable region; and
(iii) the period between the time the third location occurs and the time the fourth location occurs equalling or exceeding a second predetermined period. 10. The apparatus of claim 2 wherein the processor is further operative, responsive to the selection of the first menu option, to display the first menu option in the first region on the touch screen. 11. The apparatus of claim 1 wherein the touch screen is adapted to receive each of the first location and the second location responsive to movement of a digit of an operator. 12. The apparatus of claim 1 wherein the first side of the perimeter is opposite the second side of the perimeter. 13. The apparatus of claim 1 wherein the first side of the perimeter intersects the second side of the perimeter at a right angle. | The apparatus and method of the disclosure relate to data entry and menu selection. Applications include: (a) data entry for ideographic languages, including Chinese, Japanese and Korean; (b) fast food ordering; (c) correction of documents generated by optical character recognition; and (d) computer access and speech synthesis by persons temporarily or permanently lacking normal motor capabilities. In a preferred embodiment, each option of a menu is associated respectively with a selectable region displayed adjacent an edge of a display, forming a perimeter menu and leaving a region in the center of the perimeter menu for the output of an application program. Selectable regions may be on the display, outside the display, or both. A menu option may be selected by clicking on the associated selectable region, by dwelling on it for a selection threshold period or by a cursor path toward the selectable region, or by a combination thereof. Remaining dwell time required to select a selectable region is preferably indicated by the brightness of the selectable region. 
Submenus of a perimeter menu may also be perimeter menus and the location of a submenu option may be foretold by the appearance of its parent menu option. Menu options may be ideographs sharing a sound, a structure or another characteristic. Ideographs, which may be homophones of one another, may be associated with colored indicating regions and selection of an ideograph may be made by speaking the name of the associated color.1. An apparatus comprising:
(a) a touch screen for receiving a first location on the touch screen and a second location on the touch screen, the second location occurring at a time after the first location occurs; and (b) a processor, coupled to the touch screen, for:
(1) simultaneously displaying on the touch screen:
(i) a first region, the first region having a perimeter, the perimeter having each of a first side and a second side; and
(ii) a first selectable region and a second selectable region, each of the first and the second selectable regions outside the first region, the first selectable region adjacent the first side of the perimeter, the second selectable region adjacent the second side of the perimeter; the first selectable region not intersecting the second selectable region; and
(2) selecting the first selectable region responsive to:
(i) the first location intersecting the first selectable region;
(ii) the second location intersecting the first selectable region; and
(iii) the period between the time the first location occurs and the time the second location occurs equalling or exceeding a first predetermined period. 2. The apparatus of claim 1 wherein the processor is further operative to display on the touch screen each of a first and a second menu option, the first menu option associated with the first selectable region, the second menu option associated with the second selectable region, each of the menu options displayed intersecting its associated first selectable region; and wherein the processor is still further operative, responsive to the selection of the first selectable region, to select the first menu option. 3. The apparatus of claim 2 wherein the perimeter has a third side; wherein the processor is further operative to display on the touch screen, simultaneously with the display of the first selectable region and the second selectable region, a third selectable region outside the first region and adjacent the third side of the perimeter, the third selectable region not intersecting either one of the first and the second selectable regions, the third selectable region associated with a third menu option; and wherein the processor is still further operative to display the third menu option on the touch screen intersecting the third selectable region. 4. The apparatus of claim 2 wherein the processor is further capable of coupling to a device; wherein the first menu option represents a control function for the device; and wherein the processor is further operative, responsive to the selection of the first selectable region, to initiate the control function for the device. 5. The apparatus of claim 4 wherein the device is a telephone. 6. The apparatus of claim 4 wherein the device is capable of playing either one of: (a) previously recorded music, and (b) previously recorded video; and further comprising the device. 7. 
The apparatus of claim 2 further comprising a program for execution on the processor, the program for displaying pages of a book; and wherein the processor is further operative to display a page of the book in the first region. 8. The apparatus of claim 7 wherein the first menu option represents a next page function; and wherein the processor, responsive to the selection of the first selectable region, is further operative to display in the first region the page of the book following the displayed page. 9. The apparatus of claim 2 wherein the touch screen is further operative to receive a third location on the touch screen and a fourth location on the touch screen, the third location occurring at a time after the second location occurs, the fourth location occurring at a time after the third location occurs; and wherein the processor is further operative, responsive to the selection of the first selectable region, to:
(1) simultaneously display on the touch screen:
(i) a second region, the second region having a perimeter, the perimeter of the second region having each of a first side and a second side; and
(ii) a third selectable region and a fourth selectable region, each of the third and the fourth selectable regions outside the second region, the third selectable region adjacent the first side of the perimeter of the second region, the fourth selectable region adjacent the second side of the perimeter of the second region; the third selectable region not intersecting the fourth selectable region; and
(2) select the third selectable region responsive to:
(i) the third location intersecting the third selectable region;
(ii) the fourth location intersecting the third selectable region; and
(iii) the period between the time the third location occurs and the time the fourth location occurs equalling or exceeding a second predetermined period. 10. The apparatus of claim 2 wherein the processor is further operative, responsive to the selection of the first menu option, to display the first menu option in the first region on the touch screen. 11. The apparatus of claim 1 wherein the touch screen is adapted to receive each of the first location and the second location responsive to movement of a digit of an operator. 12. The apparatus of claim 1 wherein the first side of the perimeter is opposite the second side of the perimeter. 13. The apparatus of claim 1 wherein the first side of the perimeter intersects the second side of the perimeter at a right angle. | 2,100 |
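The touch-screen claims above (application 15/901,119-style perimeter-menu selection) describe a concrete rule: a selectable region is selected when a first and a second touch location both intersect it and the elapsed time between them equals or exceeds a predetermined dwell period. Below is a minimal Python sketch of that selection test; the `Rect` type and all names are illustrative assumptions, not drawn from the patent itself.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Rect:
    """Axis-aligned selectable region (left, top, right, bottom)."""
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom


def is_selected(region: Rect,
                first: tuple, t_first: float,
                second: tuple, t_second: float,
                dwell_period: float) -> bool:
    """Select the region only when (i) the first location intersects it,
    (ii) the second location intersects it, and (iii) the period between
    the two touches equals or exceeds the predetermined dwell period."""
    return (region.contains(*first)
            and region.contains(*second)
            and (t_second - t_first) >= dwell_period)
```

A "digit of an operator" dwelling on a menu option would repeatedly report locations inside the same region; the processor fires the associated menu option (e.g. a next-page function) once this predicate first becomes true.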
6,674 | 6,674 | 16,426,722 | 2,133 | A method for configuring a computer system memory, includes powering on the computer system; retrieving options for initializing the computer system; assigning to a first segment of the memory a first pre-defined setting; assigning to a second segment of the memory a second pre-defined setting; and booting the computer system. | 1. A method for configuring a memory, comprising:
powering on a computer system associated with the memory; retrieving options for configuring the memory; assigning a first reliability setting to a first segment of the memory that is defined by a first starting address and a first length; assigning a second reliability setting to a second segment of the memory that is defined by a second starting address and a second length, wherein the second reliability setting indicates greater reliability than the first reliability setting; allocating the second segment of memory to an operating system kernel; and booting the computer system. 2. The method according to claim 1, wherein the first reliability setting and the second reliability setting are based on a resiliency, accessibility, serviceability (RAS) standard. 3. The method according to claim 1, further comprising applying double device data correction (DDDC) to the second segment of the memory based on the second reliability setting. 4. The method according to claim 1, further comprising applying single device data correction (SDDC) to the second segment of the memory based on the second reliability setting. 5. The method according to claim 1, further comprising mirroring the second segment of the memory based on the second reliability setting. 6. The method according to claim 5, further comprising:
detecting a failure of a portion of the second segment of the memory; and reconfiguring the second segment of the memory to stop the mirroring in response to the failure. 7. The method according to claim 6, further comprising:
receiving, from a user via a user interface, an indication of a performance goal for an application; allocating the first segment of the memory to the application based on the performance goal and the first reliability setting; and presenting an alert to the user indicating that the allocating of the first segment of the memory will take effect after the computer system associated with the memory is rebooted. 8. A computer system comprising:
a processor; and a memory containing instructions thereon that, when executed by the processor, cause the processor to perform a set of actions comprising:
powering on the computer system,
retrieving options for configuring the memory,
assigning a first reliability setting to a first segment of the memory that is defined by a first starting address and a first length,
assigning a second reliability setting to a second segment of the memory that is defined by a second starting address and a second length, wherein the second reliability setting indicates greater reliability than the first reliability setting,
allocating the second segment of memory to an operating system kernel, and
booting the computer system. 9. The computer system of claim 8, wherein the first reliability setting and the second reliability setting are based on a resiliency, accessibility, serviceability (RAS) standard. 10. The computer system of claim 8, wherein the set of actions further comprises:
applying double device data correction (DDDC) to the second segment of the memory based on the second reliability setting. 11. The computer system of claim 8, wherein the set of actions further comprises:
applying single device data correction (SDDC) to the second segment of the memory based on the second reliability setting. 12. The computer system of claim 8, wherein the set of actions further comprises:
mirroring the second segment of the memory based on the second reliability setting. 13. A non-transitory computer readable storage medium comprising programming executable as machine instructions by a processor, wherein executing the programming causes the processor to:
retrieve options for configuring a memory associated with a computer system; assign a first reliability setting to a first segment of the memory that is defined by a first starting address and a first length; assign a second reliability setting to a second segment of the memory that is defined by a second starting address and a second length, wherein the second reliability setting indicates greater reliability than the first reliability setting; allocate the second segment of memory to an operating system kernel, and boot the computer system. 14. The non-transitory computer readable storage medium of claim 13, wherein the first reliability setting and the second reliability setting are based on a resiliency, accessibility, serviceability (RAS) standard. 15. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to apply double device data correction (DDDC) to the second segment of the memory based on the second reliability setting. 16. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to apply single device data correction (SDDC) to the second segment of the memory based on the second reliability setting. 17. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to mirror the second segment of the memory based on the second reliability setting. 18. The non-transitory computer readable storage medium of claim 17, wherein executing the programming further causes the processor to:
detect a failure of a portion of the second segment of the memory; and reconfigure the second segment of the memory to stop the mirroring in response to the failure. 19. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to:
receive, from a user via a user interface, an indication of a performance goal for an application; and allocate the first segment of the memory to the application based on the performance goal and the first reliability setting. 20. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to:
present an alert to the user indicating that the allocating of the first segment of the memory will take effect after the computer system is rebooted. | A method for configuring a computer system memory, includes powering on the computer system; retrieving options for initializing the computer system; assigning to a first segment of the memory a first pre-defined setting; assigning to a second segment of the memory a second pre-defined setting; and booting the computer system.1. A method for configuring a memory, comprising:
powering on a computer system associated with the memory; retrieving options for configuring the memory; assigning a first reliability setting to a first segment of the memory that is defined by a first starting address and a first length; assigning a second reliability setting to a second segment of the memory that is defined by a second starting address and a second length, wherein the second reliability setting indicates greater reliability than the first reliability setting; allocating the second segment of memory to an operating system kernel; and booting the computer system. 2. The method according to claim 1, wherein the first reliability setting and the second reliability setting are based on a resiliency, accessibility, serviceability (RAS) standard. 3. The method according to claim 1, further comprising applying double device data correction (DDDC) to the second segment of the memory based on the second reliability setting. 4. The method according to claim 1, further comprising applying single device data correction (SDDC) to the second segment of the memory based on the second reliability setting. 5. The method according to claim 1, further comprising mirroring the second segment of the memory based on the second reliability setting. 6. The method according to claim 5, further comprising:
detecting a failure of a portion of the second segment of the memory; and reconfiguring the second segment of the memory to stop the mirroring in response to the failure. 7. The method according to claim 6, further comprising:
receiving, from a user via a user interface, an indication of a performance goal for an application; allocating the first segment of the memory to the application based on the performance goal and the first reliability setting; and presenting an alert to the user indicating that the allocating of the first segment of the memory will take effect after the computer system associated with the memory is rebooted. 8. A computer system comprising:
a processor; and a memory containing instructions thereon that, when executed by the processor, cause the processor to perform a set of actions comprising:
powering on the computer system,
retrieving options for configuring the memory,
assigning a first reliability setting to a first segment of the memory that is defined by a first starting address and a first length,
assigning a second reliability setting to a second segment of the memory that is defined by a second starting address and a second length, wherein the second reliability setting indicates greater reliability than the first reliability setting,
allocating the second segment of memory to an operating system kernel, and
booting the computer system. 9. The computer system of claim 8, wherein the first reliability setting and the second reliability setting are based on a resiliency, accessibility, serviceability (RAS) standard. 10. The computer system of claim 8, wherein the set of actions further comprises:
applying double device data correction (DDDC) to the second segment of the memory based on the second reliability setting. 11. The computer system of claim 8, wherein the set of actions further comprises:
applying single device data correction (SDDC) to the second segment of the memory based on the second reliability setting. 12. The computer system of claim 8, wherein the set of actions further comprises:
mirroring the second segment of the memory based on the second reliability setting. 13. A non-transitory computer readable storage medium comprising programming executable as machine instructions by a processor, wherein executing the programming causes the processor to:
retrieve options for configuring a memory associated with a computer system; assign a first reliability setting to a first segment of the memory that is defined by a first starting address and a first length; assign a second reliability setting to a second segment of the memory that is defined by a second starting address and a second length, wherein the second reliability setting indicates greater reliability than the first reliability setting; allocate the second segment of memory to an operating system kernel, and boot the computer system. 14. The non-transitory computer readable storage medium of claim 13, wherein the first reliability setting and the second reliability setting are based on a resiliency, accessibility, serviceability (RAS) standard. 15. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to apply double device data correction (DDDC) to the second segment of the memory based on the second reliability setting. 16. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to apply single device data correction (SDDC) to the second segment of the memory based on the second reliability setting. 17. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to mirror the second segment of the memory based on the second reliability setting. 18. The non-transitory computer readable storage medium of claim 17, wherein executing the programming further causes the processor to:
detect a failure of a portion of the second segment of the memory; and reconfigure the second segment of the memory to stop the mirroring in response to the failure. 19. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to:
receive, from a user via a user interface, an indication of a performance goal for an application; and allocate the first segment of the memory to the application based on the performance goal and the first reliability setting. 20. The non-transitory computer readable storage medium of claim 13, wherein executing the programming further causes the processor to:
present an alert to the user indicating that the allocating of the first segment of the memory will take effect after the computer system is rebooted. | 2,100 |
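The memory-configuration claims above define segments by a starting address and a length, assign each a reliability setting, and allocate the segment with the greater reliability to the operating-system kernel. The Python sketch below models just that bookkeeping; the class and function names, and the integer encoding of reliability levels, are assumptions for illustration (real firmware would express this through platform-specific RAS/mirroring controls).

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MemorySegment:
    start: int        # starting address
    length: int       # segment length in bytes
    reliability: int  # higher value = greater reliability (e.g. mirrored)


def configure_memory(total_bytes: int, kernel_bytes: int,
                     high_reliability: int = 2,
                     normal_reliability: int = 1) -> dict:
    """Split memory into two segments, each defined by a starting address
    and a length, and reserve the more reliable segment for the kernel."""
    if not 0 < kernel_bytes < total_bytes:
        raise ValueError("kernel segment must fit inside total memory")
    kernel_seg = MemorySegment(0, kernel_bytes, high_reliability)
    app_seg = MemorySegment(kernel_bytes, total_bytes - kernel_bytes,
                            normal_reliability)
    # Claim 1 requires the kernel's setting to indicate greater reliability.
    assert kernel_seg.reliability > app_seg.reliability
    return {"kernel": kernel_seg, "applications": app_seg}
```

For example, `configure_memory(1024, 256)` yields a mirrored-style kernel segment at address 0 of length 256 and a normal segment covering the remaining 768 bytes; the claims then boot the system with this layout in effect.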
6,675 | 6,675 | 16,564,217 | 2,194 | In an embodiment, an operating system provides a port group service that permits two or more ports to be bound together as a port group. A thread may listen for messages and/or events on the port group, and thus may receive a message/event from any of the ports in the port group and may process that message/event. Threads that send messages/events (“sending threads”) may send a message/event to a port in the port group, and the messages/events received on the various ports may be processed according to a queue policy for the ports in the port group. Messages/events may be transmitted from the ports to a listening thread (a “receiving thread”) using a receive policy that determines the priority at which the receiving thread is to execute to process the message/event. | 1. A non-transitory computer accessible storage medium storing a plurality of instructions that are computer-executable to cause the computer to:
receive a message on a first port that is configured into a port group with one or more second ports; enqueue the message in a queue associated with the port group according to a queue policy associated with the first port, wherein the queue is also used for messages from the one or more second ports; dequeue the message to a receiving thread based on a receive policy associated with the first port. 2. The non-transitory computer accessible storage medium as recited in claim 1 wherein the queue policy is first in, first out, and wherein messages received on the first port are processed in the order received on the port. 3. The non-transitory computer accessible storage medium as recited in claim 2 wherein a second queue policy associated with at least one of the one or more second ports is priority, and wherein the messages received on the first port are processed with respect to messages received on the at least one of the one or more second ports based on a relative priority of the first port to the at least one of the one or more second ports. 4. The non-transitory computer accessible storage medium as recited in claim 1 wherein the queue policy is priority, and wherein the message is processed with respect to messages received on the one or more other ports based on a relative priority of the first port to the one or more second ports. 5. The non-transitory computer accessible storage medium as recited in claim 4 wherein messages having a same priority are processed in the order the messages are received. 6. The non-transitory computer accessible storage medium as recited in claim 1 wherein the receive policy specifies a priority at which the receiving thread executes to process the message. 7. The non-transitory computer accessible storage medium as recited in claim 6 wherein the receive policy causes the receiving thread to execute at the receiving thread's current priority. 8. 
The non-transitory computer accessible storage medium as recited in claim 7 wherein the plurality of instructions, when executed, reset the receiving thread's current priority to an initially-assigned priority to process the message. 9. The non-transitory computer accessible storage medium as recited in claim 7 wherein the plurality of instructions, when executed, reset the receiving thread's current priority to a most-recently changed priority to process the message. 10. The non-transitory computer accessible storage medium as recited in claim 7 wherein the receiving thread's current priority is a temporary priority used for processing a previous message that has not been completed. 11. The non-transitory computer accessible storage medium as recited in claim 6 wherein the receive policy causes the receiving thread to execute at a priority assigned to the first port. 12. The non-transitory computer accessible storage medium as recited in claim 6 wherein the receive policy causes the receiving thread to execute at a priority assigned to a source thread that transmitted the message to the first port. 13. The non-transitory computer accessible storage medium as recited in claim 12 wherein the port group further supports a ceiling that limits the priority to no more than a maximum priority. 14. The non-transitory computer accessible storage medium as recited in claim 12 wherein the port group further supports a floor that limits the priority to no less than a minimum priority. 15. A computer system comprising:
one or more processors; and a non-transitory computer accessible storage medium storing a plurality of instructions that are executable on the one or more processors to cause the computer system to:
receive a message on a first port that is configured into a port group with one or more second ports;
enqueue the message in a queue associated with the port group according to a queue policy associated with the first port, wherein the queue is also used for messages from the one or more second ports;
dequeue the message to a receiving thread based on a receive policy associated with the first port. 16. The computer system as recited in claim 15 wherein the one or more processors execute the receiving thread. 17. The computer system as recited in claim 15 wherein the one or more processors execute a source thread that transmits the message to the first port. 18. A method comprising:
receiving a message on a first port that is configured into a port group with one or more second ports in a computer system; enqueuing the message in a queue associated with the port group according to a queue policy associated with the first port, wherein the queue is also used for messages from the one or more second ports; dequeuing the message to a receiving thread based on a receive policy associated with the first port. 19. The method as recited in claim 18 wherein the queue policy is first in, first out. 20. The method as recited in claim 18 wherein the queue policy is priority. 21. The method as recited in claim 18 wherein the receive policy specifies a priority at which the receiving thread executes to process the message. | In an embodiment, an operating system provides a port group service that permits two or more ports to be bound together as a port group. A thread may listen for messages and/or events on the port group, and thus may receive a message/event from any of the ports in the port group and may process that message/event. Threads that send messages/events (“sending threads”) may send a message/event to a port in the port group, and the messages/events received on the various ports may be processed according to a queue policy for the ports in the port group. Messages/events may be transmitted from the ports to a listening thread (a “receiving thread”) using a receive policy that determines the priority at which the receiving thread is to execute to process the message/event. 1. A non-transitory computer accessible storage medium storing a plurality of instructions that are computer-executable to cause the computer to:
receive a message on a first port that is configured into a port group with one or more second ports; enqueue the message in a queue associated with the port group according to a queue policy associated with the first port, wherein the queue is also used for messages from the one or more second ports; dequeue the message to a receiving thread based on a receive policy associated with the first port. 2. The non-transitory computer accessible storage medium as recited in claim 1 wherein the queue policy is first in, first out, and wherein messages received on the first port are processed in the order received on the port. 3. The non-transitory computer accessible storage medium as recited in claim 2 wherein a second queue policy associated with at least one of the one or more second ports is priority, and wherein the messages received on the first port are processed with respect to messages received on the at least one of the one or more second ports based on a relative priority of the first port to the at least one of the one or more second ports. 4. The non-transitory computer accessible storage medium as recited in claim 1 wherein the queue policy is priority, and wherein the message is processed with respect to messages received on the one or more other ports based on a relative priority of the first port to the one or more second ports. 5. The non-transitory computer accessible storage medium as recited in claim 4 wherein messages having a same priority are processed in the order the messages are received. 6. The non-transitory computer accessible storage medium as recited in claim 1 wherein the receive policy specifies a priority at which the receiving thread executes to process the message. 7. The non-transitory computer accessible storage medium as recited in claim 6 wherein the receive policy causes the receiving thread to execute at the receiving thread's current priority. 8. 
The non-transitory computer accessible storage medium as recited in claim 7 wherein the plurality of instructions, when executed, reset the receiving thread's current priority to an initially-assigned priority to process the message. 9. The non-transitory computer accessible storage medium as recited in claim 7 wherein the plurality of instructions, when executed, reset the receiving thread's current priority to a most-recently changed priority to process the message. 10. The non-transitory computer accessible storage medium as recited in claim 7 wherein the receiving thread's current priority is a temporary priority used for processing a previous message that has not been completed. 11. The non-transitory computer accessible storage medium as recited in claim 6 wherein the receive policy causes the receiving thread to execute at a priority assigned to the first port. 12. The non-transitory computer accessible storage medium as recited in claim 6 wherein the receive policy causes the receiving thread to execute at a priority assigned to a source thread that transmitted the message to the first port. 13. The non-transitory computer accessible storage medium as recited in claim 12 wherein the port group further supports a ceiling that limits the priority to no more than a maximum priority. 14. The non-transitory computer accessible storage medium as recited in claim 12 wherein the port group further supports a floor that limits the priority to no less than a minimum priority. 15. A computer system comprising:
one or more processors; and a non-transitory computer accessible storage medium storing a plurality of instructions that are executable on the one or more processors to cause the computer system to:
receive a message on a first port that is configured into a port group with one or more second ports;
enqueue the message in a queue associated with the port group according to a queue policy associated with the first port, wherein the queue is also used for messages from the one or more second ports;
dequeue the message to a receiving thread based on a receive policy associated with the first port. 16. The computer system as recited in claim 15 wherein the one or more processors execute the receiving thread. 17. The computer system as recited in claim 15 wherein the one or more processors execute a source thread that transmits the message to the first port. 18. A method comprising:
receiving a message on a first port that is configured into a port group with one or more second ports in a computer system; enqueuing the message in a queue associated with the port group according to a queue policy associated with the first port, wherein the queue is also used for messages from the one or more second ports; dequeuing the message to a receiving thread based on a receive policy associated with the first port. 19. The method as recited in claim 18 wherein the queue policy is first in, first out. 20. The method as recited in claim 18 wherein the queue policy is priority. 21. The method as recited in claim 18 wherein the receive policy specifies a priority at which the receiving thread executes to process the message. | 2,100 |
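As a rough illustration of claims 1-5 above, the following Python sketch models a port group whose member ports feed one shared queue, with per-port queue policies (first-in-first-out or priority) governing dequeue order. All class and method names here are hypothetical; the claims do not prescribe an API, and the receive-policy side (thread priority selection) is omitted for brevity.

```python
import heapq
import itertools


class Port:
    """A port configured into a port group (names are illustrative)."""
    def __init__(self, name, priority=0, queue_policy="fifo"):
        self.name = name
        self.priority = priority          # relative priority of the port
        self.queue_policy = queue_policy  # "fifo" or "priority"


class PortGroup:
    """One shared queue serves every port configured into the group."""
    def __init__(self):
        self._heap = []
        # Tie-breaker so messages of equal priority dequeue in arrival
        # order, matching claims 2 and 5.
        self._seq = itertools.count()

    def enqueue(self, port, message):
        # FIFO ports all sort at a neutral key, so among themselves they
        # dequeue in arrival order; a priority port sorts ahead of them
        # according to its relative port priority (claims 3 and 4).
        key = 0 if port.queue_policy == "fifo" else -port.priority
        heapq.heappush(self._heap, (key, next(self._seq), port.name, message))

    def dequeue(self):
        # Hand the next message to a receiving thread.
        _, _, port_name, message = heapq.heappop(self._heap)
        return port_name, message
```

For example, a message enqueued on a priority-5 port dequeues ahead of earlier FIFO traffic, while two FIFO messages keep their arrival order.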
6,676 | 6,676 | 16,412,651 | 2,196 | Techniques are described for performing browser-driven application capture of application installations. When the browser on the client machine detects a request to begin an application capture session, it downloads an orchestrator binary from an origin server. The orchestrator is a self-extracting executable that decompresses components responsible for preparing the client machine for the application capture session. Preparing the client machine includes starting a local web server, executing a registry script to create the necessary registry state, mounting a virtual disk, and deploying an agent that will record state changes on the client machine. Once the client machine has been prepared, the application installation can begin. During the installation process, the agent intercepts state changes occurring on the client machine and redirects them to the virtual disk. Once finished, the application capture session is completed by adding identity and metadata information to the virtual disk to generate the application package. | 1. A method for performing a web-based capture of an application installation, the method comprising:
detecting, by a web browser on a client machine, a request to begin an application capture session; downloading an orchestrator binary executable to the client machine from an origin server, the orchestrator binary executable configured to prepare the client machine for capturing the application installation by
mounting a virtual disk on the client machine, and
deploying an agent configured to record state changes on the client machine;
detecting that the application installation has started on the client machine; intercepting the state changes occurring on the client machine during the application installation by the agent and redirecting the state changes to be stored in the virtual disk mounted on the client machine; determining that the application installation is complete; and completing the application capture session by generating an application package from the virtual disk, wherein the application package is deployable to execute the application on computing devices different from the client machine. 2. The method of claim 1, wherein determining that the application installation is complete further comprises:
restarting the client machine; launching the orchestrator binary executable after the client machine is restarted; detecting, by the orchestrator binary executable, that the application capture session is in progress; and generating the application package from the virtual disk by adding identity and metadata information to the application package. 3. The method of claim 1, wherein preparing the client machine for capturing the application installation further comprises:
launching a web server on the client machine, the web server configured to communicate with the web browser on the client machine in order to orchestrate the application capture session. 4. The method of claim 3, wherein the web server is further configured to:
receive a script request from the web browser; return a response containing a script executable on the web page displayed by the web browser, the script configured to convey status messages related to the state changes occurring during the application capture session to the web browser. 5. The method of claim 1, wherein preparing the client machine for capturing the application installation further comprises:
executing a script to create a registry state required for capturing the application installation; and registering and activating a file system driver and a system-service daemon that comprise the agent, wherein the file system driver is configured to intercept filesystem and registry operations occurring during the application installation and to redirect and store the filesystem and registry operations to the virtual disk. 6. The method of claim 1, wherein the request to begin the application capture session is received on a web page displayed by the web browser on the client machine, wherein the web page displays a catalog of remote desktop applications and web applications. 7. The method of claim 1, further comprising:
deploying the application package to a second client machine by mounting the virtual disk to the second client machine, wherein the application is executable on the second client machine without executing an installer of the application on the second client machine. 8. A computing system, comprising:
at least one processor; and memory including instructions that, when executed by the at least one processor, cause the computing system to perform the operations of:
detecting, by a web browser on a client machine, a request to begin an application capture session;
downloading an orchestrator binary executable to the client machine from an origin server, the orchestrator binary executable configured to prepare the client machine for capturing the application installation by
mounting a virtual disk on the client machine, and
deploying an agent configured to record state changes on the client machine;
detecting that the application installation has started on the client machine;
intercepting the state changes occurring on the client machine during the application installation by the agent and redirecting the state changes to be stored in the virtual disk mounted on the client machine;
determining that the application installation is complete; and
completing the application capture session by generating an application package from the virtual disk, wherein the application package is deployable to execute the application on computing devices different from the client machine. 9. The computing system of claim 8, wherein determining that the application installation is complete further comprises:
restarting the client machine; launching the orchestrator binary executable after the client machine is restarted; detecting, by the orchestrator binary executable, that the application capture session is in progress; and generating the application package from the virtual disk by adding identity and metadata information to the application package. 10. The computing system of claim 8, wherein preparing the client machine for capturing the application installation further comprises:
launching a web server on the client machine, the web server configured to communicate with the web browser on the client machine in order to orchestrate the application capture session. 11. The computing system of claim 10, wherein the web server is further configured to:
receive a script request from the web browser; return a response containing a script executable on the web page displayed by the web browser, the script configured to convey status messages related to the state changes occurring during the application capture session to the web browser. 12. The computing system of claim 8, wherein preparing the client machine for capturing the application installation further comprises:
executing a script to create a registry state required for capturing the application installation; and registering and activating a file system driver and a system-service daemon that comprise the agent, wherein the file system driver is configured to intercept filesystem and registry operations occurring during the application installation and to redirect and store the filesystem and registry operations to the virtual disk. 13. The computing system of claim 8, wherein the request to begin the application capture session is received on a web page displayed by the web browser on the client machine, wherein the web page displays a catalog of remote desktop applications and web applications. 14. The computing system of claim 8, wherein the memory further comprises instructions that, when executed by the at least one processor, cause the computing system to perform the operations of:
deploying the application package to a second client machine by mounting the virtual disk to the second client machine, wherein the application is executable on the second client machine without executing an installer of the application on the second client machine. 15. A non-transitory computer readable storage medium comprising one or more sequences of instructions, the instructions when executed by one or more processors causing the one or more processors to execute the operations of:
detecting, by a web browser on a client machine, a request to begin an application capture session; downloading an orchestrator binary executable to the client machine from an origin server, the orchestrator binary executable configured to prepare the client machine for capturing the application installation by
mounting a virtual disk on the client machine, and
deploying an agent configured to record state changes on the client machine;
detecting that the application installation has started on the client machine; intercepting the state changes occurring on the client machine during the application installation by the agent and redirecting the state changes to be stored in the virtual disk mounted on the client machine; determining that the application installation is complete; and completing the application capture session by generating an application package from the virtual disk, wherein the application package is deployable to execute the application on computing devices different from the client machine. 16. The non-transitory computer readable storage medium of claim 15, wherein determining that the application installation is complete further comprises:
restarting the client machine; launching the orchestrator binary executable after the client machine is restarted; detecting, by the orchestrator binary executable, that the application capture session is in progress; and generating the application package from the virtual disk by adding identity and metadata information to the application package. 17. The non-transitory computer readable storage medium of claim 15, wherein preparing the client machine for capturing the application installation further comprises:
launching a web server on the client machine, the web server configured to communicate with the web browser on the client machine in order to orchestrate the application capture session. 18. The non-transitory computer readable storage medium of claim 17, wherein the web server is further configured to:
receive a script request from the web browser; return a response containing a script executable on the web page displayed by the web browser, the script configured to convey status messages related to the state changes occurring during the application capture session to the web browser. 19. The non-transitory computer readable storage medium of claim 15, wherein preparing the client machine for capturing the application installation further comprises:
executing a script to create a registry state required for capturing the application installation; and registering and activating a file system driver and a system-service daemon that comprise the agent, wherein the file system driver is configured to intercept filesystem and registry operations occurring during the application installation and to redirect and store the filesystem and registry operations to the virtual disk. 20. The non-transitory computer readable storage medium of claim 15, wherein the request to begin the application capture session is received on a web page displayed by the web browser on the client machine, wherein the web page displays a catalog of remote desktop applications and web applications.
6,677 | 6,677 | 15,248,275 | 2,159 | A reduced version of a search query can be pre-applied to limit the search scope. A query processor can maintain one or more metadata structures for a structured data store where each metadata structure is based on a single field of documents that are stored in the structured data store. When a search query is received, the query processor can generate a reduced version of the search query to be run against one of the metadata structures. The results of running the reduced version of the search query will identify which of the portions of the structured data store the full search query should be run against. In this way, the query processor can avoid loading and evaluating the search query against all portions of the structured data store. | 1. In a server system that includes a query processor for running search queries against a structured data store containing a plurality of portions that store documents having a plurality of fields including a first field, a method, performed by the query processor, for identifying a subset of the portions against which a search query should be run, the method comprising:
maintaining a first metadata structure that includes a metadata portion for each portion in the structured data store, each metadata portion identifying values of the first field that exist in the corresponding portion; receiving a first search query that includes the first field as a parameter as well as one or more other fields of the plurality of fields as parameters; generating a reduced version of the first search query that does not include the one or more other fields; running the reduced version of the first search query against the metadata portions of the first metadata structure to identify which metadata portions match the reduced version of the first search query; and running the first search query against a subset of the portions in the structured data store, the subset including only portions of the structured data store that correspond to a metadata portion identified by running the reduced version of the first search query. 2. The method of claim 1, wherein each metadata portion stores metadocuments having the first field, each metadocument storing a value of the first field that is the same as a value of the first field in one or more documents in the corresponding portion of the structured data store. 3. The method of claim 1, wherein each metadata portion also identifies values of one or more additional fields of the plurality of fields, the first search query also includes the one or more additional fields, and the reduced version of the first search query includes the one or more additional fields. 4. The method of claim 1, wherein the first search query includes multiple instances of the first field. 5. The method of claim 4, wherein the multiple instances of the first field are combined with Boolean logic. 6. The method of claim 1, wherein generating a reduced version of the first search query that does not include the one or more other fields comprises substituting a value of each of the one or more other fields with a neutral value. 7. 
The method of claim 1, wherein generating a reduced version of the first search query that does not include the one or more other fields comprises:
converting the first search query into a logical tree; substituting a neutral value for a value of each of the one or more other fields; reducing the logical tree by removing from the logical tree any occurrence where a neutral value is combined with a value of the first field using a logical OR; and converting the reduced logical tree into the reduced version of the first search query. 8. The method of claim 1, wherein the plurality of fields includes a second field, the method further comprising:
maintaining a second metadata structure that includes a metadata portion for each portion in the structured data store, each metadata portion in the second metadata structure identifying values of the second field that exist in the corresponding portion; receiving a second search query that includes the second field as a parameter as well as one or more other fields of the plurality of fields as parameters; generating a reduced version of the second search query that does not include the one or more other fields included in the second search query; running the reduced version of the second search query against the metadata portions of the second metadata structure to identify which metadata portions match the reduced version of the second search query; and running the second search query against a subset of the portions in the structured data store, the subset including only portions of the structured data store that correspond to a metadata portion identified by running the reduced version of the second search query. 9. The method of claim 1, wherein the plurality of fields includes a second field and the first search query includes the second field as a parameter, the method further comprising:
maintaining a second metadata structure that includes a metadata portion for each portion in the structured data store, each metadata portion in the second metadata structure identifying values of the second field that exist in the corresponding portion; generating a second reduced version of the first search query that does not include the first field or the one or more other fields; and running the second reduced version of the first search query against the metadata portions of the second metadata structure to identify which metadata portions match the second reduced version of the first search query; wherein the subset of portions against which the first search query is run includes only portions of the structured data store that correspond to a metadata portion that was identified by both the reduced version of the first search query and the second reduced version of the first search query. 10. The method of claim 1, further comprising:
updating the metadata structure in response to an update to the structured data store. 11. The method of claim 10, wherein updating the metadata structure comprises adding a metadocument to or removing a metadocument from a metadata portion in response to a corresponding document being added to or removed from a corresponding portion. 12. The method of claim 10, wherein updating the metadata structure comprises adding a metadata portion in response to a portion being added to the structured data store. 13. One or more computer storage media storing computer executable instructions which, when executed on a server system that includes a query processor for running search queries against a structured data store containing a plurality of portions that store documents having a plurality of fields including a first field, perform a method for identifying a subset of the portions against which a search query should be run, the method comprising:
maintaining a first metadata structure that includes a metadata portion for each portion in the structured data store, each metadata portion storing metadocuments corresponding to documents in the corresponding portion, each metadocument including only the first field; receiving a first search query that includes the first field as a parameter as well as one or more other fields of the plurality of fields as parameters; generating a reduced version of the first search query that does not include the one or more other fields; running the reduced version of the first search query against the metadata portions of the first metadata structure to identify which metadata portions match the reduced version of the first search query; and running the first search query against a subset of the portions in the structured data store, the subset including only portions of the structured data store that correspond to a metadata portion identified by running the reduced version of the first search query. 14. The computer storage media of claim 13, wherein generating a reduced version of the first search query that does not include the one or more other fields comprises removing any occurrences of the first field that are combined with another field using a logical OR. 15. The computer storage media of claim 13, wherein generating a reduced version of the first search query that does not include the one or more other fields comprises maintaining any occurrences of the first field that are combined with another field using a logical AND. 16. The computer storage media of claim 13, wherein the plurality of fields includes a second field, the method further comprising:
maintaining a second metadata structure that includes a metadata portion for each portion in the structured data store, each metadata portion in the second metadata structure identifying values of the second field that exist in the corresponding portion; receiving a second search query that includes the second field as a parameter as well as one or more other fields of the plurality of fields as parameters; generating a reduced version of the second search query that does not include the one or more other fields included in the second search query; running the reduced version of the second search query against the metadata portions of the second metadata structure to identify which metadata portions match the reduced version of the second search query; and running the second search query against a subset of the portions in the structured data store, the subset including only portions of the structured data store that correspond to a metadata portion identified by running the reduced version of the second search query. 17. The computer storage media of claim 16 wherein each metadata portion in the second metadata structure also identifies values of another field that exist in the corresponding portion; and
wherein the second search query includes the other field as a parameter such that generating a reduced version of the second search query comprises including the other field in the reduced version of the second search query. 18. A server system comprising:
an indexed store containing a plurality of segments that store documents having a plurality of fields including a first field; a first metadata structure that includes a metadata segment for each segment in the indexed store, each metadata segment identifying values of the first field that exist in the corresponding segment; and a query processor for running search queries against the indexed store; wherein the query processor is configured to identify a subset of the segments of the indexed store against which the search queries should be run by performing the following:
in response to receiving a search query that includes the first field as a parameter as well as one or more other fields of the plurality of fields as parameters, generating a reduced version of the search query that does not include the one or more other fields; and
running the reduced version of the search query against the metadata segments of the first metadata structure to identify which metadata segments match the reduced version of the search query. 19. The server system of claim 18, wherein the plurality of fields includes a second field, the server system further including:
a second metadata structure that includes a metadata segment for each segment in the indexed store, each metadata segment identifying values of the second field that exist in the corresponding segment. 20. The server system of claim 18, wherein the plurality of fields and the search query include one or more additional fields such that the reduced version of the search query also includes the one or more additional fields.
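The query reduction recited in claims 6, 7, 14, and 15 above (substitute a neutral value for every field other than the indexed one, drop neutral values combined by logical AND, and treat any logical OR containing a neutral value as matching everything) can be sketched in Python. The tree representation, names, and per-portion metadata layout below are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical query tree: leaves are (field, value) terms,
# inner nodes combine children with AND / OR.
@dataclass
class Term:
    field: str
    value: str

@dataclass
class Node:
    op: str          # "AND" or "OR"
    children: list

NEUTRAL = None  # stands in for "matches everything"

def reduce_query(q, keep_field):
    """Substitute a neutral value for every field other than keep_field,
    then prune: an OR containing a neutral child is itself neutral,
    while an AND simply drops its neutral children."""
    if isinstance(q, Term):
        return q if q.field == keep_field else NEUTRAL
    reduced = [reduce_query(c, keep_field) for c in q.children]
    if q.op == "OR":
        if any(r is NEUTRAL for r in reduced):
            return NEUTRAL  # neutral OR anything matches everything
        return Node("OR", reduced)
    kept = [r for r in reduced if r is not NEUTRAL]
    if not kept:
        return NEUTRAL
    return Node("AND", kept) if len(kept) > 1 else kept[0]

def matches(q, values):
    """Evaluate a reduced query against the set of keep_field values
    recorded in one metadata portion."""
    if q is NEUTRAL:
        return True
    if isinstance(q, Term):
        return q.value in values
    results = [matches(c, values) for c in q.children]
    return all(results) if q.op == "AND" else any(results)

# Per-portion metadata: which "author" values exist in each portion.
portion_metadata = {
    "p1": {"alice", "bob"},
    "p2": {"carol"},
    "p3": {"alice"},
}

# author == "alice" AND (topic == "search" OR year == "2015")
query = Node("AND", [Term("author", "alice"),
                     Node("OR", [Term("topic", "search"),
                                 Term("year", "2015")])])

reduced = reduce_query(query, "author")
candidates = [p for p, vals in portion_metadata.items() if matches(reduced, vals)]
print(candidates)  # → ['p1', 'p3']: only these portions need the full query
```

Because the OR branch mixes indexed and non-indexed fields, the whole branch reduces to neutral and only the `author` term survives, so the full query is run against the two portions that record "alice" rather than all three.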
Record 6,678 (Application No. 15/296,249, Art Unit 2152)

Abstract: A media programming log is evaluated, and a determination is made that the media programming log includes an ad song scheduled for broadcast within a proximity threshold of a commercial set. The ad song is associated with an advertiser. An evaluation of spot inventory associated with the commercial set is performed, and a determination is made that at least one spot in the commercial set can be used for insertion of an ad-song advertisement. The ad-song advertisement contains at least a portion of a version of the ad song. The ad-song advertisement can then be inserted into at least one spot of a broadcast log, which can be delivered to an audio server for broadcast.

Claims:

1. A method for use in an automated media scheduling system, the method comprising:
performing a first evaluation of a media programming log, by executing a program instruction in a data processing apparatus; determining, based on the first evaluation, that an ad song associated with an advertiser is scheduled for broadcast within a proximity threshold of a commercial set, by executing a program instruction in a data processing apparatus; performing a second evaluation of a spot inventory associated with the commercial set, by executing a program instruction in a data processing apparatus; determining, based at least in part on the second evaluation, that at least one spot in the commercial set can be used for insertion of an ad-song advertisement containing at least a portion of a version of the ad song in response to evaluating the spot inventory, wherein the ad-song advertisement is associated with the advertiser, by executing a program instruction in a data processing apparatus; inserting the ad-song advertisement in the at least one spot of a broadcast log; and delivering the broadcast log to an audio server. 2. The method of claim 1, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that a previously scheduled advertisement associated with the advertiser is available to be replaced by the ad-song advertisement, by executing a program instruction in a data processing apparatus; and wherein inserting the ad-song advertisement includes removing the previously scheduled advertisement from the at least one spot, by executing a program instruction in a data processing apparatus. 3. The method of claim 1, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that an unscheduled spot is available for placement of the ad-song advertisement, by executing a program instruction in a data processing apparatus. 4. The method of claim 1, further comprising:
transmitting a notification to a traffic system, the notification including information indicating that the ad-song advertisement has been inserted, by executing a program instruction in a data processing apparatus. 5. The method of claim 1, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that a previously scheduled advertisement associated with a different advertiser can be replaced by the ad-song advertisement, by executing a program instruction in a data processing apparatus; substituting the ad-song advertisement for the previously scheduled advertisement, by executing a program instruction in a data processing apparatus; and attempting to reschedule the previously scheduled advertisement, by executing a program instruction in a data processing apparatus. 6. The method of claim 1, further comprising:
storing an ad-song tag in metadata associated with the ad song, by executing a program instruction in a data processing apparatus; and wherein the first evaluation of the media programming log is based, at least in part, on evaluating the ad-song tag, by executing a program instruction in a data processing apparatus. 7. The method of claim 1, further comprising:
storing an ad-song advertisement tag in metadata associated with the ad-song advertisement, by executing a program instruction in a data processing apparatus; and wherein the second evaluation of the media programming log is based, at least in part, on evaluating the ad-song advertisement tag. 8. The method of claim 1, further comprising:
performing the operations of claim 1 for both a terrestrial broadcast and an on-line broadcast. 9. The method of claim 1, further comprising:
performing the operations of claim 1 after content ingest and traffic load processes. 10. An automated media scheduling system comprising:
a processing device including a processor and associated memory, the processing device configured to implement a database server; at least one processing device including a processor and associated memory configured to implement a traffic system, the traffic system coupled to the database server, and configured to provide the database server with information about advertiser orders; the database server configured to:
perform a first evaluation of a media programming log based at least in part on the information about advertiser orders;
determine, based on the first evaluation, that an ad song associated with an advertiser is scheduled for broadcast within a proximity threshold of a commercial set;
perform a second evaluation of a spot inventory associated with the commercial set;
determine, based at least in part on the second evaluation, that at least one spot in the commercial set can be used for insertion of an ad-song advertisement containing at least a portion of a version of the ad song in response to evaluating the spot inventory, wherein the ad-song advertisement is associated with the advertiser, by executing a program instruction in a data processing apparatus;
insert the ad-song advertisement in the at least one spot of a broadcast log; and
deliver the broadcast log to an audio server. 11. The automated media scheduling system of claim 10, wherein the database server is further configured to:
determine, based on the second evaluation, that a previously scheduled advertisement associated with the advertiser is available to be replaced by the ad-song advertisement; and remove the previously scheduled advertisement from the at least one spot. 12. The automated media scheduling system of claim 10, wherein the database server is further configured to:
determine, based on the second evaluation, that an unscheduled spot is available for placement of the ad-song advertisement. 13. The automated media scheduling system of claim 10, wherein the database server is further configured to:
transmit a notification to the traffic system, the notification including information indicating that the ad-song advertisement has been inserted. 14. The automated media scheduling system of claim 10, wherein the database server is further configured to:
determine, based on the second evaluation, that a previously scheduled advertisement associated with a different advertiser can be replaced by the ad-song advertisement; substitute the ad-song advertisement for the previously scheduled advertisement; and attempt to reschedule the previously scheduled advertisement. 15. The automated media scheduling system of claim 10, wherein the database server is further configured to:
store an ad-song tag in metadata associated with the ad song; perform the first evaluation of the media programming log based, at least in part, on evaluating the ad-song tag; store an ad-song advertisement tag in metadata associated with the ad-song advertisement; and perform the second evaluation of the media programming log based, at least in part, on evaluating the ad-song advertisement tag. 16. A method for use in an automated media scheduling system, the method comprising:
performing a first evaluation of a media programming log to identify an ad song associated with an advertiser, the ad song scheduled for broadcast within a proximity threshold of a commercial set, by executing a program instruction in a data processing apparatus, wherein the first evaluation is performed, at least in part, based on an ad-song tag associated with a scheduled song listed in the media programming log; performing a second evaluation of a spot inventory associated with the commercial set, by executing a program instruction in a data processing apparatus; determining, based at least in part on the second evaluation, that at least one spot in the commercial set can be used for insertion of an ad-song advertisement containing at least a portion of a version of the ad song in response to evaluating the spot inventory, wherein the ad-song advertisement is associated with the advertiser, by executing a program instruction in a data processing apparatus; inserting the ad-song advertisement in the at least one spot of a broadcast log; and delivering the broadcast log to an audio server. 17. The method of claim 16, further comprising:
performing the operations of claim 16 after content ingest and traffic load processes. 18. The method of claim 16, further comprising:
transmitting a notification to a traffic system, the notification including information indicating that the ad-song advertisement has been inserted, by executing a program instruction in a data processing apparatus. 19. The method of claim 16, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that a previously scheduled advertisement associated with the advertiser is available to be replaced by the ad-song advertisement, by executing a program instruction in a data processing apparatus; and wherein inserting the ad-song advertisement includes removing the previously scheduled advertisement from the at least one spot, by executing a program instruction in a data processing apparatus. 20. The method of claim 16, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that a previously scheduled advertisement associated with a different advertiser can be replaced by the ad-song advertisement, by executing a program instruction in a data processing apparatus; substituting the ad-song advertisement for the previously scheduled advertisement, by executing a program instruction in a data processing apparatus; and attempting to reschedule the previously scheduled advertisement, by executing a program instruction in a data processing apparatus. | A media programming log is evaluated, and a determination is made that the media programming log includes an ad song scheduled for broadcast within a proximity threshold of a commercial set. The ad song is associated with an advertiser. An evaluation of spot inventory associated with the commercial set is performed, and a determination is made that at least one spot in the commercial set can be used for insertion of an ad-song advertisement. The ad-song advertisement contains at least a portion of a version of the ad song. The ad-song advertisement can then be inserted into at least one spot of a broadcast log, which can be delivered to an audio server for broadcast.1. A method for use in an automated media scheduling system, the method comprising:
performing a first evaluation of a media programming log, by executing a program instruction in a data processing apparatus; determining, based on the first evaluation, that an ad song associated with an advertiser is scheduled for broadcast within a proximity threshold of a commercial set, by executing a program instruction in a data processing apparatus; performing a second evaluation of a spot inventory associated with the commercial set, by executing a program instruction in a data processing apparatus; determining, based at least in part on the second evaluation, that at least one spot in the commercial set can be used for insertion of an ad-song advertisement containing at least a portion of a version of the ad song in response to evaluating the spot inventory, wherein the ad-song advertisement is associated with the advertiser, by executing a program instruction in a data processing apparatus; inserting the ad-song advertisement in the at least one spot of a broadcast log; and delivering the broadcast log to an audio server. 2. The method of claim 1, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that a previously scheduled advertisement associated with the advertiser is available to be replaced by the ad-song advertisement, by executing a program instruction in a data processing apparatus; and wherein inserting the ad-song advertisement includes removing the previously scheduled advertisement from the at least one spot, by executing a program instruction in a data processing apparatus. 3. The method of claim 1, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that an unscheduled spot is available for placement of the ad-song advertisement, by executing a program instruction in a data processing apparatus. 4. The method of claim 1, further comprising:
transmitting a notification to a traffic system, the notification including information indicating that the ad-song advertisement has been inserted, by executing a program instruction in a data processing apparatus. 5. The method of claim 1, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that a previously scheduled advertisement associated with a different advertiser can be replaced by the ad-song advertisement, by executing a program instruction in a data processing apparatus; substituting the ad-song advertisement for the previously scheduled advertisement, by executing a program instruction in a data processing apparatus; and attempting to reschedule the previously scheduled advertisement, by executing a program instruction in a data processing apparatus. 6. The method of claim 1, further comprising:
storing an ad-song tag in metadata associated with the ad song, by executing a program instruction in a data processing apparatus; and wherein the first evaluation of the media programming log is based, at least in part, on evaluating the ad-song tag, by executing a program instruction in a data processing apparatus. 7. The method of claim 1, further comprising:
storing an ad-song advertisement tag in metadata associated with the ad-song advertisement, by executing a program instruction in a data processing apparatus; and wherein the second evaluation of the media programming log is based, at least in part, on evaluating the ad-song advertisement tag. 8. The method of claim 1, further comprising:
performing the operations of claim 1 for both a terrestrial broadcast and an on-line broadcast. 9. The method of claim 1, further comprising:
performing the operations of claim 1 after content ingest and traffic load processes. 10. An automated media scheduling system comprising:
a processing device including a processor and associated memory, the processing device configured to implement a database server; at least one processing device including a processor and associated memory configured to implement a traffic system, the traffic system coupled to the database server, and configured to provide to the database server with information about advertiser orders; the database server configured to:
perform a first evaluation of a media programming log based at least in part on the information about advertiser orders;
determine, based on the first evaluation, that an ad song associated with an advertiser is scheduled for broadcast within a proximity threshold of a commercial set;
perform a second evaluation of a spot inventory associated with the commercial set;
determine, based at least in part on the second evaluation, that at least one spot in the commercial set can be used for insertion of an ad-song advertisement containing at least a portion of a version of the ad song in response to evaluating the spot inventory, wherein the ad-song advertisement is associated with the advertiser, by executing a program instruction in a data processing apparatus;
insert the ad-song advertisement in the at least one spot of a broadcast log; and
deliver the broadcast log to an audio server. 11. The automated media scheduling system of claim 10, wherein the database server is further configured to:
determine, based on the second evaluation, that a previously scheduled advertisement associated with the advertiser is available to be replaced by the ad-song advertisement; and remove the previously scheduled advertisement from the at least one spot. 12. The automated media scheduling system of claim 10, wherein the database server is further configured to:
determine, based on the second evaluation, that an unscheduled spot is available for placement of the ad-song advertisement. 13. The automated media scheduling system of claim 10, wherein the database server is further configured to:
transmit a notification to the traffic system, the notification including information indicating that the ad-song advertisement has been inserted. 14. The automated media scheduling system of claim 10, wherein the database server is further configured to:
determine, based on the second evaluation, that a previously scheduled advertisement associated with a different advertiser can be replaced by the ad-song advertisement; substitute the ad-song advertisement for the previously scheduled advertisement; and attempt to reschedule the previously scheduled advertisement. 15. The automated media scheduling system of claim 10, wherein the database server is further configured to:
store an ad-song tag in metadata associated with the ad song; perform the first evaluation of the media programming log based, at least in part, on evaluating the ad-song tag; store an ad-song advertisement tag in metadata associated with the ad-song advertisement; and perform the second evaluation of the media programming log based, at least in part, on evaluating the ad-song advertisement tag. 16. A method for use in an automated media scheduling system, the method comprising:
performing a first evaluation of a media programming log to identify an ad song associated with an advertiser, the ad song scheduled for broadcast within a proximity threshold of a commercial set, by executing a program instruction in a data processing apparatus, wherein the first evaluation is performed, at least in part, based on an ad-song tag associated with a scheduled song listed in the media programming log; performing a second evaluation of a spot inventory associated with the commercial set, by executing a program instruction in a data processing apparatus; determining, based at least in part on the second evaluation, that at least one spot in the commercial set can be used for insertion of an ad-song advertisement containing at least a portion of a version of the ad song in response to evaluating the spot inventory, wherein the ad-song advertisement is associated with the advertiser, by executing a program instruction in a data processing apparatus; inserting the ad-song advertisement in the at least one spot of a broadcast log; and delivering the broadcast log to an audio server. 17. The method of claim 16, further comprising:
performing the operations of claim 16 after content ingest and traffic load processes. 18. The method of claim 16, further comprising:
transmitting a notification to a traffic system, the notification including information indicating that the ad-song advertisement has been inserted, by executing a program instruction in a data processing apparatus. 19. The method of claim 16, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that a previously scheduled advertisement associated with the advertiser is available to be replaced by the ad-song advertisement, by executing a program instruction in a data processing apparatus; and wherein inserting the ad-song advertisement includes removing the previously scheduled advertisement from the at least one spot, by executing a program instruction in a data processing apparatus. 20. The method of claim 16, wherein determining that at least one spot in the commercial set is available to be used for insertion of the ad-song advertisement includes:
determining, based on the second evaluation, that a previously scheduled advertisement associated with a different advertiser can be replaced by the ad-song advertisement, by executing a program instruction in a data processing apparatus; substituting the ad-song advertisement for the previously scheduled advertisement, by executing a program instruction in a data processing apparatus; and attempting to reschedule the previously scheduled advertisement, by executing a program instruction in a data processing apparatus. | 2,100 |
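The two-stage evaluation recited in the claims above (a first pass that finds an ad song scheduled within a proximity threshold of a commercial set, and a second pass over that set's spot inventory that prefers replacing a same-advertiser spot, per claim 11, and falls back to an unscheduled spot, per claim 12) can be sketched in Python. This is an illustrative sketch only: the names `Spot`, `CommercialSet`, and `find_insertion_spot`, and the threshold value, are assumptions rather than terms from the claims.

```python
from dataclasses import dataclass, field
from typing import List, Optional

PROXIMITY_THRESHOLD = 2  # assumed: max log positions between the ad song and a commercial set


@dataclass
class Spot:
    advertiser: Optional[str] = None  # None models an unscheduled spot


@dataclass
class CommercialSet:
    position: int  # position of the commercial set in the programming log
    spots: List[Spot] = field(default_factory=list)


def find_insertion_spot(sets, ad_song_position, advertiser):
    """First evaluation: find a commercial set within the proximity threshold
    of the scheduled ad song. Second evaluation: scan that set's spot
    inventory for a usable spot."""
    for cset in sets:
        if abs(cset.position - ad_song_position) > PROXIMITY_THRESHOLD:
            continue
        # Prefer replacing a spot already held by the same advertiser (claim 11).
        for spot in cset.spots:
            if spot.advertiser == advertiser:
                return cset, spot
        # Otherwise fall back to an unscheduled spot (claim 12).
        for spot in cset.spots:
            if spot.advertiser is None:
                return cset, spot
    return None, None
```

A traffic notification (claim 13) or an attempt to reschedule a displaced third-party advertisement (claim 14) would then be driven by which branch returned the spot.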
6,679 | 6,679 | 15,847,546 | 2,152 | A playlist can be generated based on a chart or list including ranked media items, e.g., songs, videos, etc., by automatically adding the highest ranked media items to the playlist, but adding only some of the lower ranked media items to the playlist. A particular lower-ranked media item can be pseudo-randomly excluded from the playlist if that media item has a ranking in a current version of the chart that is lower than its ranking in a previous version. Once the desired number of media items has been added to an intermediate list, the intermediate list can be inverted, and station identifiers can be interspersed between the media items. | 1. A method for use in a media server, the method comprising:
obtaining, from a network, a first media list of media items, the media items ranked relative to other media items included in the first media list based on a level of popularity; building an intermediate list of media items from selected ranked media items from within the first media list, the intermediate list to contain a target number of media items, the target number less than a total number of items in the first media list, the building including:
adding to the intermediate list a predetermined number of top-ranked media items from the first media list, the predetermined number less than the target number;
selectively adding lower-ranked media items from the first media list to the intermediate list until a combined number of top-ranked and lower-ranked media items added to the intermediate list reaches the target number, each of the lower-ranked media items having a relative ranking level inferior to the relative ranking level of all of the predetermined number of top-ranked media items, wherein the selectively adding includes at least considering an exclusion of lower-ranked media items based on skewing a pseudo-random selection criteria, wherein the skewing is configured to make it more or less likely that any one particular media item will be excluded from or added to the intermediate list;
in response to the combined number of top-ranked and lower-ranked media items reaching the target number: adding identifiers associated with the top-ranked and lower-ranked media items; generating a first playlist based on the intermediate list; creating a first playlist broadcast chain configured to be distributed through a communication system; and distributing, through the communication system, the first playlist broadcast chain to one or more media users. 2. The method of claim 1, wherein generating the first playlist further comprises:
interspersing station identifiers between top-ranked media items and lower-ranked media items included in the first playlist, wherein each of the station identifiers includes an announcement media item indicating a relative ranking of a particular media item in the first playlist. 3. The method of claim 2, wherein generating the first playlist further comprises:
inverting the first playlist to generate an inverted list, wherein each of the station identifiers includes an announcement media item indicating a relative ranking of a particular media item in the inverted list. 4. The method of claim 1, wherein the skewing a pseudo-random selection criteria includes skewing one or more of: a current ranking of the media item, an historical performance of the media item over a period of time, or a number of playlists the media item has been included in over time. 5. The method of claim 4, wherein the skewing a pseudo-random selection criteria is determined by:
determining whether a current ranking of at least one of the lower-ranked media items is different from a previous ranking of the at least one of the lower-ranked media items; and skewing a pseudo-random exclusion of the at least one of the lower-ranked media items in response to determining that the current ranking of the at least one of the lower-ranked media items has declined. 6. The method of claim 1, wherein the pseudo-random selection criteria includes declining popularity and the skewing a pseudo-random selection criteria is determined by:
determining whether a current ranking of at least one lower-ranked media item is different from a previous ranking of the at least one lower-ranked media item; determining whether the at least one lower-ranked media item has been included in a previous playlist within a set time period; excluding the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined and that the at least one lower-ranked media item has been included in a previous playlist within a set time period; and skewing a pseudo-random exclusion of the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined and that the at least one media item has not been included in a previous playlist within a set time period. 7. The method of claim 1, further comprising:
calculating a first difference between a playout length of the first playlist and a target playout length; generating a second playlist, the second playlist comprising: top-ranked media items selected from a second media list, the top-ranked media items having a relative popularity level satisfying a threshold popularity requirement; and lower-ranked media items pseudo-randomly added from a second version of the media list; calculating a second difference between a playout length of the second playlist and the target playout length; adding the first difference to the second difference to generate a combined playout length difference; and determining, based on a comparison of the combined playout length difference to the target playout length, if a third list is to be generated. 8. The method of claim 1, wherein the considering an exclusion of lower-ranked media items is based, in part, to prevent exhaustion of media items available to be selected from the first media list of media items before reaching the target number. 9. The method of claim 1, wherein the first playlist broadcast chain is configured to be distributed over one or more of: a broadcast system or a streaming system. 10. A media automation system comprising:
a processor; memory operably coupled to the processor; the processor and the memory configured to: obtain a first version of a first media list, the first media list indicating a relative popularity level of each one of media items in the first media list with respect to each other one of the media items therein, the relative popularity level of each one of the media items indicated by a respective popularity level indicating parameter; add a predetermined number of first media items from the first version of the media list to an intermediate list, each of the first media items having a relative popularity level satisfying a threshold popularity requirement based at least partially upon the respective popularity level indicating parameter thereof, wherein the predetermined number is less than a target number of media items comprising the intermediate list; selectively add second media items from the first version of the media list to the intermediate list until a combined number of first and second media items added to the intermediate list reaches the target number, each of the second media items having a relative popularity level inferior to the relative popularity level of the first media items based at least partially upon the respective popularity level indicating parameter thereof, wherein the selectively adding includes at least considering an exclusion of the second media items based, in part, on skewing a pseudo-random selection criteria, wherein the skewing is configured to make it more or less likely that any one particular song will be excluded from or added to the intermediate list; add identifiers associated with the first and second media items; and generate a first playlist based on the intermediate list; and create, on a server, a first playlist broadcast chain configured to be distributed through a communication system; and distribute, through the communication system, the first playlist broadcast chain to one or more media users. 11. 
The system of claim 10, wherein generating the first playlist further comprises:
interspersing station identifiers between first media items and second media items included in the first playlist, wherein each of the station identifiers includes an announcement media item indicating a relative ranking of a particular media item in the first playlist. 12. The system of claim 10, wherein the skewing a pseudo-random selection criteria includes one or more of: a current ranking of the media item, an historical performance of the media item over a period of time, or a number of playlists the media item has been included in over time. 13. The system of claim 10, wherein the skewing a pseudo-random selection criteria is determined by:
determining whether a current ranking of at least one lower-ranked media item is different from a previous ranking of at least one lower-ranked media item; and skewing a pseudo-random exclusion of the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined. 14. The system of claim 10, wherein the selection criteria includes declining popularity and the skewing a pseudo-random selection criteria is determined by:
determining whether a current ranking of at least one lower-ranked media item is different from a previous ranking of the at least one lower-ranked media item; determining whether the at least one lower-ranked media item has been included in a previous playlist within a set time period; excluding the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined and that the at least one lower-ranked media item has been included in a previous playlist within a set time period; and skewing a pseudo-random exclusion of the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined and that the at least one media item has not been included in a previous playlist within a set time period. 15. The system of claim 10, wherein the considering an exclusion of the second media items comprises:
determining whether a current ranking of at least one of the media items is different from a previous ranking of the at least one of the media items; determining whether the at least one of the media items has been included in a previous playlist within a set time period; excluding the at least one of the media items in response to determining that the current ranking of the at least one of the media items has declined and that the at least one of the media items has been included in a previous playlist within a set time period; and skewing a pseudo-random exclusion of the at least one of the media items in response to determining that the current ranking of the at least one of the media items has declined and that the at least one of the media items has not been included in a previous playlist within a set time period. 16. The system of claim 10, wherein the processor and the memory are further configured to:
calculate a first difference between a playout length of the first playlist and a target playout length; generate a second playlist comprising: third media items selected from a second version of the media list, the third media items having a relative popularity level satisfying a threshold popularity requirement; and fourth media items pseudo-randomly added from the second version of the media list; calculate a second difference between a playout length of the second playlist and the target playout length; add the first difference to the second difference to generate a combined playout length difference; and determine, based on a comparison of the combined playout length difference to the target playout length, if a third list is to be generated. 17. The system of claim 10, wherein the considering an exclusion of the second media items is based, in part, to prevent exhaustion of media items available to be selected from the first media list before reaching the target number. 18. A method for use in a server, the method comprising:
obtaining a first media list of media items, the media items ranked relative to other media items included in the first media list based on a level of popularity; building an intermediate list of media items from selected ranked media items from within the first media list, the intermediate list to contain a target number of media items, the target number less than a total number of the first media list of media items, the building including:
adding to the intermediate list a predetermined number of top-ranked media items from the first media list, the predetermined number less than the target number;
selectively adding lower-ranked media items from the first media list to the intermediate list until a combined number of top-ranked and lower-ranked media items added to the intermediate list reaches the target number, each of the lower-ranked media items having a relative popularity ranking level inferior to the relative popularity ranking level of all of the predetermined number of top-ranked media items, wherein the selectively adding includes at least considering an exclusion of lower-ranked media items based on skewed pseudo-random selection criteria;
in response to the combined number of top-ranked and lower-ranked media items reaching the target number: adding identifiers associated with the top-ranked and lower-ranked media items; and generating a first playlist based on the intermediate list. 19. The method of claim 18, wherein the skewed pseudo-random selection criteria include any of: current ranking of the media item, historical popularity of the media item, or on a number of playlists the media item has been included in. 20. The method of claim 18, wherein the skewed pseudo-random selection criteria are selected based, in part, to prevent exhaustion of media items available to be selected from the first media list of media items before reaching the target number. | A playlist can be generated based on a chart or list including ranked media items, e.g. songs, videos, etc., by automatically including the highest ranked media items to the playlist, but only adding some of the lower ranked media items to the playlist. A particular lower-ranked media item can be pseudo-randomly excluded from the playlist if that media item has a ranking in a current version of the chart that is lower than its ranking in a previous version. Once the desired number of media items has been added to an intermediate list, the intermediate list can be inverted, and station identifiers can be interspersed between the media items.1. A method for use in a media server, the method comprising:
obtaining, from a network, a first media list of media items, the media items ranked relative to other media items included in the first media list based on a level of popularity; building an intermediate list of media items from selected ranked media items from within the first media list, the intermediate list to contain a target number of media items, the target number less than a total number of items in the first media list, the building including:
adding to the intermediate list a predetermined number of top-ranked media items from the first media list, the predetermined number less than the target number;
selectively adding lower-ranked media items from the first media list to the intermediate list until a combined number of top-ranked and lower-ranked media items added to the intermediate list reaches the target number, each of the lower-ranked media items having a relative ranking level inferior to the relative ranking level of all of the predetermined number of top-ranked media items, wherein the selectively adding includes at least considering an exclusion of lower-ranked media items based on skewing a pseudo-random selection criteria, wherein the skewing is configured to make it more or less likely that any one particular media item will be excluded from or added to the intermediate list;
in response to the combined number of top-ranked and lower-ranked media items reaching the target number: adding identifiers associated with the top-ranked and lower-ranked media items; generating a first playlist based on the intermediate list; creating a first playlist broadcast chain configured to be distributed through a communication system; and distributing, through the communication system, the first playlist broadcast chain to one or more media users. 2. The method of claim 1, wherein generating the first playlist further comprises:
interspersing station identifiers between top-ranked media items and lower-ranked media items included in the first playlist, wherein each of the station identifiers includes an announcement media item indicating a relative ranking of a particular media item in the first playlist. 3. The method of claim 2, wherein generating the first playlist further comprises:
inverting the first playlist to generate an inverted list, wherein each of the station identifiers includes an announcement media item indicating a relative ranking of a particular media item in the inverted list. 4. The method of claim 1, wherein the skewing a pseudo-random selection criteria includes skewing one or more of: a current ranking of the media item, an historical performance of the media item over a period of time, or a number of playlists the media item has been included in over time. 5. The method of claim 4, wherein the skewing a pseudo-random selection criteria is determined by:
determining whether a current ranking of at least one of the lower-ranked media items is different from a previous ranking of the at least one of the lower-ranked media items; and skewing a pseudo-randomly exclusion of the at least one of the lower-ranked media items in response to determining that the current ranking of the at least one of the lower-ranked media items has declined. 6. The method of claim 1, wherein the pseudo-random selection criteria includes declining popularity and the skewing a pseudo-random selection criteria is determined by:
determining whether a current ranking of at least one lower-ranked media item is different from a previous ranking of the at least one lower-ranked media item; determining whether the at least one lower-ranked media item has been included in a previous playlist within a set time period; excluding the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined and that the at least one lower-ranked media item has been included in a previous playlist within a set time period; and skewing a pseudo-randomly exclusion of the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined and that the at least one media item has not been included in a previous playlist within a set time period. 7. The method of claim 1, further comprising:
calculating a first difference between a playout length of the first playlist and a target playout length; generating a second playlist, the second playlist comprising: top-ranked media items selected from a second media list, the top-ranked media items having a relative popularity level satisfying a threshold popularity requirement; and lower-ranked media items pseudo-randomly added from a second version of the media list; calculating a second difference between a playout length of the second playlist and the target playout length; adding the first difference to the second difference to generate a combined playout length difference; and determining, based on a comparison of the combined playout length difference to the target playout length, if a third list is to be generated. 8. The method of claim 1, wherein the considering an exclusion of lower-ranked media items is based, in part, to prevent exhaustion of media items available to be selected from the first media list of media items before reaching the target number. 9. The method of claim 1, wherein the first playlist broadcast chain is configured to be distributed over one or more of: a broadcast system or a streaming system. 10. A media automation system comprising:
a processor; memory operably coupled to the processor; the processor and the memory configured to: obtain a first version of a first media list, the first media list indicating a relative popularity level of each one of media items in the first media list with respect to each other one of the media items therein, the relative popularity level of each one of the media items indicated by a respective popularity level indicating parameter; add a predetermined number of first media items from the first version of the media list to an intermediate list, each of the first media items having a relative popularity level satisfying a threshold popularity requirement based at least partially upon the respective popularity level indicating parameter thereof, wherein the predetermined number is less than a target number of media items comprising the intermediate list; selectively add second media items from the first version of the media list to the intermediate list until a combined number of first and second media items added to the intermediate list reaches the target number, each of the second media items having a relative popularity level inferior to the relative popularity level of the first media items based at least partially upon the respective popularity level indicating parameter thereof, wherein the selectively adding includes at least considering an exclusion of the second media items based, in part, on skewing a pseudo-random selection criteria, wherein the skewing is configured to make it more or less likely that any one particular song will be excluded from or added to the intermediate list; add identifiers associated with the first and second media items; and generate a first playlist based on the intermediate list; and create, on a server, a first playlist broadcast chain configured to be distributed through a communication system; and distribute, through the communication system, the first playlist broadcast chain to one or more media users. 11. 
The system of claim 10, wherein the generate the first playlist further comprises:
interspersing station identifiers between first media items and second media items included in the first playlist, wherein each of the station identifiers includes an announcement media item indicating a relative ranking of a particular media item in the first playlist. 12. The system of claim 10, wherein the skewing a pseudo-random selection criteria includes one or more of: a current ranking of the media item, an historical performance of the media item over a period of time, or a number of playlists the media item has been included in over time. 13. The system of claim 10, wherein the skewing a pseudo-random selection criteria is determined by:
determining whether a current ranking of at least one lower-ranked media item is different from a previous ranking of at least one lower-ranked media item; and skewing a pseudo-randomly exclusion of the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined. 14. The system of claim 10, wherein the selection criteria includes declining popularity and the skewing a pseudo-random selection criteria is determined by:
determining whether a current ranking of at least one lower-ranked media item is different from a previous ranking of the at least one lower-ranked media item; determining whether the at least one lower-ranked media item has been included in a previous playlist within a set time period; excluding the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined and that the at least one lower-ranked media item has been included in a previous playlist within a set time period; and skewing a pseudo-randomly exclusion of the at least one lower-ranked media item in response to determining that the current ranking of the at least one lower-ranked media item has declined and that the at least one media item has not been included in a previous playlist within a set time period. 15. The system of claim 10, wherein the considering an exclusion of the second media items comprises:
determining whether a current ranking of at least one of the media items is different from a previous ranking of the at least one of the media items; determining whether the at least one of the media items has been included in a previous playlist within a set time period; excluding the at least one of the media items in response to determining that the current ranking of the at least one of the media items has declined and that the at least one of the media items has been included in a previous playlist within a set time period; and skewing a pseudo-random exclusion the at least one of the media items in response to determining that the current ranking of the at least one of the media items has declined and that the at least one of the media items has not been included in a previous playlist within a set time period. 16. The system of claim 10, wherein the processor and the memory are further configured to:
calculate a first difference between a playout length of the first playlist and a target playout length; generate a second playlist comprising: third media items selected from a second version of the media list, the third media items having a relative popularity level satisfying a threshold popularity requirement; and fourth media items pseudo-randomly added from the second version of the media list; calculate a second difference between a playout length of the second playlist and the target playout length; add the first difference to the second difference to generate a combined playout length difference; and determine, based on a comparison of the combined playout length difference to the target playout length, if a third list is to be generated. 17 The system of claim 10, wherein the considering an exclusion of the second media items is based, in part, to prevent exhaustion of media items available to be selected from the first media list before reaching the target number. 18. A method for use in a server, the method comprising:
obtaining a first media list of media items, the media items ranked relative to other media items included in the first media list based on a level of popularity; building an intermediate list of media items from selected ranked media items from within the first media list, the intermediate list to contain a target number of media items, the target number less than a total number of the first media list of media items, the building including:
adding to the intermediate list a predetermined number of top-ranked media items from the first media list, the predetermined number less than the target number;
selectively adding lower-ranked media items from the first media list to the intermediate list until a combined number of top-ranked and lower-ranked media items added to the intermediate list reaches the target number, each of the lower-ranked media items having a relative popularity ranking level inferior to the relative popularity ranking level of all of the predetermined number of top-ranked media items, wherein the selectively adding includes at least considering an exclusion of lower-ranked media items based on skewed pseudo-random selection criteria;
in response to the combined number of top-ranked and lower-ranked media items reaching the target number: adding identifiers associated with the top-ranked and lower-ranked media items; and generating a first playlist based on the intermediate list. 19. The method of claim 18, wherein the skewed pseudo-random selection criteria include any of: current ranking of the media item, historical popularity of the media item, or a number of playlists the media item has been included in. 20. The method of claim 18, wherein the skewed pseudo-random selection criteria are selected based, in part, to prevent exhaustion of media items available to be selected from the first media list of media items before reaching the target number. | 2,100 |
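Claims 18-20 above describe building an intermediate list from a fixed block of top-ranked media items, then filling up to a target count with lower-ranked items that survive a skewed pseudo-random exclusion, while preventing exhaustion of the pool before the target is reached. A minimal Python sketch of that procedure; the function name and the particular skew formula are illustrative assumptions, not taken from the patent:

```python
import random

def build_intermediate_list(ranked_items, target, top_n, seed=None):
    """Sketch of the claim-18 building step: take `top_n` top-ranked
    items, then fill to `target` with lower-ranked items, skipping some
    via a skewed pseudo-random draw.

    `ranked_items` is assumed sorted best-first. The exclusion
    probability (the "skew") grows with how far down the chart an item
    sits, and is capped so the pool cannot be exhausted before the
    target number is reached (claim 20's constraint).
    """
    rng = random.Random(seed)
    intermediate = list(ranked_items[:top_n])  # predetermined top-ranked block
    for rank in range(top_n, len(ranked_items)):
        if len(intermediate) >= target:
            break
        remaining_needed = target - len(intermediate)
        remaining_pool = len(ranked_items) - rank  # items left to consider
        # Illustrative skew: deeper-ranked items are likelier to be cut,
        # but never so likely that the target becomes unreachable.
        exclude_p = min(rank / len(ranked_items),
                        1 - remaining_needed / remaining_pool)
        if rng.random() < exclude_p:
            continue  # pseudo-randomly excluded lower-ranked item
        intermediate.append(ranked_items[rank])
    return intermediate
```

Because the cap forces `exclude_p` to zero once the remaining pool equals the remaining need, the loop always returns exactly `target` items whenever the input list is at least that long.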
6,680 | 6,680 | 15,498,408 | 2,152 | A pool of media items to be considered for potential inclusion in a playlist. Media items in the pool can be explicitly ranked relative to one another. First media items, which include all media items that are both available and have a rank satisfying a first rank threshold, can be selected from the pool, along with a subset of any remaining media items in the pool. The subset of remaining media items can be selected by pseudo-randomly excluding particular remaining media items based on their performance history. A playlist is created using the first media items and the subset of remaining media items, and station identifiers can be inserted into the playlist. The station identifiers include tags indicating, to a playout server, a location of information to be inserted within the playlist prior to broadcast. The playlist is transmitted to a playout server for broadcast. | 1. A method for use in a server including a processor and associated memory, the method comprising:
obtaining, from a database, a pool of media items to be considered for potential inclusion in a playlist, wherein media items included in the pool of media items are explicitly ranked relative to other media items included in the pool of media items; selecting from the pool of media items, for inclusion in the playlist, first media items, the first media items including all media items that are both available and have a rank satisfying a first rank threshold; selecting from the pool of media items, for inclusion in the playlist, a subset of remaining media items included in the pool of media items but not selected as first media items, wherein selecting the subset of remaining media items includes pseudo-randomly excluding particular remaining media items based on a performance history associated with the particular remaining media items; creating the playlist using the first media items and the subset of remaining media items; inserting station identifiers into the playlist, wherein the station identifiers include tags indicating, to a playout server, a location of information to be inserted within the playlist prior to broadcast; and transmitting the playlist to the playout server for broadcast. 2. The method of claim 1, wherein pseudo-randomly excluding includes:
generating a pseudo-random number for a remaining media item being considered for inclusion in the playlist; and excluding the remaining media item from the playlist based on the pseudo-random number. 3. The method of claim 2, further comprising:
skewing the pseudo-random number, based on a parameter associated with the remaining media item, to alter a likelihood that the remaining media item will be excluded from the playlist. 4. The method of claim 3, wherein skewing the pseudo-random number includes:
skewing the pseudo-random number based on one or more of: a current ranking of a media item, a historical performance of the media item over a period of time, or a number of playlists in which the media item has been included over a period of time. 5. The method of claim 1, further comprising:
determining the performance history of the particular remaining media items based on a plurality of media ranking charts. 6. The method of claim 5, wherein determining the performance history includes:
determining whether a ranking of a particular media item indicated by a second media ranking chart has declined relative to a ranking of the particular media item indicated by a first media ranking chart; and pseudo-randomly excluding the particular media item in response to determining that the ranking of the particular media item has declined. 7. The method of claim 1, wherein:
the station identifiers are associated with an announcement media item indicating a ranking of a particular media item included in the playlist relative to other media items included in the playlist. 8. A system comprising:
a processor; memory operably coupled to the processor; a program of instructions to be stored in the memory and executed by the processor, the program of instructions including:
at least one instruction to obtain, from a database, a pool of media items to be considered for potential inclusion in a playlist, wherein media items included in the pool of media items are explicitly ranked relative to other media items included in the pool of media items;
at least one instruction to select from the pool of media items, for inclusion in the playlist, first media items, the first media items including all media items that are both available and have a rank satisfying a first rank threshold;
at least one instruction to select from the pool of media items, for inclusion in the playlist, a subset of remaining media items included in the pool of media items but not selected as first media items, wherein the at least one instruction to select the subset of remaining media items includes at least one instruction to pseudo-randomly exclude particular remaining media items based on a performance history associated with the particular remaining media items;
at least one instruction to create the playlist using the first media items and the subset of remaining media items;
at least one instruction to insert station identifiers into the playlist, wherein the station identifiers include tags indicating, to a playout server, a location of information to be inserted within the playlist prior to broadcast; and
at least one instruction to transmit the playlist to the playout server for broadcast. 9. The system of claim 8, wherein the at least one instruction to pseudo-randomly exclude includes:
at least one instruction to generate a pseudo-random number for a remaining media item being considered for inclusion in the playlist; and at least one instruction to exclude the remaining media item from the playlist based on the pseudo-random number. 10. The system of claim 9, the program of instructions further including:
at least one instruction to skew the pseudo-random number, based on a parameter associated with the remaining media item, to alter a likelihood that the remaining media item will be excluded from the playlist. 11. The system of claim 10, wherein the at least one instruction to skew the pseudo-random number includes:
at least one instruction to skew the pseudo-random number based on one or more of: a current ranking of a media item, a historical performance of the media item over a period of time, or a number of playlists in which the media item has been included over a period of time. 12. The system of claim 8, the program of instructions further comprising:
at least one instruction to determine the performance history of the particular remaining media items based on a plurality of media ranking charts. 13. The system of claim 12, wherein the at least one instruction to determine the performance history includes:
at least one instruction to determine whether a ranking of a particular media item indicated by a second media ranking chart has declined relative to a ranking of the particular media item indicated by a first media ranking chart; and at least one instruction to pseudo-randomly exclude the particular media item in response to determining that the ranking of the particular media item has declined. 14. The system of claim 8, wherein:
the station identifiers are associated with an announcement media item indicating a ranking of a particular media item included in the playlist relative to other media items included in the playlist. 15. A non-transitory computer readable medium tangibly embodying a program of instructions, the program of instructions configured to be stored in a memory and executed by a processor, the program of instructions including:
at least one instruction to obtain, from a database, a pool of media items to be considered for potential inclusion in a playlist, wherein media items included in the pool of media items are explicitly ranked relative to other media items included in the pool of media items; at least one instruction to select from the pool of media items, for inclusion in the playlist, first media items, the first media items including all media items that are both available and have a rank satisfying a first rank threshold; at least one instruction to select from the pool of media items, for inclusion in the playlist, a subset of remaining media items included in the pool of media items but not selected as first media items, wherein the at least one instruction to select the subset of remaining media items includes at least one instruction to pseudo-randomly exclude particular remaining media items based on a performance history associated with the particular remaining media items; at least one instruction to create the playlist using the first media items and the subset of remaining media items; at least one instruction to insert station identifiers into the playlist, wherein the station identifiers include tags indicating, to a playout server, a location of information to be inserted within the playlist prior to broadcast; and at least one instruction to transmit the playlist to the playout server for broadcast. 16. The non-transitory computer readable medium of claim 15, wherein the at least one instruction to pseudo-randomly exclude includes:
at least one instruction to generate a pseudo-random number for a remaining media item being considered for inclusion in the playlist; and at least one instruction to exclude the remaining media item from the playlist based on the pseudo-random number. 17. The non-transitory computer readable medium of claim 16, the program of instructions further including:
at least one instruction to skew the pseudo-random number, based on a parameter associated with the remaining media item, to alter a likelihood that the remaining media item will be excluded from the playlist. 18. The non-transitory computer readable medium of claim 17, wherein the at least one instruction to skew the pseudo-random number includes:
at least one instruction to skew the pseudo-random number based on one or more of: a current ranking of a media item, a historical performance of the media item over a period of time, or a number of playlists in which the media item has been included over a period of time. 19. The non-transitory computer readable medium of claim 15, the program of instructions further comprising:
at least one instruction to determine the performance history of the particular remaining media items based on a plurality of media ranking charts. 20. The non-transitory computer readable medium of claim 19, wherein the at least one instruction to determine the performance history includes:
at least one instruction to determine whether a ranking of a particular media item indicated by a second media ranking chart has declined relative to a ranking of the particular media item indicated by a first media ranking chart; and at least one instruction to pseudo-randomly exclude the particular media item in response to determining that the ranking of the particular media item has declined. | A pool of media items to be considered for potential inclusion in a playlist. Media items in the pool can be explicitly ranked relative to one another. First media items, which include all media items that are both available and have a rank satisfying a first rank threshold, can be selected from the pool, along with a subset of any remaining media items in the pool. The subset of remaining media items can be selected by pseudo-randomly excluding particular remaining media items based on their performance history. A playlist is created using the first media items and the subset of remaining media items, and station identifiers can be inserted into the playlist. The station identifiers include tags indicating, to a playout server, a location of information to be inserted within the playlist prior to broadcast. The playlist is transmitted to a playout server for broadcast. 1. A method for use in a server including a processor and associated memory, the method comprising:
obtaining, from a database, a pool of media items to be considered for potential inclusion in a playlist, wherein media items included in the pool of media items are explicitly ranked relative to other media items included in the pool of media items; selecting from the pool of media items, for inclusion in the playlist, first media items, the first media items including all media items that are both available and have a rank satisfying a first rank threshold; selecting from the pool of media items, for inclusion in the playlist, a subset of remaining media items included in the pool of media items but not selected as first media items, wherein selecting the subset of remaining media items includes pseudo-randomly excluding particular remaining media items based on a performance history associated with the particular remaining media items; creating the playlist using the first media items and the subset of remaining media items; inserting station identifiers into the playlist, wherein the station identifiers include tags indicating, to a playout server, a location of information to be inserted within the playlist prior to broadcast; and transmitting the playlist to the playout server for broadcast. 2. The method of claim 1, wherein pseudo-randomly excluding includes:
generating a pseudo-random number for a remaining media item being considered for inclusion in the playlist; and excluding the remaining media item from the playlist based on the pseudo-random number. 3. The method of claim 2, further comprising:
skewing the pseudo-random number, based on a parameter associated with the remaining media item, to alter a likelihood that the remaining media item will be excluded from the playlist. 4. The method of claim 3, wherein skewing the pseudo-random number includes:
skewing the pseudo-random number based on one or more of: a current ranking of a media item, a historical performance of the media item over a period of time, or a number of playlists in which the media item has been included over a period of time. 5. The method of claim 1, further comprising:
determining the performance history of the particular remaining media items based on a plurality of media ranking charts. 6. The method of claim 5, wherein determining the performance history includes:
determining whether a ranking of a particular media item indicated by a second media ranking chart has declined relative to a ranking of the particular media item indicated by a first media ranking chart; and pseudo-randomly excluding the particular media item in response to determining that the ranking of the particular media item has declined. 7. The method of claim 1, wherein:
the station identifiers are associated with an announcement media item indicating a ranking of a particular media item included in the playlist relative to other media items included in the playlist. 8. A system comprising:
a processor; memory operably coupled to the processor; a program of instructions to be stored in the memory and executed by the processor, the program of instructions including:
at least one instruction to obtain, from a database, a pool of media items to be considered for potential inclusion in a playlist, wherein media items included in the pool of media items are explicitly ranked relative to other media items included in the pool of media items;
at least one instruction to select from the pool of media items, for inclusion in the playlist, first media items, the first media items including all media items that are both available and have a rank satisfying a first rank threshold;
at least one instruction to select from the pool of media items, for inclusion in the playlist, a subset of remaining media items included in the pool of media items but not selected as first media items, wherein the at least one instruction to select the subset of remaining media items includes at least one instruction to pseudo-randomly exclude particular remaining media items based on a performance history associated with the particular remaining media items;
at least one instruction to create the playlist using the first media items and the subset of remaining media items;
at least one instruction to insert station identifiers into the playlist, wherein the station identifiers include tags indicating, to a playout server, a location of information to be inserted within the playlist prior to broadcast; and
at least one instruction to transmit the playlist to the playout server for broadcast. 9. The system of claim 8, wherein the at least one instruction to pseudo-randomly exclude includes:
at least one instruction to generate a pseudo-random number for a remaining media item being considered for inclusion in the playlist; and at least one instruction to exclude the remaining media item from the playlist based on the pseudo-random number. 10. The system of claim 9, the program of instructions further including:
at least one instruction to skew the pseudo-random number, based on a parameter associated with the remaining media item, to alter a likelihood that the remaining media item will be excluded from the playlist. 11. The system of claim 10, wherein the at least one instruction to skew the pseudo-random number includes:
at least one instruction to skew the pseudo-random number based on one or more of: a current ranking of a media item, a historical performance of the media item over a period of time, or a number of playlists in which the media item has been included over a period of time. 12. The system of claim 8, the program of instructions further comprising:
at least one instruction to determine the performance history of the particular remaining media items based on a plurality of media ranking charts. 13. The system of claim 12, wherein the at least one instruction to determine the performance history includes:
at least one instruction to determine whether a ranking of a particular media item indicated by a second media ranking chart has declined relative to a ranking of the particular media item indicated by a first media ranking chart; and at least one instruction to pseudo-randomly exclude the particular media item in response to determining that the ranking of the particular media item has declined. 14. The system of claim 8, wherein:
the station identifiers are associated with an announcement media item indicating a ranking of a particular media item included in the playlist relative to other media items included in the playlist. 15. A non-transitory computer readable medium tangibly embodying a program of instructions, the program of instructions configured to be stored in a memory and executed by a processor, the program of instructions including:
at least one instruction to obtain, from a database, a pool of media items to be considered for potential inclusion in a playlist, wherein media items included in the pool of media items are explicitly ranked relative to other media items included in the pool of media items; at least one instruction to select from the pool of media items, for inclusion in the playlist, first media items, the first media items including all media items that are both available and have a rank satisfying a first rank threshold; at least one instruction to select from the pool of media items, for inclusion in the playlist, a subset of remaining media items included in the pool of media items but not selected as first media items, wherein the at least one instruction to select the subset of remaining media items includes at least one instruction to pseudo-randomly exclude particular remaining media items based on a performance history associated with the particular remaining media items; at least one instruction to create the playlist using the first media items and the subset of remaining media items; at least one instruction to insert station identifiers into the playlist, wherein the station identifiers include tags indicating, to a playout server, a location of information to be inserted within the playlist prior to broadcast; and at least one instruction to transmit the playlist to the playout server for broadcast. 16. The non-transitory computer readable medium of claim 15, wherein the at least one instruction to pseudo-randomly exclude includes:
at least one instruction to generate a pseudo-random number for a remaining media item being considered for inclusion in the playlist; and at least one instruction to exclude the remaining media item from the playlist based on the pseudo-random number. 17. The non-transitory computer readable medium of claim 16, the program of instructions further including:
at least one instruction to skew the pseudo-random number, based on a parameter associated with the remaining media item, to alter a likelihood that the remaining media item will be excluded from the playlist. 18. The non-transitory computer readable medium of claim 17, wherein the at least one instruction to skew the pseudo-random number includes:
at least one instruction to skew the pseudo-random number based on one or more of: a current ranking of a media item, a historical performance of the media item over a period of time, or a number of playlists in which the media item has been included over a period of time. 19. The non-transitory computer readable medium of claim 15, the program of instructions further comprising:
at least one instruction to determine the performance history of the particular remaining media items based on a plurality of media ranking charts. 20. The non-transitory computer readable medium of claim 19, wherein the at least one instruction to determine the performance history includes:
at least one instruction to determine whether a ranking of a particular media item indicated by a second media ranking chart has declined relative to a ranking of the particular media item indicated by a first media ranking chart; and at least one instruction to pseudo-randomly exclude the particular media item in response to determining that the ranking of the particular media item has declined. | 2,100 |
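Claims 5-6 above (and their system and computer-readable-medium counterparts, claims 12-13 and 19-20) describe determining an item's performance history from two media ranking charts and pseudo-randomly excluding items whose ranking declined between the first chart and the second. A minimal Python sketch under stated assumptions; the helper names, the list-index representation of chart rank, and the flat exclusion probability are all illustrative, not from the patent:

```python
import random

def declined(item, chart_old, chart_new):
    """True if `item`'s chart position worsened between two ranking
    charts, where a lower list index means a better rank. Treating an
    item absent from the new chart as declined is an assumption."""
    if item not in chart_new:
        return True
    if item not in chart_old:
        return False
    return chart_new.index(item) > chart_old.index(item)

def filter_by_chart_decline(items, chart_old, chart_new,
                            exclude_p=0.5, seed=None):
    """Sketch of claims 5-6: keep items whose chart position held or
    improved; pseudo-randomly drop, with probability `exclude_p`,
    items whose ranking declined from the first chart to the second."""
    rng = random.Random(seed)
    kept = []
    for item in items:
        if declined(item, chart_old, chart_new) and rng.random() < exclude_p:
            continue  # pseudo-randomly excluded declining item
        kept.append(item)
    return kept
```

Setting `exclude_p=1.0` removes every declining item deterministically, while `exclude_p=0.0` keeps the whole pool; intermediate values give the pseudo-random behavior the claims describe.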
6,681 | 6,681 | 15,420,873 | 2,117 | Systems and processes for flexible and/or low volume product manufacture, including cost effective ways to manufacture low volume system level devices. In one aspect, this disclosure enables the manufacture of a plurality of System in Package (SiP) devices. In one aspect, the devices include one or more of an optical and electrical identifier, corresponding to substrates and/or product designs. The identifiers can be used in the assembly of the devices. | 1. A method for manufacturing a plurality of System in Package, SiP, devices on a production line system, comprising:
assembling a first device of said plurality of SiP devices, wherein assembling said first device comprises:
arranging a first plurality of components on a first substrate according to a first design, wherein said first substrate has a first optical identifier on a surface of said first substrate, and
creating a first electrical identifier related to said first design; and
assembling a second device of said plurality of SiP devices, wherein assembling said second device comprises:
arranging a second plurality of components on a second substrate according to a second, different design, wherein said second substrate has a second optical identifier on a surface of said second substrate, and
creating a second electrical identifier related to said second design. 2. The method of claim 1, wherein said first and second substrates have an identical layout and said first and second optical identifiers are the same. 3. The method of claim 1, wherein said first and second substrates have a different layout from each other and said first and second optical identifiers are different from each other. 4. The method of claim 1, wherein said first and second substrates are part of a common panel. 5. The method of claim 1, wherein creating said first electrical identifier comprises placing one or more resistive elements, capacitive elements, or wire bonds on said first substrate, and creating said second electrical identifier comprises placing one or more resistive elements, capacitive elements, or wire bonds on said second substrate. 6. The method of claim 1, wherein at least one of said first and second optical identifier is formed by one or more resistive elements, capacitive elements, and wire bonds. 7. The method of claim 1, wherein assembling said first and second devices occurs in a single production run. 8. The method of claim 1, further comprising:
importing said first and second designs into one or more memories of said production line system, wherein said first imported design refers to said first optical identifier and said first electrical identifier and said second imported design refers to said second optical identifier and said second electrical identifier. 9. The method of claim 1, further comprising:
adjusting one or more settings of one or more machines in said production line system based at least in part on said first or second electrical identifier. 10. The method of claim 1, wherein said first and second substrates are selected from a set of standard substrates. 11. The method of claim 1, further comprising:
loading said first and second components together on said production line system, wherein said first components and said second components are selected from a single group of components. 12. The method of claim 1, further comprising:
loading said first and second substrates on said production line system, wherein each of said first and said second substrates is a common substrate for one or more families of devices. 13. A System in Package, SiP, device, comprising:
a substrate, wherein said substrate comprises an optical identifier for said substrate, on a surface of said substrate; and a plurality of components arranged on said substrate to define an electrical identifier corresponding to said SiP device. 14. The device of claim 13, wherein at least one of said plurality of components is a SiP device. 15. The device of claim 13, wherein said electrical identifier corresponds to a unique design of said SiP device. 16. A production line system for manufacturing a plurality of System in Package, SiP, devices, comprising:
one or more memories of said production line system for storing at least a first design of a first of said plurality of SiP devices and at least a second design of a second of said plurality of SiP devices, such that said first and second designs are both contained in said one or more memories of said production line system; production line storage equipment, wherein said storage equipment is configured to store a set of preselected components on said production line system and to store first and second substrates on said production line system; and one or more processors configured to control one or more machines of said production line system to assemble said first and second SiP devices in a single production run, wherein said first design uses at least one component from said set of preselected components and said first substrate, and said second design uses at least one component from said set of preselected components and said second substrate. 17. The production line system of claim 16, wherein said one or more processors are further configured to read one or more electrical identifiers corresponding to said first and second designs and control said production line to assemble said first and second SiP devices according to said electrical identifiers. 18. The production line system of claim 16, wherein said plurality of SiP device designs use only components and substrates from the sets of preselected components and substrates. 19. The production line system of claim 16, further comprising:
one or more pieces of equipment programmed to automatically adjust their settings to perform unique activities needed for each substrate based on a unique identifier for each of said substrates in conjunction with said design when said substrate is loaded on each said piece of equipment. | Systems and processes for flexible and/or low volume product manufacture, including cost effective ways to manufacture low volume system level devices. In one aspect, this disclosure enables the manufacture of a plurality of System in Package (SiP) devices. In one aspect, the devices include one or more of an optical and electrical identifier, corresponding to substrates and/or product designs. The identifiers can be used in the assembly of the devices. 1. A method for manufacturing a plurality of System in Package, SiP, devices on a production line system, comprising:
assembling a first device of said plurality of SiP devices, wherein assembling said first device comprises:
arranging a first plurality of components on a first substrate according to a first design, wherein said first substrate has a first optical identifier on a surface of said first substrate, and
creating a first electrical identifier related to said first design; and
assembling a second device of said plurality of SiP devices, wherein assembling said second device comprises:
arranging a second plurality of components on a second substrate according to a second, different design, wherein said second substrate has a second optical identifier on a surface of said second substrate, and
creating a second electrical identifier related to said second design. 2. The method of claim 1, wherein said first and second substrates have an identical layout and said first and second optical identifiers are the same. 3. The method of claim 1, wherein said first and second substrates have a different layout from each other and said first and second optical identifiers are different from each other. 4. The method of claim 1, wherein said first and second substrates are part of a common panel. 5. The method of claim 1, wherein creating said first electrical identifier comprises placing one or more resistive elements, capacitive elements, or wire bonds on said first substrate, and creating said second electrical identifier comprises placing one or more resistive elements, capacitive elements, or wire bonds on said second substrate. 6. The method of claim 1, wherein at least one of said first and second optical identifiers is formed by one or more resistive elements, capacitive elements, and wire bonds. 7. The method of claim 1, wherein assembling said first and second devices occurs in a single production run. 8. The method of claim 1, further comprising:
importing said first and second designs into one or more memories of said production line system, wherein said first imported design refers to said first optical identifier and said first electrical identifier and said second imported design refers to said second optical identifier and said second electrical identifier. 9. The method of claim 1, further comprising:
adjusting one or more settings of one or more machines in said production line system based at least in part on said first or second electrical identifier. 10. The method of claim 1, wherein said first and second substrates are selected from a set of standard substrates. 11. The method of claim 1, further comprising:
loading said first and second components together on said production line system, wherein said first components and said second components are selected from a single group of components. 12. The method of claim 1, further comprising:
loading said first and second substrates on said production line system, wherein each of said first and said second substrates is a common substrate for one or more families of devices. 13. A System in Package, SiP, device, comprising:
a substrate, wherein said substrate comprises an optical identifier for said substrate, on a surface of said substrate; and a plurality of components arranged on said substrate to define an electrical identifier corresponding to said SiP device. 14. The device of claim 13, wherein at least one of said plurality of components is a SiP device. 15. The device of claim 13, wherein said electrical identifier corresponds to a unique design of said SiP device. 16. A production line system for manufacturing a plurality of System in Package, SiP, devices, comprising:
one or more memories of said production line system for storing at least a first design of a first of said plurality of SiP devices and at least a second design of a second of said plurality of SiP devices, such that said first and second designs are both contained in said one or more memories of said production line system; production line storage equipment, wherein said storage equipment is configured to store a set of preselected components on said production line system and to store first and second substrates on said production line system; and one or more processors configured to control one or more machines of said production line system to assemble said first and second SiP devices in a single production run, wherein said first design uses at least one component from said set of preselected components and said first substrate, and said second design uses at least one component from said set of preselected components and said second substrate. 17. The production line system of claim 16, wherein said one or more processors are further configured to read one or more electrical identifiers corresponding to said first and second designs and control said production line to assemble said first and second SiP devices according to said electrical identifiers. 18. The production line system of claim 16, wherein said plurality of SiP device designs use only components and substrates from the sets of preselected components and substrates. 19. The production line system of claim 16, further comprising:
one or more pieces of equipment programmed to automatically adjust their settings to perform unique activities needed for each substrate based on a unique identifier for each of said substrates in conjunction with said design when said substrate is loaded on each said piece of equipment. | 2,100
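Claims 9 and 19 above describe production-line equipment adjusting its settings per substrate based on an identifier read in conjunction with the design. A minimal sketch of that lookup, assuming a resistive electrical identifier; the identifier values, design names, and settings below are hypothetical and not taken from the claims:

```python
# Hypothetical map from a measured identifier resistance (ohms) to a design
# and its per-machine settings. None of these names come from the patent.
DESIGN_BY_IDENTIFIER = {
    1000: {"design": "design_A", "placement_speed": 0.8, "reflow_temp_c": 245},
    2200: {"design": "design_B", "placement_speed": 0.6, "reflow_temp_c": 250},
}

def settings_for_substrate(measured_ohms, tolerance=0.05):
    """Match a measured identifier resistance to a known design within tolerance."""
    for nominal, settings in DESIGN_BY_IDENTIFIER.items():
        if abs(measured_ohms - nominal) <= nominal * tolerance:
            return settings
    raise ValueError("no design matches identifier %r ohms" % measured_ohms)

# A 1020-ohm reading falls within 5% of the nominal 1000-ohm identifier.
assert settings_for_substrate(1020)["design"] == "design_A"
```

The tolerance band stands in for measurement noise on the identifier; an unrecognized reading raises rather than guessing a design.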
6,682 | 6,682 | 16,135,576 | 2,139 | A data storage system can have one or more hosts connected to a data storage subsystem with the host having a host processor and the data storage subsystem having a controller. Write back data generated at the host triggers the host processor to allocate a cache location in the data storage subsystem where the generated data is subsequently stored. The generated write back data is written in a non-volatile destination address as directed by the controller prior to waiting for a secondary event with the generated data stored in both the cache location and the non-volatile destination address. Detection of the secondary event prompts the controller to signal the host processor that the cache location is free for new data. | 1. A data storage system comprising a host connected to a data storage subsystem, the host comprising a host processor configured to generate a write command and allocate a cache location for write back data, the data storage subsystem comprising a controller configured to complete the write command and wait for a secondary event prior to signaling the host processor the cache location is free for new data. 2. The data storage system of claim 1, wherein the cache location is a non-volatile type of memory. 3. The data storage system of claim 1, wherein the cache location is a volatile type of memory. 4. The data storage system of claim 1, wherein the host is physically located in a different city than the data storage subsystem. 5. The data storage system of claim 1, wherein the host comprises a logical device interface and an object interface. 6. The data storage system of claim 5, wherein the logical device interface employs a non-volatile memory express over fabrics (NVMeoF) protocol. 7. The data storage system of claim 1, wherein the host is connected to the data storage subsystem via a peripheral component interconnect express (PCIe) bus employing a non-volatile memory express (NVMe) protocol. 8. 
A method comprising:
connecting a host to a data storage subsystem, the host comprising a host processor, the data storage subsystem comprising a controller; generating write back data at the host; allocating a cache location in the data storage subsystem for the write back data with the host processor; storing the generated write back data in the allocated cache location; writing the generated write back data in a non-volatile destination address as directed by the controller; waiting for a first secondary event with the generated data stored in both the cache location and the non-volatile destination address; and signaling the host processor that the cache location is free for new data in response to detection of the first secondary event. 9. The method of claim 8, wherein the controller signals the host processor the write command has been completed after writing the generated data to the non-volatile destination address. 10. The method of claim 9, wherein the write command has been completed signal occurs prior to the first secondary event occurring. 11. The method of claim 9, wherein the write command has been completed signal is separate from the signal the cache location is free for new data. 12. The method of claim 8, wherein the controller signals the host processor that the cache location is free after the first secondary event is detected and a second secondary event is detected, the first and second secondary events being different. 13. The method of claim 8, wherein the first secondary event is an out-of-band handshake. 14. The method of claim 8, wherein the first secondary event is a cache memory of the data storage subsystem having a predetermined volume of current data. 15. The method of claim 8, wherein the first secondary event is a 16. The method of claim 8, wherein the controller alters the first secondary event in response to detected data storage conditions in the data storage subsystem. 17. 
The method of claim 8, wherein the controller adds a second secondary event in response to predicted data storage conditions in the data storage subsystem. 18. A method comprising:
generating write back data for a data storage subsystem; allocating a first cache location in the data storage subsystem for the write back data with a hardware processor; storing the generated write back data in the first cache location; mirroring the write back data to a second cache location of the data storage subsystem; writing the write back data in a non-volatile destination address as directed by a data storage subsystem controller; waiting for a first secondary event with the write back data stored in the first cache location or the second cache location and the non-volatile destination address; and signaling the hardware processor that the first or second cache location is free for new data in response to detection of the first secondary event. 19. The method of claim 18, wherein the mirroring of data is conducted passively and in response to writing data to the first cache location. 20. The method of claim 18, wherein the first cache location is a volatile type of memory and the second cache location is a non-volatile type of memory and the hardware processor is located in a host system connected to the data storage subsystem. | 2,100
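The sequence claimed above (allocate a cache location, store the write-back data, write it to a non-volatile destination under controller direction, then wait for a secondary event before reporting the slot free) can be sketched as follows; the class and method names are illustrative, not from the patent:

```python
class Subsystem:
    """Hedged sketch of the claimed write-back flow, not a real controller."""

    def __init__(self):
        self.cache = {}          # host-allocated cache locations
        self.nonvolatile = {}    # non-volatile destination addresses
        self.free_slots = []     # slots signaled back to the host as free

    def allocate(self, slot, data):
        self.cache[slot] = data                      # store in allocated cache slot

    def write_back(self, slot, address):
        self.nonvolatile[address] = self.cache[slot]  # controller-directed NV write

    def on_secondary_event(self, slot):
        # e.g. an out-of-band handshake or the cache reaching a fill threshold
        self.free_slots.append(slot)                 # only now is the slot reusable

s = Subsystem()
s.allocate("slot0", b"payload")
s.write_back("slot0", 0x1000)
# Data now lives in both places, but the slot is not yet reported free.
assert s.nonvolatile[0x1000] == b"payload" and not s.free_slots
s.on_secondary_event("slot0")
```

The point of the ordering is that write completion and cache-slot release are decoupled: the controller can acknowledge the write long before the secondary event frees the slot.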
6,683 | 6,683 | 15,443,547 | 2,122 | Techniques for training a deep neural network from user interaction workflow activities occurring among distributed computing devices are disclosed herein. In an example, processing of input data (such as input medical imaging data) is performed at a client computing device with the execution of an algorithm of a deep neural network. A set of updated training parameters are generated to update the algorithm of the deep neural network, based on user interaction activities (such as user acceptance and user modification in a graphical user interface) that occur with the results of the executed algorithm. The generation and collection of the updated training parameters at a server, received from a plurality of distributed client sites, can be used to refine, improve, and train the algorithm of the deep neural network for subsequent processing and execution. | 1. A method for training a deep neural network from workflow activities in a computing device, performed by electronic operations executed by the computing device, with the computing device having at least one processor and at least one memory, and with the electronic operations comprising:
generating model output of source data in a graphical user interface of the computing device, wherein the model output of the source data is produced using execution of an algorithm of a deep neural network on a set of source data; receiving, in the graphical user interface, user acceptance of the model output of the source data generated in the graphical user interface; generating updated parameters to update the algorithm of the deep neural network, wherein the updated parameters to update the algorithm are based on the user acceptance that is received in the graphical user interface; and transmitting, to a parameter server, the updated parameters to update the algorithm of the deep neural network. 2. The method of claim 1,
wherein the updated parameters to update the algorithm of the deep neural network provide reinforcement of weights used by the algorithm, in response to user input received with the computing device, and wherein the user input indicates the user acceptance that is received in the graphical user interface. 3. The method of claim 1, the electronic operations comprising:
receiving, in the graphical user interface, user modification of the model output of the source data generated in the graphical user interface; wherein the updated parameters to update the algorithm of the deep neural network provide changes of weights used by the algorithm, in response to user input received with the computing device; and wherein the user input indicates the user modification that is received in the graphical user interface. 4. The method of claim 3, the electronic operations comprising:
calculating a difference between the model output of the source data and updated output of the source data, wherein the updated output of the source data is provided from the user modification of the model output; wherein the updated parameters to update the algorithm of the deep neural network provide an indication of the calculated difference between the model output of the source data and the updated output of the source data. 5. The method of claim 4,
wherein calculating the difference between the model output of the source data and updated output includes calculating changes to a plurality of weights applied by the algorithm of the deep neural network, and wherein the updated parameters to update the algorithm of the deep neural network indicate the changes to the plurality of weights. 6. The method of claim 1, the electronic operations comprising:
executing a user interaction workflow, the user interaction workflow including the operations of generating the model output of the source data, the user interaction workflow performed with an execution of a first version of the algorithm of the deep neural network; executing a parallel algorithm workflow concurrently with the user interaction workflow, the parallel algorithm workflow including the operations of generating an expected model output of the source data, wherein the expected model output of the source data is produced using an execution of a second version of the algorithm of the deep neural network, wherein the second version of the algorithm of the deep neural network operates with received parameters provided from the parameter server; receiving, in the graphical user interface, user modifications of the model output of the source data generated in the graphical user interface, prior to receiving the user acceptance; and determining a difference in parameters used in the first version of the algorithm of the deep neural network and the parameters used in the second version of the algorithm of the deep neural network; wherein transmitting the updated parameters for training of the deep neural network includes transmitting the determined difference in parameters. 7. The method of claim 1,
wherein the source data is medical imaging data that represents one or more human anatomical features in one or more medical images, wherein the algorithm of the deep neural network performs automated workflow operations, including at least one of: detection, segmentation, quantification, or prediction operations, and wherein the automated workflow operations are performed on identified characteristics of one or more of the human anatomical features in the one or more medical images. 8. The method of claim 7,
wherein the model output of the source data includes a change in visualization to a display of the one or more human anatomical features in the one or more medical images, and wherein the change in visualization to the display of the one or more human anatomical features in the one or more medical images is further changed by a user modification received with the computing device, wherein the user modification received with the computing device causes a further change to the visualization to the display of the one or more of the human anatomical features, the user modification received from a first user input received with the computing device via a human input device, and wherein the user acceptance received with the computing device causes an acceptance of the further change to the visualization of the display of the one or more of the human anatomical features, the user acceptance received from a second user input received with the computing device via the human input device. 9. The method of claim 1, the electronic operations comprising:
receiving, from the parameter server, subsequent received parameters for subsequent operation of the algorithm for the deep neural network; and operating the algorithm for the deep neural network on a subsequent set of source data, based on use of the subsequent received parameters with the algorithm of the deep neural network. 10. At least one non-transitory machine-readable medium, the machine-readable medium including instructions, which when executed by a machine having a hardware processor, cause the machine to perform operations that:
generate model output of source data in a graphical user interface, wherein the model output of the source data is produced using execution of an algorithm of a deep neural network on a set of source data; receive, in the graphical user interface, user acceptance of the model output of the source data generated in the graphical user interface; generate updated parameters to update the algorithm of the deep neural network, wherein the updated parameters to update the algorithm are based on the user acceptance that is received in the graphical user interface; and transmit, to a parameter server, the updated parameters to update the algorithm of the deep neural network. 11. The machine-readable medium of claim 10,
wherein the updated parameters to update the algorithm of the deep neural network provide reinforcement of weights used by the algorithm, in response to received user input, and wherein the received user input indicates the user acceptance that is received in the graphical user interface. 12. The machine-readable medium of claim 10, the medium further including instructions that cause the machine to perform operations that:
receive, in the graphical user interface, user modification of the model output of the source data generated in the graphical user interface; wherein the updated parameters to update the algorithm of the deep neural network provide changes of weights used by the algorithm, in response to received user input; and wherein the received user input indicates the user modification that is received in the graphical user interface. 13. The machine-readable medium of claim 12, the medium further including instructions that cause the machine to perform operations that:
calculate a difference between the model output of the source data and updated output of the source data, wherein the updated output of the source data is provided from the user modification of the model output; wherein the updated parameters to update the algorithm of the deep neural network provide an indication of the calculated difference between the model output of the source data and the updated output of the source data. 14. The machine-readable medium of claim 13,
wherein calculating the difference between the model output of the source data and updated output includes calculating changes to a plurality of weights applied by the algorithm of the deep neural network, and wherein the updated parameters to update the algorithm of the deep neural network indicate the changes to the plurality of weights. 15. The machine-readable medium of claim 10, the medium including instructions that cause the machine to perform operations that:
execute a user interaction workflow, the user interaction workflow including the operations of generating the model output of the source data, the user interaction workflow performed with an execution of a first version of the algorithm of the deep neural network; execute a parallel algorithm workflow concurrently with the user interaction workflow, the parallel algorithm workflow including the operations of generating an expected model output of the source data, wherein the expected model output of the source data is produced using an execution of a second version of the algorithm of the deep neural network, wherein the second version of the algorithm of the deep neural network operates with received parameters provided from the parameter server; receive, in the graphical user interface, user modifications of the model output of the source data generated in the graphical user interface, prior to receiving the user acceptance; and determine a difference in parameters used in the first version of the algorithm of the deep neural network and the parameters used in the second version of the algorithm of the deep neural network; wherein transmitting the updated parameters for training of the deep neural network includes transmitting the determined difference in parameters. 16. The machine-readable medium of claim 10,
wherein the source data is medical imaging data that represents one or more human anatomical features in one or more medical images, and wherein the algorithm of the deep neural network performs automated workflow operations, including at least one of: detection, segmentation, quantification, or prediction operations, and wherein the automated workflow operations are performed on identified characteristics of one or more of the human anatomical features in the one or more medical images. 17. The machine-readable medium of claim 16,
wherein the model output of the source data includes a change in visualization to a display of the one or more human anatomical features in the one or more medical images, and wherein the change in visualization to the display of the one or more human anatomical features in the one or more medical images is further changed by user modification, wherein the user modification causes a further change to the visualization to the display of the one or more of the human anatomical features, the user modification received from a first user input received via a human input device, and wherein the user acceptance causes an acceptance of the further change to the visualization of the display of the one or more of the human anatomical features, the user acceptance received from a second user input received via the human input device. 18. The machine-readable medium of claim 10, the medium including instructions that cause the machine to perform operations that:
receive, from the parameter server, subsequent received parameters for subsequent operation of the algorithm for the deep neural network; and operate the algorithm for the deep neural network on a subsequent set of source data, based on use of the subsequent received parameters with the algorithm of the deep neural network. 19. A system, comprising:
a medical imaging viewing system, comprising processing circuitry having at least one processor and at least one memory, the processing circuitry to execute instructions with the at least one processor and the at least one memory to: generate model output of source data in a graphical user interface, wherein the model output of the source data is produced using execution of an algorithm of a deep neural network on a set of source data; receive, in the graphical user interface, user acceptance of the model output of the source data generated in the graphical user interface; generate updated parameters to update the algorithm of the deep neural network, wherein the updated parameters to update the algorithm are based on the user acceptance that is received in the graphical user interface; and transmit, to a parameter server, the updated parameters to update the algorithm of the deep neural network. 20. The system of claim 19, the processing circuitry to execute further instructions with the at least one processor and the at least one memory to:
calculate the updated parameters to update the algorithm of the deep neural network to provide reinforcement of weights used by the algorithm, in response to received user input; wherein the user input indicates the user acceptance that is received in the graphical user interface; wherein the source data is medical imaging data that represents human anatomical features in one or more medical images; wherein the algorithm of the deep neural network performs automated workflow operations, including at least one of: detection, segmentation, quantification, or prediction operations; and wherein the automated workflow operations are performed on identified characteristics of one or more of the human anatomical features in the one or more medical images. 21. The system of claim 19, the processing circuitry to execute further instructions with the at least one processor and the at least one memory to:
receive, in the graphical user interface, user modification of the model output of the source data generated in the graphical user interface; and calculate a difference between the model output of the source data and updated output of the source data, wherein the updated output of the source data is provided from the user modification of the model output; wherein the updated parameters to update the algorithm of the deep neural network provide changes of weights used by the algorithm, in response to received user input; wherein the updated parameters to update the algorithm of the deep neural network provide an indication of the calculated difference between the model output of the source data and the updated output of the source data; and wherein the user input indicates the user modification that is received in the graphical user interface. 22. The system of claim 19, the processing circuitry to execute further instructions with the at least one processor and the at least one memory to:
execute a user interaction workflow, the user interaction workflow including operations to generate the model output of the source data, the user interaction workflow performed with an execution of a first version of the algorithm of the deep neural network; execute a parallel algorithm workflow concurrently with the user interaction workflow, the parallel algorithm workflow including the operations of generating an expected model output of the source data, wherein the expected model output of the source data is produced using an execution of a second version of the algorithm of the deep neural network, wherein the second version of the algorithm of the deep neural network operates with received parameters provided from the parameter server; receive, in the graphical user interface, user modifications of the model output of the source data generated in the graphical user interface, prior to receiving the user acceptance; and determine a difference in parameters used in the first version of the algorithm of the deep neural network and the parameters used in the second version of the algorithm of the deep neural network; wherein transmission of the updated parameters for training of the deep neural network includes transmission of the determined difference in parameters.
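The update logic described in the claims above (user acceptance reinforces the current weights; user modification yields a calculated difference to transmit to the parameter server) can be sketched as below. The simple proportional delta is a stand-in, not the patent's actual update rule, and all names are illustrative:

```python
def updated_parameters(weights, model_output, user_output, lr=0.1):
    """Per-weight changes: all zeros when the user accepted the output as-is."""
    error = [u - m for u, m in zip(user_output, model_output)]
    if all(e == 0 for e in error):      # user acceptance: reinforcement, no change
        return [0.0] * len(weights)
    return [lr * e for e in error]      # user modification: weight deltas to send

# Acceptance in the GUI: nothing to correct, zero deltas are transmitted.
assert updated_parameters([0.5, 0.5], [1.0, 2.0], [1.0, 2.0]) == [0.0, 0.0]
# Modification in the GUI: the calculated difference goes to the parameter server.
deltas = updated_parameters([0.5, 0.5], [1.0, 2.0], [1.0, 2.5])
```

Transmitting only the deltas (rather than full weights) matches the claims' emphasis on sending "changes to the plurality of weights" from each distributed client site.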
wherein the source data is medical imaging data that represents one or more human anatomical features in one or more medical images, wherein the algorithm of the deep neural network performs automated workflow operations, including at least one of: detection, segmentation, quantification, or prediction operations, and wherein the automated workflow operations are performed on identified characteristics of one or more of the human anatomical features in the one or more medical images. 8. The method of claim 7,
wherein the model output of the source data includes a change in visualization to a display of the one or more human anatomical features in the one or more medical images, and wherein the change in visualization to the display of the one or more human anatomical features in the one or more medical images is further changed by a user modification received with the computing device, wherein the user modification received with the computing device causes a further change to the visualization to the display of the one or more of the human anatomical features, the user modification received from a first user input received with the computing device via a human input device, and wherein the user acceptance received with the computing device causes an acceptance of the further change to the visualization of the display of the one or more of the human anatomical features, the user acceptance received from a second user input received with the computing device via the human input device. 9. The method of claim 1, the electronic operations comprising:
receiving, from the parameter server, subsequent received parameters for subsequent operation of the algorithm for the deep neural network; and operating the algorithm for the deep neural network on a subsequent set of source data, based on use of the subsequent received parameters with the algorithm of the deep neural network. 10. At least one non-transitory machine-readable medium, the machine-readable medium including instructions, which when executed by a machine having a hardware processor, cause the machine to perform operations that:
generate model output of source data in a graphical user interface, wherein the model output of the source data is produced using execution of an algorithm of a deep neural network on a set of source data; receive, in the graphical user interface, user acceptance of the model output of the source data generated in the graphical user interface; generate updated parameters to update the algorithm of the deep neural network, wherein the updated parameters to update the algorithm are based on the user acceptance that is received in the graphical user interface; and transmit, to a parameter server, the updated parameters to update the algorithm of the deep neural network. 11. The machine-readable medium of claim 10,
wherein the updated parameters to update the algorithm of the deep neural network provide reinforcement of weights used by the algorithm, in response to received user input, and wherein the received user input indicates the user acceptance that is received in the graphical user interface. 12. The machine-readable medium of claim 10, the medium further including instructions that cause the machine to perform operations that:
receive, in the graphical user interface, user modification of the model output of the source data generated in the graphical user interface; wherein the updated parameters to update the algorithm of the deep neural network provide changes of weights used by the algorithm, in response to received user input; and wherein the received user input indicates the user modification that is received in the graphical user interface. 13. The machine-readable medium of claim 12, the medium further including instructions that cause the machine to perform operations that:
calculate a difference between the model output of the source data and updated output of the source data, wherein the updated output of the source data is provided from the user modification of the model output; wherein the updated parameters to update the algorithm of the deep neural network provide an indication of the calculated difference between the model output of the source data and the updated output of the source data. 14. The machine-readable medium of claim 13,
wherein calculating the difference between the model output of the source data and updated output includes calculating changes to a plurality of weights applied by the algorithm of the deep neural network, and wherein the updated parameters to update the algorithm of the deep neural network indicate the changes to the plurality of weights. 15. The machine-readable medium of claim 10, the medium including instructions that cause the machine to perform operations that:
execute a user interaction workflow, the user interaction workflow including the operations of generating the model output of the source data, the user interaction workflow performed with an execution of a first version of the algorithm of the deep neural network; execute a parallel algorithm workflow concurrently with the user interaction workflow, the parallel algorithm workflow including the operations of generating an expected model output of the source data, wherein the expected model output of the source data is produced using an execution of a second version of the algorithm of the deep neural network, wherein the second version of the algorithm of the deep neural network operates with received parameters provided from the parameter server; receive, in the graphical user interface, user modifications of the model output of the source data generated in the graphical user interface, prior to receiving the user acceptance; and determine a difference in parameters used in the first version of the algorithm of the deep neural network and the parameters used in the second version of the algorithm of the deep neural network; wherein transmitting the updated parameters for training of the deep neural network includes transmitting the determined difference in parameters. 16. The machine-readable medium of claim 10,
wherein the source data is medical imaging data that represents one or more human anatomical features in one or more medical images, and wherein the algorithm of the deep neural network performs automated workflow operations, including at least one of: detection, segmentation, quantification, or prediction operations, and wherein the automated workflow operations are performed on identified characteristics of one or more of the human anatomical features in the one or more medical images. 17. The machine-readable medium of claim 16,
wherein the model output of the source data includes a change in visualization to a display of the one or more human anatomical features in the one or more medical images, and wherein the change in visualization to the display of the one or more human anatomical features in the one or more medical images is further changed by user modification, wherein the user modification causes a further change to the visualization to the display of the one or more of the human anatomical features, the user modification received from a first user input received via a human input device, and wherein the user acceptance causes an acceptance of the further change to the visualization of the display of the one or more of the human anatomical features, the user acceptance received from a second user input received via the human input device. 18. The machine-readable medium of claim 10, the medium including instructions that cause the machine to perform operations that:
receive, from the parameter server, subsequent received parameters for subsequent operation of the algorithm for the deep neural network; and operate the algorithm for the deep neural network on a subsequent set of source data, based on use of the subsequent received parameters with the algorithm of the deep neural network. 19. A system, comprising:
a medical imaging viewing system, comprising processing circuitry having at least one processor and at least one memory, the processing circuitry to execute instructions with the at least one processor and the at least one memory to: generate model output of source data in a graphical user interface, wherein the model output of the source data is produced using execution of an algorithm of a deep neural network on a set of source data; receive, in the graphical user interface, user acceptance of the model output of the source data generated in the graphical user interface; generate updated parameters to update the algorithm of the deep neural network, wherein the updated parameters to update the algorithm are based on the user acceptance that is received in the graphical user interface; and transmit, to a parameter server, the updated parameters to update the algorithm of the deep neural network. 20. The system of claim 19, the processing circuitry to execute further instructions with the at least one processor and the at least one memory to:
calculate the updated parameters to update the algorithm of the deep neural network to provide reinforcement of weights used by the algorithm, in response to received user input; wherein the user input indicates the user acceptance that is received in the graphical user interface; wherein the source data is medical imaging data that represents human anatomical features in one or more medical images; wherein the algorithm of the deep neural network performs automated workflow operations, including at least one of: detection, segmentation, quantification, or prediction operations; and wherein the automated workflow operations are performed on identified characteristics of one or more of the human anatomical features in the one or more medical images. 21. The system of claim 19, the processing circuitry to execute further instructions with the at least one processor and the at least one memory to:
receive, in the graphical user interface, user modification of the model output of the source data generated in the graphical user interface; and calculate a difference between the model output of the source data and updated output of the source data, wherein the updated output of the source data is provided from the user modification of the model output; wherein the updated parameters to update the algorithm of the deep neural network provide changes of weights used by the algorithm, in response to received user input; wherein the updated parameters to update the algorithm of the deep neural network provide an indication of the calculated difference between the model output of the source data and the updated output of the source data; and wherein the user input indicates the user modification that is received in the graphical user interface. 22. The system of claim 19, the processing circuitry to execute further instructions with the at least one processor and the at least one memory to:
execute a user interaction workflow, the user interaction workflow including operations to generate the model output of the source data, the user interaction workflow performed with an execution of a first version of the algorithm of the deep neural network; execute a parallel algorithm workflow concurrently with the user interaction workflow, the parallel algorithm workflow including the operations of generating an expected model output of the source data, wherein the expected model output of the source data is produced using an execution of a second version of the algorithm of the deep neural network, wherein the second version of the algorithm of the deep neural network operates with received parameters provided from the parameter server; receive, in the graphical user interface, user modifications of the model output of the source data generated in the graphical user interface, prior to receiving the user acceptance; and determine a difference in parameters used in the first version of the algorithm of the deep neural network and the parameters used in the second version of the algorithm of the deep neural network; wherein transmission of the updated parameters for training of the deep neural network includes transmission of the determined difference in parameters. | 2,100 |
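The feedback loop claimed above — generate model output, capture the user's acceptance or modification in the GUI, derive a *difference in parameters*, and transmit only that difference to a parameter server — can be sketched in miniature. This is a hedged illustration, not the patent's method: `feedback_parameter_delta`, the linear stand-in model, and the learning rate are all hypothetical, chosen only to show how a user-corrected output yields a transmittable parameter delta.

```python
def feedback_parameter_delta(weights, inputs, user_outputs, lr=0.05):
    """Derive a parameter difference from user-corrected outputs.

    Hypothetical sketch (names not from the patent): one gradient step of
    mean-squared error between the current model's predictions and the
    outputs as modified or accepted by the user in the graphical interface.
    The returned delta is what a client site would transmit to the
    parameter server, rather than the full parameter vector.
    """
    n = len(inputs)
    grads = [0.0] * len(weights)
    for x, y_user in zip(inputs, user_outputs):
        pred = sum(w * xi for w, xi in zip(weights, x))  # current model output
        err = pred - y_user                              # user-correction signal
        for j, xi in enumerate(x):
            grads[j] += err * xi / n                     # accumulate MSE gradient
    return [-lr * g for g in grads]                      # "difference in parameters"

def apply_deltas(weights, deltas):
    """A parameter server would aggregate deltas from many client sites."""
    for d in deltas:
        weights = [w + dw for w, dw in zip(weights, d)]
    return weights
```

For example, a server holding `weights = [0.0, 0.0]` that receives one delta computed from three user-corrected samples moves the model's predictions measurably closer to the user-accepted outputs; repeating across distributed sites is the aggregation step the claims assign to the parameter server.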
6,684 | 6,684 | 15,637,134 | 2,159 | The current document is directed to a resource-exchange system that facilitates resource exchange and sharing among computing facilities. The currently disclosed methods and systems employ efficient, distributed-search-based auction methods and subsystems within distributed computer systems that include large numbers of geographically distributed data centers to locate resource-provider computing facilities that match the resource needs of resource-consumer computing facilities. In one implementation, resource-provider computing facilities automatically generate hosting fees for hosting computational-resources-consuming entities on behalf of resource-consumer computing facilities that are included in bid-response messages returned by the resource-provider computing facilities in response to receiving bid-request messages. In another implementation, a cloud-exchange system automatically generates hosting fees on behalf of resource-provider computing facilities. | 1. An automated resource-exchange system comprising:
multiple resource-provider computing facilities that each
includes multiple computers, each having one or more processors and one or more memories, and
includes a local cloud-exchange instance;
multiple resource-consumer computing facilities that each
includes multiple computers, each having one or more processors and one or more memories,
includes a local cloud-exchange instance, and
transmits a hosting request for hosting a computational-resources-consuming entity; and
a cloud-exchange system that
is implemented on one or more physical computers, each including one or more processors and one or more memories,
includes a cloud-exchange engine,
receives hosting requests from the resource-consumer computing facilities,
determines a set of one or more candidate resource-provider computing facilities for each received hosting request;
determines a hosting fee for each candidate resource-provider computing facility for each received hosting request; and
uses the determined hosting fees to select one or more resource-provider computing facilities for each received hosting request. 2. The automated resource-exchange system of claim 1 wherein a resource-consumer computing facility includes, in a bid request for hosting one or more computational-resources-consuming entities, a buy policy that includes:
one or more constraints;
configuration parameters that include specifications of multiple computational resources for hosting the one or more computational-resources-consuming entities; and
a search evaluation expression. 3. The automated resource-exchange system of claim 1, wherein the hosting fee is one of:
a fixed fee for hosting one or more computational-resources-consuming entities; and an estimated fee for hosting one or more computational-resources-consuming entities. 4. The automated resource-exchange system of claim 1 wherein an estimated hosting fee is determined according to one of multiple pricing modes. 5. The automated resource-exchange system of claim 4 wherein the multiple pricing modes include:
a fixed-fee pricing mode;
a class-based pricing mode;
a configuration-based pricing mode;
a resource-consumption-class-based pricing mode; and
an estimated resource-consumption-class-based pricing mode. 6. The automated resource-exchange system of claim 5 wherein the estimated fee for hosting a computational-resources-consuming entity is determined according to the fixed-fee pricing mode by:
multiplying a fixed-fee price per unit of time for hosting a computational-resources-consuming entity by a hosting duration, in units of time. 7. The automated resource-exchange system of claim 5 wherein the estimated fee for hosting a computational-resources-consuming entity is determined according to the class-based pricing mode by:
assigning the computational-resources-consuming entity to a computational-resources-consuming-entity class based on configuration-parameter values specified in a buy policy associated with a bid request transmitted to the cloud-exchange engine by a resource-consumer computing facility; and
multiplying a fixed-fee price per unit of time for hosting a computational-resources-consuming entity belonging to the computational-resources-consuming entity class by a hosting duration, in units of time. 8. The automated resource-exchange system of claim 5 wherein the estimated fee for hosting a computational-resources-consuming entity is determined according to the configuration-based pricing mode by:
initializing a variable fee;
for each computational resource for which a configuration parameter value is specified in a buy policy associated with a bid request transmitted to the cloud-exchange engine by a resource-consumer computing facility,
multiplying a fee price per unit of time for the computational resource by a hosting duration, in units of time to generate a computational-resource fee, and
adding the computational-resource fee to the contents of the variable fee. 9. The automated resource-exchange system of claim 5 wherein the estimated hosting fee is determined according to the resource-consumption-class-based pricing mode by:
for each of multiple computational resources to be provided by the resource-provider computational facility for hosting the computational-resources-consuming entity,
estimating consumption by the computational-resources-consuming entity of the computational resource over a hosting period;
assigning the computational-resources-consuming entity to a computational-resources-consuming entity class based on the estimates of computational-resource consumption; and
multiplying a fixed-fee price per unit of time for hosting a computational-resources-consuming entity belonging to the computational-resources-consuming entity class by a hosting duration, in units of time. 10. The automated resource-exchange system of claim 5 wherein the estimated fee for hosting a computational-resources-consuming entity is determined according to the estimated resource-consumption-class-based pricing mode by:
initializing a variable fee; and
for each of multiple computational resources to be provided by the resource-provider computational facility for hosting the computational-resources-consuming entity,
estimating consumption by the computational-resources-consuming entity of the computational resource over a hosting period,
multiplying a fee price per unit of time for the estimated computational-resource consumption by a hosting duration, in units of time, to generate a computational-resource fee, and
adding the computational-resource fee to the contents of the variable fee. 11. The automated resource-exchange system of claim 4 wherein one or more additional fees are added to the estimated hosting fee to generate a final hosting fee for entry into a bid-response message. 12. The automated resource-exchange system of claim 11 wherein an additional uplift fee is added to the estimated hosting fee for one or more additional requested services, the uplift fee for each additional service selected from among:
a fixed fee for the additional service;
a percentage uplift over the initial hosting fee for the additional service; and
a fixed fee and a percentage uplift over the initial hosting fee for the additional service. 13. The automated resource-exchange system of claim 11 wherein an additional migration fee is added to the estimated hosting fee for migrating a computational-resources-consuming entity to the resource-provider computing facility, the migration fee computed by:
initializing a variable fee;
for each computational resource used in migrating the computational-resources-consuming entity to the resource-provider computing facility,
estimating consumption of the computational resource during migration,
multiplying a fee price per unit of time for the estimated computational-resource consumption by a migration duration, in units of time, to generate a computational-resource fee, and
adding the computational-resource fee to the contents of the variable fee. 14. The automated resource-exchange system of claim 2 wherein the configuration parameters that include specifications of multiple computational resources for hosting the one or more computational-resources-consuming entities include configuration parameters for computational resources selected from among:
memory;
CPU bandwidth;
networking bandwidth;
data-storage-access bandwidth; and
data storage. 15. A method that automatically computes a hosting fee within an automated resource-exchange system having multiple resource-provider computing facilities, multiple resource-consumer computing facilities, and a cloud-exchange system, the method comprising:
receiving, by the cloud-exchange engine from a resource-consumer computing facility, a hosting request for hosting one or more computational-resources-consuming entities within a resource-provider computing facility; determining, by the cloud-exchange engine, a set of one or more candidate resource-provider computing facilities for the received hosting request; determining, by the cloud-exchange engine, a hosting fee for each candidate resource-provider computing facility for each received hosting request; and using, by the cloud-exchange engine, the determined hosting fees to select one or more resource-provider computing facilities for each received hosting request. 16. The method of claim 15 wherein a hosting fee is one of:
a fixed fee for hosting one or more computational-resources-consuming entities; and
an estimated fee for hosting one or more computational-resources-consuming entities. 17. The method of claim 15 wherein an estimated hosting fee is determined according to one of multiple pricing modes, the pricing modes including:
a fixed-fee pricing mode;
a class-based pricing mode;
a configuration-based pricing mode;
a resource-consumption-class-based pricing mode; and
an estimated resource-consumption-class-based pricing mode. 18. The method of claim 17 wherein an additional uplift fee is added to the hosting fee for one or more additional requested services, the uplift fee for each additional service selected from among:
a fixed fee for the additional service;
a percentage uplift over the estimated hosting fee for the additional service; and
a fixed fee and a percentage uplift over the estimated hosting fee for the additional service. 19. The method of claim 17 wherein an additional migration fee is added to the hosting fee for migrating a computational-resources-consuming entity to the resource-provider computing facility, the migration fee computed by:
initializing a variable fee;
for each computational resource used in migrating the computational-resources-consuming entity to the resource-provider computing facility,
estimating consumption of the computational resource during migration,
multiplying a fee price per unit of time for the estimated computational-resource consumption, in units of time, to generate a computational-resource fee, and
adding the computational-resource fee to the contents of the variable fee. 20. A physical data-storage device encoded with computer instructions that, when executed by processors within an automated resource-exchange system comprising resource-provider computing facilities, resource-consumer computing facilities, and a cloud-exchange engine, control the automated resource-exchange system to automatically compute a hosting fee by:
receiving, by the cloud-exchange engine from a resource-consumer computing facility, a hosting request for hosting one or more computational-resources-consuming entities within a resource-provider computing facility; determining, by the cloud-exchange engine, a set of one or more candidate resource-provider computing facilities for the received hosting request; determining, by the cloud-exchange engine, a hosting fee for each candidate resource-provider computing facility for each received hosting request; and using, by the cloud-exchange engine, the determined hosting fees to select one or more resource-provider computing facilities for each received hosting request. | The current document is directed to a resource-exchange system that facilitates resource exchange and sharing among computing facilities. The currently disclosed methods and systems employ efficient, distributed-search-based auction methods and subsystems within distributed computer systems that include large numbers of geographically distributed data centers to locate resource-provider computing facilities that match the resource needs of resource-consumer computing facilities. In one implementation, resource-provider computing facilities automatically generate hosting fees for hosting computational-resources-consuming entities on behalf of resource-consumer computing facilities that are included in bid-response messages returned by the resource-provider computing facilities in response to receiving bid-request messages. In another implementation, a cloud-exchange system automatically generates hosting fees on behalf of resource-provider computing facilities. 1. An automated resource-exchange system comprising:
multiple resource-provider computing facilities that each
includes multiple computers, each having one or more processors and one or more memories, and
includes a local cloud-exchange instance;
multiple resource-consumer computing facilities that each
includes multiple computers, each having one or more processors and one or more memories,
includes a local cloud-exchange instance, and
transmits a hosting request for hosting a computational-resources-consuming entity; and
a cloud-exchange system that
is implemented on one or more physical computers, each including one or more processors and one or more memories,
includes a cloud-exchange engine,
receives hosting requests from the resource-consumer computing facilities,
determines a set of one or more candidate resource-provider computing facilities for each received hosting request;
determines a hosting fee for each candidate resource-provider computing facility for each received hosting request; and
uses the determined hosting fees to select one or more resource-provider computing facilities for each received hosting request. 2. The automated resource-exchange system of claim 1 wherein a resource-consumer computing facility includes, in a bid request for hosting one or more computational-resources-consuming entities, a buy policy that includes:
one or more constraints;
configuration parameters that include specifications of multiple computational resources for hosting the one or more computational-resources-consuming entities; and
a search evaluation expression. 3. The automated resource-exchange system of claim 1, wherein the hosting fee is one of:
a fixed fee for hosting one or more computational-resources-consuming entities; and an estimated fee for hosting one or more computational-resources-consuming entities. 4. The automated resource-exchange system of claim 1 wherein an estimated hosting fee is determined according to one of multiple pricing modes. 5. The automated resource-exchange system of claim 4 wherein the multiple pricing modes include:
a fixed-fee pricing mode;
a class-based pricing mode;
a configuration-based pricing mode;
a resource-consumption-class-based pricing mode; and
an estimated resource-consumption-class-based pricing mode. 6. The automated resource-exchange system of claim 5 wherein the estimated fee for hosting a computational-resources-consuming entity is determined according to the fixed-fee pricing mode by:
multiplying a fixed-fee price per unit of time for hosting a computational-resources-consuming entity by a hosting duration, in units of time. 7. The automated resource-exchange system of claim 5 wherein the estimated fee for hosting a computational-resources-consuming entity is determined according to the class-based pricing mode by:
assigning the computational-resources-consuming entity to a computational-resources-consuming-entity class based on configuration-parameter values specified in a buy policy associated with a bid request transmitted to the cloud-exchange engine by a resource-consumer computing facility; and
multiplying a fixed-fee price per unit of time for hosting a computational-resources-consuming entity belonging to the computational-resources-consuming entity class by a hosting duration, in units of time. 8. The automated resource-exchange system of claim 5 wherein the estimated fee for hosting a computational-resources-consuming entity is determined according to the configuration-based pricing mode by:
initializing a variable fee;
for each computational resource for which a configuration parameter value is specified in a buy policy associated with a bid request transmitted to the cloud-exchange engine by a resource-consumer computing facility,
multiplying a fee price per unit of time for the computational resource by a hosting duration, in units of time, to generate a computational-resource fee, and
adding the computational-resource fee to the contents of the variable fee. 9. The automated resource-exchange system of claim 5 wherein the estimated hosting fee is determined according to the resource-consumption-class-based pricing mode by:
for each of multiple computational resources to be provided by the resource-provider computational facility for hosting the computational-resources-consuming entity,
estimating consumption by the computational-resources-consuming entity of the computational resource over a hosting period;
assigning the computational-resources-consuming entity to a computational-resources-consuming entity class based on the estimates of computational-resource consumption; and
multiplying a fixed-fee price per unit of time for hosting a computational-resources-consuming entity belonging to the computational-resources-consuming entity class by a hosting duration, in units of time. 10. The automated resource-exchange system of claim 5 wherein the estimated fee for hosting a computational-resources-consuming entity is determined according to the estimated resource-consumption-class-based pricing mode by:
initializing a variable fee; and
for each of multiple computational resources to be provided by the resource-provider computational facility for hosting the computational-resources-consuming entity,
estimating consumption by the computational-resources-consuming entity of the computational resource over a hosting period,
multiplying a fee price per unit of time for the estimated computational-resource consumption by a hosting duration, in units of time, to generate a computational-resource fee, and
adding the computational-resource fee to the contents of the variable fee. 11. The automated resource-exchange system of claim 4 wherein one or more additional fees are added to the estimated hosting fee to generate a final hosting fee for entry into a bid-response message. 12. The automated resource-exchange system of claim 11 wherein an additional uplift fee is added to the estimated hosting fee for one or more additional requested services, the uplift fee for each additional service selected from among:
a fixed fee for the additional service;
a percentage uplift over the initial hosting fee for the additional service; and
a fixed fee and a percentage uplift over the initial hosting fee for the additional service. 13. The automated resource-exchange system of claim 11 wherein an additional migration fee is added to the estimated hosting fee for migrating a computational-resources-consuming entity to the resource-provider computing facility, the migration fee computed by:
initializing a variable fee;
for each computational resource used in migrating the computational-resources-consuming entity to the resource-provider computing facility,
estimating consumption of the computational resource during migration,
multiplying a fee price per unit of time for the estimated computational-resource consumption by a migration duration, in units of time, to generate a computational-resource fee, and
adding the computational-resource fee to the contents of the variable fee. 14. The automated resource-exchange system of claim 2 wherein the configuration parameters that include specifications of multiple computational resources for hosting the one or more computational-resources-consuming entities include configuration parameters for computational resources selected from among:
memory;
CPU bandwidth;
networking bandwidth;
data-storage-access bandwidth; and
data storage. 15. A method that automatically computes a hosting-fee within an automated resource-exchange system having multiple resource-provider computing facilities, multiple resource-consumer computing facilities, and a cloud-exchange system, the method comprising:
receiving, by the cloud-exchange engine from a resource-consumer computing facility, a hosting request for hosting one or more computational-resources-consuming entities within a resource-provider computing facility; determining, by the cloud-exchange engine, a set of one or more candidate resource-provider computing facilities for the received hosting request; determining, by the cloud-exchange engine, a hosting fee for each candidate resource-provider computing facility for each received hosting request; and using, by the cloud-exchange engine, the determined hosting fees to select one or more resource-provider computing facilities for each received hosting request. 16. The method of claim 15 wherein a hosting fee is one of:
a fixed fee for hosting one or more computational-resources-consuming entities; and
an estimated fee for hosting one or more computational-resources-consuming entities. 17. The method of claim 15 wherein an estimated hosting fee is determined according to one of multiple pricing modes, the pricing modes including:
a fixed-fee pricing mode;
a class-based pricing mode;
a configuration-based pricing mode;
a resource-consumption-class-based pricing mode; and
an estimated resource-consumption-class-based pricing mode. 18. The method of claim 17 wherein an additional uplift fee is added to the hosting fee for one or more additional requested services, the uplift fee for each additional service selected from among:
a fixed fee for the additional service;
a percentage uplift over the estimated hosting fee for the additional service; and
a fixed fee and a percentage uplift over the estimated hosting fee for the additional service. 19. The method of claim 17 wherein an additional migration fee is added to the hosting fee for migrating a computational-resources-consuming entity to the resource-provider computing facility, the migration fee computed by:
initializing a variable fee;
for each computational resource used in migrating the computational-resources-consuming entity to the resource-provider computing facility,
estimating consumption of the computational resource during migration,
multiplying a fee price per unit of time for the estimated computational-resource consumption by a migration duration, in units of time, to generate a computational-resource fee, and
adding the computational-resource fee to the contents of the variable fee. 20. A physical data-storage device encoded with computer instructions that, when executed by processors within an automated resource-exchange system comprising resource-provider computing facilities, resource-consumer computing facilities, and a cloud-exchange engine, control the automated resource-exchange system to automatically compute a hosting-fee by:
receiving, by the cloud-exchange engine from a resource-consumer computing facility, a hosting request for hosting one or more computational-resources-consuming entities within a resource-provider computing facility; determining, by the cloud-exchange engine, a set of one or more candidate resource-provider computing facilities for the received hosting request; determining, by the cloud-exchange engine, a hosting fee for each candidate resource-provider computing facility for each received hosting request; and using, by the cloud-exchange engine, the determined hosting fees to select one or more resource-provider computing facilities for each received hosting request. | 2,100 |
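The fee-computation loops recited in claims 8 through 13 above share one shape: initialize a variable fee, then, for each computational resource, accumulate a per-unit-time price multiplied by a duration, optionally followed by the uplift of claim 12 (a fixed fee, a percentage, or both). A minimal Python sketch of that shape; the function names, resource names, prices, and durations are invented for illustration and are not part of the claims.

```python
# Hypothetical sketch of the configuration-based fee estimate (claim 8)
# and the uplift fees of claim 12. All names and numbers are illustrative.

def configuration_based_fee(unit_prices, requested_resources, hosting_duration):
    """Sum, per requested resource, its per-unit-time price times the duration."""
    fee = 0.0  # "initializing a variable fee"
    for resource, amount in requested_resources.items():
        # "multiplying a fee price per unit of time ... by a hosting duration"
        fee += unit_prices[resource] * amount * hosting_duration
    return fee

def apply_uplift(base_fee, fixed=0.0, percentage=0.0):
    """Claim 12: uplift as a fixed fee, a percentage of the base fee, or both."""
    return base_fee + fixed + base_fee * percentage

prices = {"memory_gb": 0.01, "cpu_cores": 0.05}  # assumed price per unit per hour
request = {"memory_gb": 8, "cpu_cores": 2}       # assumed buy-policy configuration
base = configuration_based_fee(prices, request, hosting_duration=24)
total = apply_uplift(base, fixed=1.0, percentage=0.10)
```

The resource-consumption-based modes of claims 9 and 10 differ only in replacing the configured `amount` with an estimate of consumption over the hosting period.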
6,685 | 6,685 | 16,134,359 | 2,117 | Disclosed is a system for controlling pool/spa components. More particularly, disclosed is a system for controlling pool/spa components including a display screen and one or more processors presenting a control user interface for display on the display screen, wherein the control user interface includes a home screen comprising a first portion containing a first plurality of buttons and/or controls for controlling a first group of the plurality of pool/spa components associated with a first body of water, and a second portion containing a second plurality of buttons and/or controls for controlling a second group of the plurality of pool/spa components associated with a second body of water. | 1. A control system for a pool or spa installation, comprising:
a housing mountable at a pool or spa installation, the housing including a high voltage compartment and a low voltage compartment; a processor positioned in the housing; a WiFi transceiver in communication with the processor and positioned in the housing; a relay positioned in the housing and controlled by the processor, the relay connectable to a power line that provides power to a first pool or spa component; an RS-485 transceiver positioned in the housing and controlled by the processor, the RS-485 transceiver connectable to a second pool or spa component, wherein the relay is positioned in the high voltage compartment, the RS-485 transceiver is positioned in the low voltage compartment, and the processor controls operation of the first and second pool or spa components in response to a control command received by the WiFi transceiver. 2. The control system of claim 1, comprising a second relay positioned in the high voltage compartment of the housing and controlled by the processor, the second relay connectable to a power line that provides power to a third pool or spa component. 3. The control system of claim 1, comprising a printed circuit board,
wherein the processor, the relay, and the RS-485 transceiver are provided on the printed circuit board. 4. The control system of claim 1, wherein the WiFi transceiver is connected to the RS-485 transceiver. 5. The control system of claim 1, wherein the WiFi transceiver wirelessly communicates with a home network. 6. The control system of claim 5, wherein the WiFi transceiver receives a control command for at least one of the first and second pool or spa components from a wireless remote control unit via the home network and transmits the received control command to the processor, the processor controlling the at least one first and second pool or spa component according to the control command. 7. The control system of claim 6, wherein the wireless remote control unit is a cellular device. 8. The control system of claim 1, wherein the WiFi transceiver wirelessly communicates with a cellular device. 9. The control system of claim 8, wherein the WiFi transceiver receives a control command for at least one of the first and second pool or spa components from the cellular device and transmits the received control command to the processor, and the processor controls the at least one first and second pool or spa component according to the control command. 10. The control system of claim 1, wherein the processor is configured to receive control commands from a power company on the demand side during a peak demand period. 11. A control system for a pool or spa installation, comprising:
a housing mountable at a pool or spa installation; a processor positioned in the housing; a WiFi transceiver in communication with the processor and positioned in the housing; a relay positioned in the housing and controlled by the processor, the relay connectable to a power line that provides power to a first pool or spa component; an RS-485 transceiver positioned in the housing and controlled by the processor, the RS-485 transceiver connectable to a second pool or spa component, wherein the processor (i) controls operation of the first and second pool or spa components in response to a control command received by the WiFi transceiver, and (ii) receives control commands from a power company on the demand side during a peak demand period. 12. The control system of claim 11, comprising a second relay positioned in the housing and controlled by the processor, the second relay connectable to a power line that provides power to a third pool or spa component. 13. The control system of claim 11, comprising a printed circuit board,
wherein the processor, the relay, and the RS-485 transceiver are provided on the printed circuit board. 14. The control system of claim 11, wherein the WiFi transceiver is connected to the RS-485 transceiver. 15. The control system of claim 11, wherein the WiFi transceiver wirelessly communicates with a home network. 16. The control system of claim 15, wherein the WiFi transceiver receives a control command for at least one of the first and second pool or spa components from a wireless remote control unit via the home network and transmits the received control command to the processor, the processor controlling the at least one first and second pool or spa component according to the control command. 17. The control system of claim 16, wherein the wireless remote control unit is a cellular device. 18. The control system of claim 11, wherein the WiFi transceiver wirelessly communicates with a cellular device. 19. The control system of claim 18, wherein the WiFi transceiver receives a control command for at least one of the first and second pool or spa components from the cellular device and transmits the received control command to the processor, and the processor controls the at least one first and second pool or spa component according to the control command. 20. The control system of claim 11, wherein the housing includes a high voltage compartment and a low voltage compartment. 21. The control system of claim 20, wherein the relay is positioned in the high voltage compartment, and the RS-485 transceiver is positioned in the low voltage compartment.
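The controller claimed above routes a WiFi-received control command either to a relay in the high voltage compartment (switching power to the first component) or to the RS-485 transceiver in the low voltage compartment (a serial command to the second component). A hypothetical sketch of that dispatch; the class, method, and payload names are assumptions for illustration, not the patented implementation.

```python
# Toy model of the claimed routing: one WiFi entry point, two output paths.

class PoolSpaController:
    def __init__(self):
        self.relay_on = False   # relay powering the first pool/spa component
        self.rs485_log = []     # serial commands sent to the second component

    def handle_wifi_command(self, command):
        """Processor behavior: act on a control command received by WiFi."""
        if command["target"] == "relay":
            # High-voltage side: switch mains power to the first component.
            self.relay_on = command["state"]
        elif command["target"] == "rs485":
            # Low-voltage side: forward a serial command to the second component.
            self.rs485_log.append(command["payload"])
        else:
            raise ValueError("unknown target: %s" % command["target"])

ctrl = PoolSpaController()
ctrl.handle_wifi_command({"target": "relay", "state": True})
ctrl.handle_wifi_command({"target": "rs485", "payload": "SET_HEATER 102F"})
```

Claims 6-9 only add where the WiFi command originates (home network or cellular device); the dispatch itself is unchanged.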
6,686 | 6,686 | 15,823,736 | 2,153 | A document of written content may be obtained. The document may be a candidate for inclusion in a corpus. A first entity associated with the document may be identified. A first discrete entity associated with the first entity may be identified. The relationship associated with the first entity and the first discrete entity may be analyzed. Based on the analyzing, a likelihood that the document contains content that would be detrimental for inclusion in the corpus may be determined. | 1. A method comprising:
obtaining a document of written content, wherein the document is a candidate for inclusion in a corpus; identifying a first entity associated with the document; identifying a first discrete entity associated with the first entity; analyzing a relationship between the first entity and the first discrete entity; and determining, based on the analyzing, a likelihood that the document contains content that would be detrimental for inclusion in the corpus. 2. The method of claim 1, wherein the first entity is an author of the document. 3. The method of claim 1, wherein the first discrete entity is associated with the first entity based on an association of the first entity and the first discrete entity with a second discrete entity. 4. The method of claim 1, wherein the analyzing comprises vectorizing the entity-relationship information related to the first entity and processing the vectorized information in a neural network. 5. The method of claim 1, wherein the analyzing incorporates content analysis of the document. 6. The method of claim 1, wherein the analyzing comprises determining the strength of the relationship. 7. The method of claim 1, further comprising attaching a negative value to the relationship, wherein a negative value reflects a negative sentiment between the first entity and the first discrete entity. 8. A computer program product, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to:
obtain a document of written content, wherein the document is a candidate for inclusion in a corpus; identify a first entity associated with the document; identify a first discrete entity associated with the first entity; analyze a relationship between the first entity and the first discrete entity; and determine, based on the analyzing, a likelihood that the document contains content that would be detrimental for inclusion in the corpus. 9. The computer program product of claim 8, wherein the first entity is an owner of a website on which the document is posted. 10. The computer program product of claim 8, wherein the first discrete entity is associated with the first entity based on an association of the first entity and the first discrete entity with a second discrete entity. 11. The computer program product of claim 8, wherein the analyzing comprises vectorizing the entity-relationship information related to the first entity and processing the vectorized information in a neural network. 12. The computer program product of claim 8, wherein the analyzing incorporates content analysis of the document. 13. The computer program product of claim 8, wherein the analyzing comprises determining the strength of the relationship. 14. The computer program product of claim 8, wherein the program instructions further cause the computer to attach a negative value to the relationship, wherein a negative value reflects a negative sentiment between the first entity and the first discrete entity. 15. A system comprising:
a processor; and a memory in communication with the processor, the memory containing program instructions that, when executed by the processor, are configured to cause the processor to perform a method, the method comprising: obtaining a document of written content, wherein the document is a candidate for inclusion in a corpus; identifying a first entity associated with the document; identifying a first discrete entity associated with the first entity; analyzing a relationship between the first entity and the first discrete entity; and determining, based on the analyzing, a likelihood that the document contains content that would be detrimental for inclusion in the corpus. 16. The system of claim 15, wherein the first entity is an owner of the document. 17. The system of claim 15, wherein the first discrete entity is associated with the first entity based on an association of the first entity and the first discrete entity with a second discrete entity. 18. The system of claim 15, wherein the analyzing comprises vectorizing the entity-relationship information related to the first entity and processing the vectorized information in a neural network. 19. The system of claim 15, wherein the analyzing comprises determining the strength of the relationship. 20. The system of claim 15, where the method further comprises attaching a negative value to the relationship, wherein a negative value reflects a negative sentiment between the first entity and the first discrete entity.
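Claims 6, 7, 19, and 20 above describe scoring a candidate document by the strength of its entity relationships, with negative values attached to negative-sentiment relationships. A toy sketch of one way such a likelihood could be computed; the weighting and clamping scheme here is invented for illustration and is not what the claims specify.

```python
# Hypothetical scoring: strong, negative-sentiment relationships of the
# document's associated entity (e.g. its author) raise the likelihood that
# the document is detrimental for inclusion in the corpus.

def detriment_likelihood(relationships):
    """relationships: list of (strength, sentiment) pairs, sentiment in [-1, 1]."""
    score = 0.0
    for strength, sentiment in relationships:
        if sentiment < 0:                   # "attaching a negative value"
            score += strength * -sentiment  # stronger + more negative -> riskier
    return min(score, 1.0)                  # clamp to a likelihood-like range

low = detriment_likelihood([(0.9, 0.8)])                 # positive ties only
high = detriment_likelihood([(0.9, -0.7), (0.5, -1.0)])  # strong negative ties
```

Claims 4, 11, and 18 would instead vectorize the entity-relationship information and let a neural network produce this score.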
6,687 | 6,687 | 15,614,078 | 2,177 | Systems, methods, and software can be used to generate predictive text on an electronic device. In some aspects, one computer-implemented method includes receiving, at an input method editor (IME) operating on an electronic device, a text prediction indicator for an editable field outputted by an application; determining, based on the text prediction indicator, whether a text input from the editable field is allowed to be stored; in response to determining that the text input from the editable field is allowed to be stored, storing the text input; determining a predictive text; and outputting the predictive text on the electronic device. | 1. A method, comprising:
receiving, at an input method editor (IME) operating on an electronic device, a first text prediction indicator for a first editable field outputted by a first application; determining, based on the first text prediction indicator, whether a first text input from the first editable field is allowed to be stored; in response to determining that the first text input from the first editable field is allowed to be stored, storing the first text input; determining a first predictive text; outputting the first predictive text on the electronic device; receiving, at the IME, a second text prediction indicator for a second editable field outputted by a second application, wherein the second application is different than the first application; determining, based on the second text prediction indicator, that a second text input from the second editable field is not allowed to be stored; in response to determining that the second text input from the second editable field is not allowed to be stored, refraining from storing the second text input; determining a second predictive text; and outputting the second predictive text on the electronic device. 2. (canceled) 3. The method of claim 1, wherein the first text prediction indicator is specified in an application manifest associated with the first application. 4. The method of claim 1, wherein the first text input is stored in an application-specific dictionary associated with the first application, and determining that the first text input from the first editable field is allowed to be stored comprises:
receiving an application identifier of the first application; and determining, based on the application identifier and the first text prediction indicator, that the first text input from the first editable field is allowed to be stored in the application-specific dictionary associated with the first application. 5. The method of claim 4, wherein the application-specific dictionary is stored by the IME. 6. The method of claim 4, wherein the application-specific dictionary is stored by the first application. 7. The method of claim 4, wherein the first predictive text is determined based on the application-specific dictionary associated with the first application and a global dictionary. 8. An electronic device, comprising:
at least one hardware processor; a non-transitory computer-readable storage medium coupled to the at least one hardware processor and storing programming instructions for execution by the at least one hardware processor, wherein the programming instructions, when executed, cause the at least one hardware processor to perform operations comprising:
receiving, at an input method editor (IME) operating on the electronic device, a first text prediction indicator for a first editable field outputted by a first application;
determining, based on the first text prediction indicator, whether a first text input from the first editable field is allowed to be stored;
in response to determining that the first text input from the first editable field is allowed to be stored, storing the first text input;
determining a first predictive text;
outputting the first predictive text on the electronic device;
receiving, at the IME, a second text prediction indicator for a second editable field outputted by a second application, wherein the second application is different than the first application;
determining, based on the second text prediction indicator, that a second text input from the second editable field is not allowed to be stored;
in response to determining that the second text input from the second editable field is not allowed to be stored, refraining from storing the second text input;
determining a second predictive text; and
outputting the second predictive text on the electronic device. 9. (canceled) 10. The electronic device of claim 8, wherein the first text prediction indicator is specified in an application manifest associated with the first application. 11. The electronic device of claim 8, wherein the first text input is stored in an application-specific dictionary associated with the first application, and determining that the first text input from the first editable field is allowed to be stored comprises:
receiving an application identifier of the first application; and determining, based on the application identifier and the first text prediction indicator, that the first text input from the first editable field is allowed to be stored in the application-specific dictionary associated with the first application. 12. The electronic device of claim 11, wherein the application-specific dictionary is stored by the IME. 13. The electronic device of claim 11, wherein the application-specific dictionary is stored by the first application. 14. The electronic device of claim 11, wherein the first predictive text is determined based on the application-specific dictionary associated with the first application and a global dictionary. 15. One or more non-transitory computer-readable media containing instructions which, when executed, cause an electronic device to perform operations comprising:
receiving, at an input method editor (IME) operating on the electronic device, a first text prediction indicator for a first editable field outputted by a first application; determining, based on the first text prediction indicator, whether a first text input from the first editable field is allowed to be stored; in response to determining that the first text input from the first editable field is allowed to be stored, storing the first text input; determining a first predictive text; outputting the first predictive text on the electronic device; receiving, at the IME, a second text prediction indicator for a second editable field outputted by a second application, wherein the second application is different than the first application; determining, based on the second text prediction indicator, that a second text input from the second editable field is not allowed to be stored; in response to determining that the second text input from the second editable field is not allowed to be stored, refraining from storing the second text input; determining a second predictive text; and outputting the second predictive text on the electronic device. 16. (canceled) 17. The one or more non-transitory computer-readable media of claim 15, wherein the first text prediction indicator is specified in an application manifest associated with the first application. 18. The one or more non-transitory computer-readable media of claim 15, wherein the first text input is stored in an application-specific dictionary associated with the first application, and determining that the first text input from the first editable field is allowed to be stored comprises:
receiving an application identifier of the first application; and determining, based on the application identifier and the first text prediction indicator, that the first text input from the first editable field is allowed to be stored in the application-specific dictionary associated with the first application. 19. The one or more non-transitory computer-readable media of claim 18, wherein the application-specific dictionary is stored by the IME. 20. The one or more non-transitory computer-readable media of claim 18, wherein the application-specific dictionary is stored by the first application. | Systems, methods, and software can be used to generate predictive text on an electronic device. In some aspects, one computer-implemented method includes receiving, at an input method editor (IME) operating on an electronic device, a text prediction indicator for an editable field outputted by an application; determining, based on the text prediction indicator, whether a text input from the editable field is allowed to be stored; in response to determining that the text input from the editable field is allowed to be stored, storing the text input; determining a predictive text; and outputting the predictive text on the electronic device.1. A method, comprising:
receiving, at an input method editor (IME) operating on an electronic device, a first text prediction indicator for a first editable field outputted by a first application; determining, based on the first text prediction indicator, whether a first text input from the first editable field is allowed to be stored; in response to determining that the first text input from the first editable field is allowed to be stored, storing the first text input; determining a first predictive text; outputting the first predictive text on the electronic device; receiving, at the IME, a second text prediction indicator for a second editable field outputted by a second application, wherein the second application is different than the first application; determining, based on the second text prediction indicator, that a second text input from the second editable field is not allowed to be stored; in response to determining that the second text input from the second editable field is not allowed to be stored, refraining from storing the second text input; determining a second predictive text; and outputting the second predictive text on the electronic device. 2. (canceled) 3. The method of claim 1, wherein the first text prediction indicator is specified in an application manifest associated with the first application. 4. The method of claim 1, wherein the first text input is stored in an application-specific dictionary associated with the first application, and determining that the first text input from the first editable field is allowed to be stored comprises:
receiving an application identifier of the first application; and determining, based on the application identifier and the first text prediction indicator, that the first text input from the first editable field is allowed to be stored in the application-specific dictionary associated with the first application. 5. The method of claim 4, wherein the application-specific dictionary is stored by the IME. 6. The method of claim 4, wherein the application-specific dictionary is stored by the first application. 7. The method of claim 4, wherein the first predictive text is determined based on the application-specific dictionary associated with the first application and a global dictionary. 8. An electronic device, comprising:
at least one hardware processor; a non-transitory computer-readable storage medium coupled to the at least one hardware processor and storing programming instructions for execution by the at least one hardware processor, wherein the programming instructions, when executed, cause the at least one hardware processor to perform operations comprising:
receiving, at an input method editor (IME) operating on the electronic device, a first text prediction indicator for a first editable field outputted by a first application;
determining, based on the first text prediction indicator, whether a first text input from the first editable field is allowed to be stored;
in response to determining that the first text input from the first editable field is allowed to be stored, storing the first text input;
determining a first predictive text;
outputting the first predictive text on the electronic device;
receiving, at the IME, a second text prediction indicator for a second editable field outputted by a second application, wherein the second application is different than the first application;
determining, based on the second text prediction indicator, that a second text input from the second editable field is not allowed to be stored;
in response to determining that the second text input from the second editable field is not allowed to be stored, refraining from storing the second text input;
determining a second predictive text; and
outputting the second predictive text on the electronic device. 9. (canceled) 10. The electronic device of claim 8, wherein the first text prediction indicator is specified in an application manifest associated with the first application. 11. The electronic device of claim 8, wherein the first text input is stored in an application-specific dictionary associated with the first application, and determining that the first text input from the first editable field is allowed to be stored comprises:
receiving an application identifier of the first application; and determining, based on the application identifier and the first text prediction indicator, that the first text input from the first editable field is allowed to be stored in the application-specific dictionary associated with the first application. 12. The electronic device of claim 11, wherein the application-specific dictionary is stored by the IME. 13. The electronic device of claim 11, wherein the application-specific dictionary is stored by the first application. 14. The electronic device of claim 11, wherein the first predictive text is determined based on the application-specific dictionary associated with the first application and a global dictionary. 15. One or more non-transitory computer-readable media containing instructions which, when executed, cause an electronic device to perform operations comprising:
receiving, at an input method editor (IME) operating on the electronic device, a first text prediction indicator for a first editable field outputted by a first application; determining, based on the first text prediction indicator, whether a first text input from the first editable field is allowed to be stored; in response to determining that the first text input from the first editable field is allowed to be stored, storing the first text input; determining a first predictive text; outputting the first predictive text on the electronic device; receiving, at the IME, a second text prediction indicator for a second editable field outputted by a second application, wherein the second application is different than the first application; determining, based on the second text prediction indicator, that a second text input from the second editable field is not allowed to be stored; in response to determining that the second text input from the second editable field is not allowed to be stored, refraining from storing the second text input; determining a second predictive text; and outputting the second predictive text on the electronic device. 16. (canceled) 17. The one or more non-transitory computer-readable media of claim 15, wherein the first text prediction indicator is specified in an application manifest associated with the first application. 18. The one or more non-transitory computer-readable media of claim 15, wherein the first text input is stored in an application-specific dictionary associated with the first application, and determining that the first text input from the first editable field is allowed to be stored comprises:
receiving an application identifier of the first application; and determining, based on the application identifier and the first text prediction indicator, that the first text input from the first editable field is allowed to be stored in the application-specific dictionary associated with the first application. 19. The one or more non-transitory computer-readable media of claim 18, wherein the application-specific dictionary is stored by the IME. 20. The one or more non-transitory computer-readable media of claim 18, wherein the application-specific dictionary is stored by the first application. | 2,100 |
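The claims above describe a gating flow: a per-field text prediction indicator tells the IME whether typed input may be stored in an application-specific dictionary, and predictions are drawn from that dictionary plus a global one. A minimal sketch of that flow, assuming invented names (`TextPredictionIndicator`, `InputMethodEditor`) that are illustrative only and not drawn from any real IME API:

```python
from dataclasses import dataclass, field


@dataclass
class TextPredictionIndicator:
    """Hypothetical flag an application attaches to an editable field."""
    app_id: str
    storage_allowed: bool


@dataclass
class InputMethodEditor:
    # Application-specific dictionaries, keyed by application identifier.
    app_dictionaries: dict = field(default_factory=dict)
    # Global dictionary shared across applications.
    global_dictionary: set = field(default_factory=set)

    def handle_input(self, indicator: TextPredictionIndicator, text: str) -> list:
        # Determine, based on the indicator, whether the input may be stored.
        if indicator.storage_allowed:
            self.app_dictionaries.setdefault(indicator.app_id, set()).add(text)
        # Otherwise refrain from storing, but still determine predictive text.
        return self.predict(indicator.app_id, text[:2])

    def predict(self, app_id: str, prefix: str) -> list:
        # Predictions draw on the app-specific dictionary plus the global one.
        candidates = self.app_dictionaries.get(app_id, set()) | self.global_dictionary
        return sorted(w for w in candidates if w.startswith(prefix))


ime = InputMethodEditor(global_dictionary={"hello", "help"})
# First application allows storage; second does not.
ime.handle_input(TextPredictionIndicator("app1", True), "hexagon")
ime.handle_input(TextPredictionIndicator("app2", False), "hermit")

print(ime.predict("app1", "he"))
```

Note that "hermit" never enters any dictionary: per the claims, the IME still determines and outputs predictive text for the second application, it only refrains from storing the input.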
6,688 | 6,688 | 15,848,491 | 2,154 | A function reference for a function is identified in a query. A plurality of processing environments that can provide the function is identified. Function costs for the function to process in the processing environments are obtained. Input data transfer costs are acquired for providing input data identified in the query to each of the functions. A specific one of the functions from a specific processing environment is selected based on the function costs and the input data transfer costs. A query execution plan for executing the query with the specific function is generated. The query execution plan is provided to a database engine for execution. | 1. A method, comprising:
identifying processing environments that provide a function in response to a query having a reference to the function; acquiring function costs for processing the function in each of the processing environments; and selecting an optimal processing environment for executing the function with the query based at least in part on the function costs. 2. The method of claim 1, wherein identifying further includes identifying each of the processing environments as a coupled system to a database associated with the query. 3. The method of claim 2, wherein identifying further includes identifying from the processing environments: the function implemented in hardware within a first processing environment, the function implemented as emulated software within a second processing environment, and the function implemented as software that is natively provided and accessible from a third processing environment. 4. The method of claim 1, wherein acquiring further includes identifying input data transfer costs for transferring input data needed by the function to any of the processing environments that lack the input data. 5. The method of claim 4, wherein identifying further includes obtaining results data transfer costs for transferring results data produced by the function from each of the processing environments back to a query session processing environment where the query was initiated. 6. The method of claim 5, wherein selecting further includes selecting the optimal processing environment based on the input data transfer costs and the results data transfer costs. 7. The method of claim 6, wherein selecting further includes generating a query execution plan for the query that includes processing the function from the optimal processing environment. 8. 
The method of claim 1, wherein selecting further includes identifying at least one target processing environment that can process the function but lacks the function implemented within that at least one target environment, obtaining function transfer costs for shipping the function to the at least one target environment, and selecting the optimal processing environment based on the function transfer costs. 9. The method of claim 1 further comprising, generating a query execution plan for executing the query and for executing the function from the optimal processing environment. 10. The method of claim 9 further comprising, providing the query execution plan to a database engine for executing the query against a database. 11. The method of claim 9, wherein generating further includes selecting the optimal processing environment as a pre-process to the generating of the query execution plan. 12. A method, comprising:
receiving a query having a reference to a function; identifying a plurality of processing environments that implement the function; obtaining function costs for processing the function from each of the processing environments; and generating a query execution plan for the query that includes processing the function from a select one of the processing environments that has a lowest function cost. 13. The method of claim 12, wherein obtaining further includes obtaining input data transfer costs associated with transferring needed input data to any of the processing environments that lack input data. 14. The method of claim 13, wherein obtaining further includes obtaining results data transfer costs associated with transferring results data produced by the function from each of the processing environments back to a query session processing environment that initiated the query. 15. The method of claim 14, wherein generating further includes determining the select one of the processing environments based on the function costs, the input data transfer costs, and the results data transfer costs. 16. The method of claim 14 further comprising, providing the query execution plan to a database engine for execution against a database. 17. The method of claim 12 further comprising, determining the select one of the processing environments before generating the query execution plan for conditions defined in the query. 18. The method of claim 12 further comprising, determining the select one of the processing environments during the generating of the query execution plan with conditions defined in the query. 19. A system, comprising:
a data warehouse including:
an optimizer;
wherein the optimizer is configured to: i) execute on at least one network node of the data warehouse, ii) identify processing environments providing a function that is referenced in a query, iii) obtain function costs for processing the function in each of the processing environments, and iv) generate a query execution plan for executing the query against the data warehouse that includes executing the function from a lowest cost processing environment based on the function costs. 20. The system of claim 19, wherein the optimizer is further configured in, iv), to: determine the lowest cost processing environment based on input data transfer costs associated with transferring input data to any of the processing environments that lack the input data and based on results data transfer costs associated with transferring results data produced by the function from the processing environments back to a query session processing environment that initiated the query. | A function reference for a function is identified in a query. A plurality of processing environments that can provide the function is identified. Function costs for the function to process in the processing environments are obtained. Input data transfer costs are acquired for providing input data identified in the query to each of the functions. A specific one of the functions from a specific processing environment is selected based on the function costs and the input data transfer costs. A query execution plan for executing the query with the specific function is generated. The query execution plan is provided to a database engine for execution.1. A method, comprising:
identifying processing environments that provide a function in response to a query having a reference to the function; acquiring function costs for processing the function in each of the processing environments; and selecting an optimal processing environment for executing the function with the query based at least in part on the function costs. 2. The method of claim 1, wherein identifying further includes identifying each of the processing environments as a coupled system to a database associated with the query. 3. The method of claim 2, wherein identifying further includes identifying from the processing environments: the function implemented in hardware within a first processing environment, the function implemented as emulated software within a second processing environment, and the function implemented as software that is natively provided and accessible from a third processing environment. 4. The method of claim 1, wherein acquiring further includes identifying input data transfer costs for transferring input data needed by the function to any of the processing environments that lack the input data. 5. The method of claim 4, wherein identifying further includes obtaining results data transfer costs for transferring results data produced by the function from each of the processing environments back to a query session processing environment where the query was initiated. 6. The method of claim 5, wherein selecting further includes selecting the optimal processing environment based on the input data transfer costs and the results data transfer costs. 7. The method of claim 6, wherein selecting further includes generating a query execution plan for the query that includes processing the function from the optimal processing environment. 8. 
The method of claim 1, wherein selecting further includes identifying at least one target processing environment that can process the function but lacks the function implemented within that at least one target environment, obtaining function transfer costs for shipping the function to the at least one target environment, and selecting the optimal processing environment based on the function transfer costs. 9. The method of claim 1 further comprising, generating a query execution plan for executing the query and for executing the function from the optimal processing environment. 10. The method of claim 9 further comprising, providing the query execution plan to a database engine for executing the query against a database. 11. The method of claim 9, wherein generating further includes selecting the optimal processing environment as a pre-process to the generating of the query execution plan. 12. A method, comprising:
receiving a query having a reference to a function; identifying a plurality of processing environments that implement the function; obtaining function costs for processing the function from each of the processing environments; and generating a query execution plan for the query that includes processing the function from a select one of the processing environments that has a lowest function cost. 13. The method of claim 12, wherein obtaining further includes obtaining input data transfer costs associated with transferring needed input data to any of the processing environments that lack input data. 14. The method of claim 13, wherein obtaining further includes obtaining results data transfer costs associated with transferring results data produced by the function from each of the processing environments back to a query session processing environment that initiated the query. 15. The method of claim 14, wherein generating further includes determining the select one of the processing environments based on the function costs, the input data transfer costs, and the results data transfer costs. 16. The method of claim 14 further comprising, providing the query execution plan to a database engine for execution against a database. 17. The method of claim 12 further comprising, determining the select one of the processing environments before generating the query execution plan for conditions defined in the query. 18. The method of claim 12 further comprising, determining the select one of the processing environments during the generating of the query execution plan with conditions defined in the query. 19. A system, comprising:
a data warehouse including:
an optimizer;
wherein the optimizer is configured to: i) execute on at least one network node of the data warehouse, ii) identify processing environments providing a function that is referenced in a query, iii) obtain function costs for processing the function in each of the processing environments, and iv) generate a query execution plan for executing the query against the data warehouse that includes executing the function from a lowest cost processing environment based on the function costs. 20. The system of claim 19, wherein the optimizer is further configured in, iv), to: determine the lowest cost processing environment based on input data transfer costs associated with transferring input data to any of the processing environments that lack the input data and based on results data transfer costs associated with transferring results data produced by the function from the processing environments back to a query session processing environment that initiated the query. | 2,100 |
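The optimizer described in this record selects where a referenced function should run by combining three cost terms: the function cost in each environment, the input data transfer cost for environments that lack the input data, and the results transfer cost back to the query session. A hedged sketch of that selection step, with all names (`Environment`, `select_environment`, the sample cost figures) invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class Environment:
    name: str
    function_cost: float         # cost of executing the function here
    has_input_data: bool         # whether the needed input data is already local
    input_transfer_cost: float   # cost to ship input data here if it is missing
    result_transfer_cost: float  # cost to ship results back to the query session


def select_environment(envs: list) -> Environment:
    """Pick the lowest combined-cost environment, as claims 19-20 describe."""
    def total_cost(e: Environment) -> float:
        transfer = 0.0 if e.has_input_data else e.input_transfer_cost
        return e.function_cost + transfer + e.result_transfer_cost
    return min(envs, key=total_cost)


envs = [
    Environment("hardware-accel", function_cost=1.0, has_input_data=False,
                input_transfer_cost=8.0, result_transfer_cost=0.5),
    Environment("native-sw", function_cost=4.0, has_input_data=True,
                input_transfer_cost=0.0, result_transfer_cost=0.5),
]
best = select_environment(envs)
print(best.name)  # → native-sw
```

Even though the hardware implementation executes the function most cheaply (1.0 vs 4.0), shipping the input data to it costs 8.0, so the native software environment wins on total cost (4.5 vs 9.5), which is the point of weighing transfer costs alongside function costs before generating the query execution plan.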
6,689 | 6,689 | 15,397,374 | 2,138 | A method for execution by a dispersed storage network (DSN) managing unit includes receiving access information from a plurality of DST processing units via a network. Cache memory utilization data is generated based on the access information. Configuration instructions are generated for transmission via the network to the plurality of DST processing units based on the cache memory utilization data. | 1. A method for execution by a dispersed storage network (DSN) managing unit that includes a processor, the method comprises:
receiving access information from a plurality of DST processing units via a network; generating cache memory utilization data based on the access information; and generating configuration instructions for transmission via the network to the plurality of DST processing units based on the cache memory utilization data. 2. The method of claim 1, wherein the access information includes at least one of: a data name, a data type, a user device identifier, a data size indicator, a data access frequency level indicator, or a data access time. 3. The method of claim 1, wherein the access information includes at least one of: a cache memory utilization level or a cache miss rate level. 4. The method of claim 1, further comprising:
generating a plurality of access information requests for transmission via the network to the plurality of DST processing units; wherein the access information is received in response to the plurality of access information requests. 5. The method of claim 1, wherein generating the cache memory utilization data includes calculating at least one of: frequency of data access, efficiency of cache memory utilization, or frequency of non-cache memory utilization. 6. The method of claim 1, wherein generating the cache memory utilization data includes calculating at least one of: an aging rate, a data cooling rate, access rate by datatype, or access rate by user identifier. 7. The method of claim 1, wherein the configuration instructions include a request to update at least one of: cache memory size, cache time by slice name, cache time by data name, or a cache memory allocation level. 8. The method of claim 1, further comprising:
receiving additional access information from a plurality of storage units via the network; wherein generating the cache memory utilization data is further based on the additional access information, and wherein the configuration instructions are further transmitted to the plurality of storage units. 9. The method of claim 1, wherein at least one cache memory is utilized by at least one of the plurality of DST processing units for temporary storage of at least one of: data objects or encoded data slices, and wherein the cache memory utilization data is based on utilization of the at least one cache memory. 10. A processing system of a dispersed storage network (DSN) managing unit comprises:
at least one processor; a memory that stores operational instructions, that when executed by the at least one processor cause the processing system to:
receive access information from a plurality of DST processing units via a network;
generate cache memory utilization data based on the access information; and
generate configuration instructions for transmission via the network to the plurality of DST processing units based on the cache memory utilization data. 11. The processing system of claim 10, wherein the access information includes at least one of: a data name, a data type, a user device identifier, a data size indicator, a data access frequency level indicator, or a data access time. 12. The processing system of claim 10, wherein the access information includes at least one of: a cache memory utilization level or a cache miss rate level. 13. The processing system of claim 10, wherein the operational instructions, when executed by the at least one processor, further cause the processing system to:
generate a plurality of access information requests for transmission via the network to the plurality of DST processing units; wherein the access information is received in response to the plurality of access information requests. 14. The processing system of claim 10, wherein generating the cache memory utilization data includes calculating at least one of: frequency of data access, efficiency of cache memory utilization, or frequency of non-cache memory utilization. 15. The processing system of claim 10, wherein generating the cache memory utilization data includes calculating at least one of: an aging rate, a data cooling rate, access rate by datatype, or access rate by user identifier. 16. The processing system of claim 10, wherein the configuration instructions include a request to update at least one of: cache memory size, cache time by slice name, cache time by data name, or a cache memory allocation level. 17. The processing system of claim 10, wherein the operational instructions, when executed by the at least one processor, further cause the processing system to:
receive additional access information from a plurality of storage units via the network; wherein generating the cache memory utilization data is further based on the additional access information, and wherein the configuration instructions are further transmitted to the plurality of storage units. 18. The processing system of claim 10, wherein at least one cache memory is utilized by at least one of the plurality of DST processing units for temporary storage of at least one of: data objects or encoded data slices, and wherein the cache memory utilization data is based on utilization of the at least one cache memory. 19. A non-transitory computer readable storage medium comprises:
at least one memory section that stores operational instructions that, when executed by a processing system of a dispersed storage network (DSN) that includes a processor and a memory, causes the processing system to:
receive access information from a plurality of DST processing units via a network;
generate cache memory utilization data based on the access information; and
generate configuration instructions for transmission via the network to the plurality of DST processing units based on the cache memory utilization data. 20. The non-transitory computer readable storage medium of claim 19, wherein the configuration instructions include a request to update at least one of: cache memory size, cache time by slice name, cache time by data name, or a cache memory allocation level. | 2,100 |
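The claims of this record describe a managing unit that receives access information from DST processing units, generates cache memory utilization data, and returns configuration instructions. The Python sketch below is a minimal, hypothetical rendering of that loop; the field names, the hit-rate metric (standing in for "efficiency of cache memory utilization"), and the 0.5/0.9 thresholds are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class AccessInfo:
    """Access information reported by one DST processing unit."""
    unit_id: str
    accesses: int    # total data accesses in the reporting window
    cache_hits: int  # accesses served from the unit's cache memory

def utilization_data(reports):
    """Generate cache memory utilization data: per-unit cache hit rate."""
    return {r.unit_id: (r.cache_hits / r.accesses if r.accesses else 0.0)
            for r in reports}

def configuration_instructions(util, low=0.5, high=0.9):
    """Turn utilization data into per-unit configuration instructions,
    e.g. a request to update cache memory size."""
    instructions = {}
    for unit_id, hit_rate in util.items():
        if hit_rate < low:
            instructions[unit_id] = "increase cache memory size"
        elif hit_rate > high:
            instructions[unit_id] = "decrease cache memory size"
        else:
            instructions[unit_id] = "keep current configuration"
    return instructions

reports = [AccessInfo("dst-1", 100, 30), AccessInfo("dst-2", 200, 190)]
print(configuration_instructions(utilization_data(reports)))
# → {'dst-1': 'increase cache memory size', 'dst-2': 'decrease cache memory size'}
```

In a fuller sketch the same structure would also fold in the claimed per-datatype and per-user access rates before choosing an instruction; the hit rate alone is used here only to keep the control flow visible.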
6,690 | 6,690 | 16,025,153 | 2,168 | A system and method for determining database relations. The method includes: receiving at least a portion of a transaction log, the transaction log comprising a plurality of data records detailing changes of at least a first table and a second table of a target system; and generating a probability regarding a relation between the first table and the second table within the target system, based on the at least a portion of the transaction log. In an embodiment, the method further includes: sending a query to the target system, wherein the query includes instructions related to data within at least one table stored within the target system, wherein the changes of at least the first table and the second table of the target system are related to the query. | 1. A method for determining database relations, comprising:
receiving at least a portion of a transaction log, the transaction log comprising a plurality of data records detailing changes of at least a first table and a second table of a target system; and generating a probability regarding a relation between the first table and the second table within the target system, based on the at least a portion of the transaction log. 2. The method of claim 1, further comprising:
sending a query to the target system, wherein the query includes instructions related to data within at least one table stored within the target system, wherein the changes of at least the first table and the second table of the target system are related to the query. 3. The method of claim 2, wherein the changes related to the query include operations directed to a database of the target system. 4. The method of claim 1, wherein the probability indicates a correlation between the first table and the second table within the target system. 5. The method of claim 4, wherein the probability of the first table being correlated to the second table is not the same as the probability of the second table being correlated to the first table. 6. The method of claim 1, wherein the probability is generated based on the frequency with which the first table and the second table appear together within the target system. 7. The method of claim 6, wherein the probability is generated based on the number of operations in which both the first table and the second table are updated based on a query. 8. The method of claim 1, further comprising:
determining if the probability value is above or below a predefined threshold. 9. The method of claim 8, further comprising:
sending an alert to a user if the probability value is above or below the predefined threshold. 10. The method of claim 1, further comprising:
generating a report including a list of tables and the probability the list of tables are correlated. 11. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising:
receiving at least a portion of a transaction log, the transaction log comprising a plurality of data records detailing changes of at least a first table and a second table of a target system; and generating a probability regarding a relation between the first table and the second table within the target system, based on the at least a portion of the transaction log. 12. A system for determining database relations, comprising:
a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: receive at least a portion of a transaction log, the transaction log comprising a plurality of data records detailing changes of at least a first table and a second table of a target system; and generate a probability regarding a relation between the first table and the second table within the target system, based on the at least a portion of the transaction log. 13. The system of claim 12, the system further configured to:
send a query to the target system, wherein the query includes instructions related to data within at least one table stored within the target system, wherein the changes of at least the first table and the second table of the target system are related to the query. 14. The system of claim 13, wherein the changes related to the query include operations directed to a database of the target system. 15. The system of claim 12, wherein the probability indicates a correlation between the first table and the second table within the target system. 16. The system of claim 15, wherein the probability of the first table being correlated to the second table is not the same as the probability of the second table being correlated to the first table. 17. The system of claim 12, wherein the probability is generated based on the frequency with which the first table and the second table appear together within the target system. 18. The system of claim 17, wherein the probability is generated based on the number of operations in which both the first table and the second table are updated based on a query. 19. The system of claim 12, the system further configured to:
determine if the probability value is above or below a predefined threshold. 20. The system of claim 19, the system further configured to:
send an alert to a user if the probability value is above or below the predefined threshold. 21. The system of claim 12, the system further configured to:
generate a report including a list of tables and the probability the list of tables are correlated. | 2,100 |
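This record's claims derive a directed relation probability from how often two tables change within the same logged operation (claims 6 and 7), and note that the two directions need not be equal (claim 5). The sketch below is a hypothetical counting scheme under assumed inputs: each log entry is the set of tables one operation changed, and P(b given a) is the fraction of a's changes that also touched b. None of the names come from the patent.

```python
from collections import Counter
from itertools import permutations

def relation_probabilities(log):
    """Estimate directed relation probabilities between tables from a
    transaction log. log: iterable of sets, each the tables changed by
    one operation. Returns {(a, b): P(b changes | a changes)}."""
    single, pair = Counter(), Counter()
    for tables in log:
        for t in tables:
            single[t] += 1                 # operations touching t
        for a, b in permutations(tables, 2):
            pair[(a, b)] += 1              # operations touching both a and b
    return {(a, b): pair[(a, b)] / single[a] for (a, b) in pair}

log = [
    {"orders", "order_items"},
    {"orders", "order_items"},
    {"orders"},
    {"customers"},
]
probs = relation_probabilities(log)
print(round(probs[("orders", "order_items")], 3))  # → 0.667 (2 of 3 orders-updates)
print(probs[("order_items", "orders")])            # → 1.0 (never changes alone)
```

The asymmetry in the example mirrors claim 5: order_items always changes alongside orders, but not vice versa, so the two directed probabilities differ. A threshold comparison on these values would drive the claimed alerting step.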
6,691 | 6,691 | 15,680,089 | 2,165 | A method, according to one embodiment, includes, at a server in communication with a database and a network, receiving from a user a link to an internet webpage, via the network, analyzing, by the server, a Rich Site Summary (RSS) feed of the internet webpage and text in the internet webpage, categorizing, by the server, the internet webpage into a predetermined category based on the RSS feed of the internet webpage and the text in the internet webpage, creating, by the server, a summary of the internet webpage, utilizing the RSS feed of the internet webpage and the text in the internet webpage, identifying, by the server, an image from the internet webpage, resizing, by the server, the image from the internet webpage to create a resized image, including, by the server, the resized image in the summary of the internet webpage, upon determining that the resized image has a minimum size, meets predetermined aspect ratio limits, and is not repeated within the predetermined category, and in a webpage associated with the predetermined category, displaying, by the server, the summary of the internet webpage with summaries of other internet webpages categorized into the predetermined category. | 1. A method, comprising:
at a server in communication with a database and a network, receiving from a user a link to an internet webpage, via the network; analyzing, by the server, a Rich Site Summary (RSS) feed of the internet webpage and text in the internet webpage; categorizing, by the server, the internet webpage into a predetermined category based on the RSS feed of the internet webpage and the text in the internet webpage; creating, by the server, a summary of the internet webpage, utilizing the RSS feed of the internet webpage and the text in the internet webpage; identifying, by the server, an image from the internet webpage; resizing, by the server, the image from the internet webpage to create a resized image; including, by the server, the resized image in the summary of the internet webpage, upon determining that the resized image has a minimum size, meets predetermined aspect ratio limits, and is not repeated within the predetermined category; and in a webpage associated with the predetermined category, displaying, by the server, the summary of the internet webpage with summaries of other internet webpages categorized into the predetermined category. 2. The method of claim 1, wherein the summary of the internet webpage includes a plurality of representative portions of text and a plurality of representative photos from the internet webpage. 3. The method of claim 1, further comprising displaying, by the server in a page associated with the user:
a summary of each of a predetermined number of categories, where the summary of each of the predetermined number of categories includes an indication of a number of votes, comments, and submissions within each category that are submitted by the user;
a number of reward points accumulated by the user for performing a plurality of actions including spending time on a predetermined website, commenting and voting on links or articles, submitting one or more links to internet webpages, receiving one or more votes or comments on submitted links to internet webpages, and receiving one or more votes or comments on links to internet webpages that have been commented on by the user; and
an identification of a plurality of friends of the user that use a predetermined service, where each of the plurality of friends is represented by an icon retrieved from a social network. 4. The method of claim 1, further comprising, at the server,
allowing, by the server, the user to log into a home page; and awarding, by the server, a user point to the user, based on at least one of: a user comment, a user vote, user visits to a webpage, votes received for submitted links, comments received for submitted links, category creation, and social media links. 5. The method of claim 1, wherein a manner in which the summary of the internet webpage is displayed in the webpage associated with the predetermined category, as well as a time that the internet webpage is displayed in the webpage associated with the predetermined category, is determined based on a status of the user. 6. The method of claim 1, further comprising displaying, by the server, a plurality of tiles associated with a plurality of categories, where the plurality of tiles are ordered according to a time when each of the plurality of categories was updated. 7. The method of claim 1, further comprising:
displaying, by the server, a search box in the webpage associated with the predetermined category; allowing, by the server, a user to send a search query to a search engine via the search box; and displaying, by the server, search results, based on the search query. 8. The method of claim 7, wherein the search query includes one or more slashtags. 9. A computer-readable medium having computer-executable instructions thereon for a method of aggregating internet web pages, the method comprising:
at a server in communication with a database and a network, receiving from a user a link to an internet webpage, via the network; analyzing, by the server, a Rich Site Summary (RSS) feed of the internet webpage and text in the internet webpage; categorizing, by the server, the internet webpage into a predetermined category based on the RSS feed of the internet webpage and the text in the internet webpage; creating, by the server, a summary of the internet webpage, utilizing the RSS feed of the internet webpage and the text in the internet webpage; identifying, by the server, an image from the internet webpage; resizing, by the server, the image from the internet webpage to create a resized image; including, by the server, the resized image in the summary of the internet webpage, upon determining that the resized image has a minimum size, meets predetermined aspect ratio limits, and is not repeated within the predetermined category; and in a webpage associated with the predetermined category, displaying, by the server, the summary of the internet webpage with summaries of other internet webpages categorized into the predetermined category. 10. The computer-readable medium of claim 9, wherein the summary of the internet webpage includes a plurality of representative portions of text and a plurality of representative photos from the internet webpage. 11. The computer-readable medium of claim 9, further comprising displaying, by the server in a page associated with the user:
a summary of each of a predetermined number of categories, where the summary of each of the predetermined number of categories includes an indication of a number of votes, comments, and submissions within each category that are submitted by the user;
a number of reward points accumulated by the user for performing a plurality of actions including spending time on a predetermined website, commenting and voting on links or articles, submitting one or more links to internet webpages, receiving one or more votes or comments on submitted links to internet webpages, and receiving one or more votes or comments on links to internet webpages that have been commented on by the user; and
an identification of a plurality of friends of the user that use a predetermined service, where each of the plurality of friends is represented by an icon retrieved from a social network. 12. The computer-readable medium of claim 9, further comprising, at the server,
allowing, by the server, the user to log into a home page; and awarding, by the server, a user point to the user, based on at least one of: a user comment, a user vote, user visits to a webpage, votes received for submitted links, comments received for submitted links, category creation, and social media links. 13. The computer-readable medium of claim 9, wherein a manner in which the summary of the internet webpage is displayed in the webpage associated with the predetermined category, as well as a time that the internet webpage is displayed in the webpage associated with the predetermined category, is determined based on a status of the user. 14. The computer-readable medium of claim 9, further comprising displaying, by the server, a plurality of tiles associated with a plurality of categories, where the plurality of tiles are ordered according to a time when each of the plurality of categories was updated. 15. The computer-readable medium of claim 9, further comprising:
displaying, by the server, a search box in the webpage associated with the predetermined category; allowing, by the server, a user to send a search query to a search engine via the search box; and displaying, by the server, search results, based on the search query. 16. The computer-readable medium of claim 15, wherein the search query includes one or more slashtags. 17. A method, comprising:
at a server in communication with a database and a network, receiving from a user a link to an internet webpage, via the network; analyzing, by the server, a Rich Site Summary (RSS) feed of the internet webpage and text in the internet webpage; categorizing, by the server, the internet webpage into a predetermined category based on the RSS feed of the internet webpage and the text in the internet webpage; creating, by the server, a summary of the internet webpage, utilizing the RSS feed of the internet webpage and the text in the internet webpage; identifying, by the server, an image from the internet webpage; resizing, by the server, the image from the internet webpage to create a resized image; including, by the server, the resized image in the summary of the internet webpage, upon determining that the resized image has a minimum size, meets predetermined aspect ratio limits, and is not repeated within the predetermined category; in a webpage associated with the predetermined category, displaying, by the server, the summary of the internet webpage with summaries of other internet webpages categorized into the predetermined category; and displaying, by the server in a page associated with the user:
a summary of each of a predetermined number of predetermined categories, where the summary of each of the predetermined number of predetermined categories includes an indication of a number of votes, comments, and submissions within each predetermined category that are submitted by the user;
a number of reward points accumulated by the user for performing a plurality of actions including spending time on a predetermined website, commenting and voting on links or articles, submitting one or more links to internet webpages, receiving one or more votes or comments on submitted links to internet webpages, and receiving one or more votes or comments on links to internet webpages that have been commented on by the user; and
an identification of a plurality of friends of the user that use a predetermined service, where each of the plurality of friends is represented by an icon retrieved from a social network. 18. The method of claim 17, wherein the summary of the internet webpage includes a plurality of representative portions of text and a plurality of representative photos from the internet webpage. 19. The method of claim 17, wherein a manner in which the summary of the internet webpage is displayed in the webpage associated with the predetermined category, as well as a time that the internet webpage is displayed in the webpage associated with the predetermined category, is determined based on a status of the user. 20. The method of claim 17, further comprising displaying, by the server, a plurality of tiles associated with a plurality of categories, where the plurality of tiles are ordered according to a time when each of the plurality of categories was updated.
at a server in communication with a database and a network, receiving from a user a link to an internet webpage, via the network; analyzing, by the server, a Rich Site Summary (RSS) feed of the internet webpage and text in the internet webpage; categorizing, by the server, the internet webpage into a predetermined category based on the RSS feed of the internet webpage and the text in the internet webpage; creating, by the server, a summary of the internet webpage, utilizing the RSS feed of the internet webpage and the text in the internet webpage; identifying, by the server, an image from the internet webpage; resizing, by the server, the image from the internet webpage to create a resized image; including, by the server, the resized image in the summary of the internet webpage, upon determining that the resized image has a minimum size, meets predetermined aspect ratio limits, and is not repeated within the predetermined category; and in a webpage associated with the predetermined category, displaying, by the server, the summary of the internet webpage with summaries of other internet webpages categorized into the predetermined category. 10. The computer-readable medium of claim 9, wherein the summary of the internet webpage includes a plurality of representative portions of text and a plurality of representative photos from the internet webpage. 11. The computer-readable medium of claim 9, further comprising displaying, by the server in a page associated with the user:
a summary of each of a predetermined number of categories, where the summary of each of the predetermined number of categories includes an indication of a number of votes, comments, and submissions within each category that are submitted by the user;
a number of reward points accumulated by the user for performing a plurality of actions including spending time on a predetermined website, commenting and voting on links or articles, submitting one or more links to internet webpages, receiving one or more votes or comments on submitted links to internet webpages, and receiving one or more votes or comments on likes to internet webpages that have been commented on by the user; and
an identification of a plurality of friends of the user that use a predetermined service, where each of the plurality of friends is represented by an icon retrieved from a social network. 12. The computer-readable medium of claim 9, further comprising, at the server,
allowing, by the server, the user to log into a home page; and awarding, by the server, a user point to the user, based on at least one of, a user comment, a user vote, user visits to a webpage, votes received for submitted links, comments received for submitted links, category creation, and social media links. 13. The computer-readable medium of claim 9, wherein a manner in which the summary of the internet webpage is displayed in the webpage associated with the predetermined category, as well as a time that the internet webpage is displayed in the webpage associated with the predetermined category, is determined based on a status of the user. 14. The computer-readable medium of claim 9, further comprising displaying, by the server, a plurality of tiles associated with a plurality of categories, where the plurality of tiles are ordered according to a time when each of the plurality of categories was updated. 15. The computer-readable medium of claim 9, further comprising:
displaying, by the server, a search box in the webpage associated with the predetermined category; and allowing, by the server, a user to send a search query to a search engine via the search box; and displaying, by the server, search results, based on the search query. 16. The computer-readable medium of claim 15, wherein the search query includes one or more slashtags. 17. A method, comprising:
at a server in communication with a database and a network, receiving from a user a link to an internet webpage, via the network; analyzing, by the server, a Rich Site Summary (RSS) feed of the internet webpage and text in the internet webpage; categorizing, by the server, the internet webpage into a predetermined category based on the RSS feed of the internet webpage and the text in the internet webpage; creating, by the server, a summary of the internet webpage, utilizing the RSS feed of the internet webpage and the text in the internet webpage; identifying, by the server, an image from the internet webpage; resizing, by the server, the image from the internet webpage to create a resized image; including, by the server, the resized image in the summary of the internet webpage, upon determining that the resized image has a minimum size, meets predetermined aspect ratio limits, and is not repeated within the predetermined category; in a webpage associated with the predetermined category, displaying, by the server, the summary of the internet webpage with summaries of other internet webpages categorized into the predetermined category; and displaying, by the server in a page associated with the user:
a summary of each of a predetermined number of predetermined categories, where the summary of each of the predetermined number of predetermined categories includes an indication of a number of votes, comments, and submissions within each predetermined category that are submitted by the user;
a number of reward points accumulated by the user for performing a plurality of actions including spending time on a predetermined website, commenting and voting on links or articles, submitting one or more links to internet webpages, receiving one or more votes or comments on submitted links to internet webpages, and receiving one or more votes or comments on likes to internet webpages that have been commented on by the user; and
an identification of a plurality of friends of the user that use a predetermined service, where each of the plurality of friends is represented by an icon retrieved from a social network. 18. The method of claim 17, wherein the summary of the internet webpage includes a plurality of representative portions of text and a plurality of representative photos from the internet webpage. 19. The method of claim 17, wherein a manner in which the summary of the internet webpage is displayed in the webpage associated with the predetermined category, as well as a time that the internet webpage is displayed in the webpage associated with the predetermined category, is determined based on a status of the user. 20. The method of claim 17, further comprising displaying, by the server, a plurality of tiles associated with a plurality of categories, where the plurality of tiles are ordered according to a time when each of the plurality of categories was updated. | 2,100 |
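The image-inclusion step recited in claim 1 above (a resized image is added to the summary only upon determining it "has a minimum size, meets predetermined aspect ratio limits, and is not repeated within the predetermined category") can be sketched as a simple gate. The concrete thresholds and the hash-based duplicate check below are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the image-inclusion test from claim 1: include the resized image
# in the summary only if it (a) meets a minimum size, (b) falls within
# aspect-ratio limits, and (c) is not repeated within the category.
# MIN_* / *_RATIO values and the sha256 dedup scheme are assumptions.

import hashlib

MIN_WIDTH, MIN_HEIGHT = 120, 90   # assumed minimum size
MIN_RATIO, MAX_RATIO = 0.5, 2.0   # assumed aspect-ratio limits


def should_include(image_bytes: bytes, width: int, height: int,
                   category_hashes: set) -> bool:
    """Return True if the resized image may be placed in the summary."""
    if width < MIN_WIDTH or height < MIN_HEIGHT:
        return False  # fails the minimum-size determination
    ratio = width / height
    if not (MIN_RATIO <= ratio <= MAX_RATIO):
        return False  # outside the predetermined aspect-ratio limits
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in category_hashes:
        return False  # already repeated within the predetermined category
    category_hashes.add(digest)
    return True
```

In this sketch, the category-level set of digests stands in for whatever per-category bookkeeping the server would actually maintain.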
6,692 | 6,692 | 11,852,948 | 2,162 | A system, method and computer program product are provided. After identifying a computer readable item, at least one opinion relating to the trustworthiness of the identified computer readable item is received, utilizing a network. Access to the computer readable item is then blocked or allowed, based on at least one opinion. | 1. A method, comprising:
in response to identifying a computer readable item, sending a request for a plurality of opinions of the computer readable item, the request including a criterion; receiving the plurality of opinions, relating to the trustworthiness of the identified computer readable item, utilizing a network; and receiving an input for blocking or allowing access to the computer readable item, based on a display of the plurality of opinions of the computer readable item. 2. (canceled) 3. The computer program product of claim 18, wherein the computer readable item includes an application program. 4. The computer program product of claim 18, wherein the computer readable item includes network traffic. 5. The method of claim 1, wherein the plurality of the opinions are received from a plurality of users correlated into a group. 6. The method of claim 5, wherein the plurality of opinions are received from a server. 7-8. (canceled) 9. The system of claim 19, wherein the request is received via a dialog box. 10. The system of claim 9, wherein the dialog box further includes at least one icon for blocking or allowing the access to the computer readable item. 11-15. (canceled) 16. The method of claim 1, wherein a weighted average is calculated based on the plurality of opinions, which are associated with different peers. 17. The method of claim 16, wherein opinions of a first peer of the plurality of opinions are weighted differently with respect to opinions of a second peer of the plurality of opinions. 18. A computer program product embodied on a computer readable medium, comprising:
computer code to send, in response to an identification of a computer readable item, a request for a plurality of opinions of the computer readable item, the request including a criterion; computer code to receive the plurality of opinions, relating to the trustworthiness of the identified computer readable item, utilizing a network; and computer code to receive an input for blocking or allowing access to the computer readable item, based on a display of the plurality of opinions of the computer readable item. 19. A system, comprising:
a graphical user interface including a field for identifying a plurality of opinions of a computer readable item, relating to the trustworthiness of a computer readable item, utilizing a network; and a network interface that sends a request for the plurality of opinions in response to an identification of the computer readable item, the request including a criterion, wherein the network interface receives the plurality of opinions, and access to the computer readable item is blocked or allowed, based on an input received in response to a display of the plurality of opinions. 20. The method of claim 22, wherein the plurality of opinions includes a visual indication of a level of the trustworthiness related to a security risk associated with allowing the access to the computer readable item. 21. The method of claim 1, wherein the criterion is a group criterion. 22. The method of claim 1, further comprising:
displaying a weighted average of the plurality of opinions. 23. The computer program product of claim 18, wherein the criterion is a group criterion. 24. The computer program product of claim 18, wherein the criterion relates to an urgency of one of the plurality of opinions. 25. The computer program product of claim 18, further comprising:
computer code to display a weighted average of the plurality of opinions. 26. The system of claim 19, wherein the criterion is a group criterion. 27. The system of claim 19, wherein the criterion relates to an urgency of one of the plurality of opinions. 28. The system of claim 19, wherein the graphical user interface displays a weighted average of the plurality of opinions. | A system, method and computer program product are provided. After identifying a computer readable item, at least one opinion relating to the trustworthiness of the identified computer readable item is received, utilizing a network. Access to the computer readable item is then blocked or allowed, based on at least one opinion.1. A method, comprising:
in response to identifying a computer readable item, sending a request for a plurality of opinions of the computer readable item, the request including a criterion; receiving the plurality of opinions, relating to the trustworthiness of the identified computer readable item, utilizing a network; and receiving an input for blocking or allowing access to the computer readable item, based on a display of the plurality of opinions of the computer readable item. 2. (canceled) 3. The computer program product of claim 18, wherein the computer readable item includes an application program. 4. The computer program product of claim 18, wherein the computer readable item includes network traffic. 5. The method of claim 1, wherein the plurality of the opinions are received from a plurality of users correlated into a group. 6. The method of claim 5, wherein the plurality of opinions are received from a server. 7-8. (canceled) 9. The system of claim 19, wherein the request is received via a dialog box. 10. The system of claim 9, wherein the dialog box further includes at least one icon for blocking or allowing the access to the computer readable item. 11-15. (canceled) 16. The method of claim 1, wherein a weighted average is calculated based on the plurality of opinions, which are associated with different peers. 17. The method of claim 16, wherein opinions of a first peer of the plurality of opinions are weighted differently with respect to opinions of a second peer of the plurality of opinions. 18. A computer program product embodied on a computer readable medium, comprising:
computer code to send, in response to an identification of a computer readable item, a request for a plurality of opinions of the computer readable item, the request including a criterion; computer code to receive the plurality of opinions, relating to the trustworthiness of the identified computer readable item, utilizing a network; and computer code to receive an input for blocking or allowing access to the computer readable item, based on a display of the plurality of opinions of the computer readable item. 19. A system, comprising:
a graphical user interface including a field for identifying a plurality of opinions of a computer readable item, relating to the trustworthiness of a computer readable item, utilizing a network; and a network interface that sends a request for the plurality of opinions in response to an identification of the computer readable item, the request including a criterion, wherein the network interface receives the plurality of opinions, and access to the computer readable item is blocked or allowed, based on an input received in response to a display of the plurality of opinions. 20. The method of claim 22, wherein the plurality of opinions includes a visual indication of a level of the trustworthiness related to a security risk associated with allowing the access to the computer readable item. 21. The method of claim 1, wherein the criterion is a group criterion. 22. The method of claim 1, further comprising:
displaying a weighted average of the plurality of opinions. 23. The computer program product of claim 18, wherein the criterion is a group criterion. 24. The computer program product of claim 18, wherein the criterion relates to an urgency of one of the plurality of opinions. 25. The computer program product of claim 18, further comprising:
computer code to display a weighted average of the plurality of opinions. 26. The system of claim 19, wherein the criterion is a group criterion. 27. The system of claim 19, wherein the criterion relates to an urgency of one of the plurality of opinions. 28. The system of claim 19, wherein the graphical user interface displays a weighted average of the plurality of opinions. | 2,100 |
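Claims 16, 17, and 22 above describe calculating and displaying a weighted average over opinions from different peers, with one peer's opinions weighted differently from another's. A minimal sketch of that computation, assuming a 0-to-1 trustworthiness scale, per-peer weight table, and allow/block threshold that are not specified in the patent:

```python
# Hedged sketch of the weighted-average opinion logic (claims 16-17, 22):
# opinions are (peer_id, score) pairs, score in [0, 1] with 1 = trustworthy.
# The scale, default weight, and 0.5 threshold are illustrative assumptions.

def weighted_opinion(opinions: list, peer_weights: dict) -> float:
    """Combine peer opinions, weighting each peer differently (claim 17)."""
    total_weight = sum(peer_weights.get(peer, 1.0) for peer, _ in opinions)
    if total_weight == 0:
        raise ValueError("no opinions to average")
    return sum(peer_weights.get(peer, 1.0) * score
               for peer, score in opinions) / total_weight


def decide(opinions: list, peer_weights: dict, threshold: float = 0.5) -> str:
    """Allow or block access based on the displayed weighted average."""
    return "allow" if weighted_opinion(opinions, peer_weights) >= threshold else "block"
```

With opinions `[("a", 1.0), ("b", 0.0)]` and weights `{"a": 3.0, "b": 1.0}`, the average is 0.75, so access would be allowed; flipping the weights yields 0.25 and a block.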
6,693 | 6,693 | 15,655,253 | 2,179 | The present disclosure relates to user interfaces for manipulating user interface objects. A device, including a display and a rotatable input mechanism, is described in relation to manipulating user interface objects. In some examples, the manipulation of the object is a scroll, zoom, or rotate of the object. In other examples, objects are selected in accordance with simulated magnetic properties. | 1. A non-transitory computer-readable storage medium comprising instructions for execution by one or more processors of an electronic device with a display and a rotatable input mechanism, the instructions for:
displaying, on the display, an object, wherein the object is associated with a first marker having a first value and a second marker having a second value, and wherein a value of a characteristic of the object is based on the first value of the first marker; receiving user input representing rotation of the rotatable input mechanism; in response to receiving the user input representing rotation of the rotatable input mechanism, determining whether an attribute of the user input exceeds a threshold value; in accordance with a determination that the attribute of the user input exceeds the threshold value, updating the value of the characteristic of the object based on the second value of the second marker; and updating display of the object in accordance with the updated value of the characteristic of the object. 2. The non-transitory computer-readable storage medium of claim 1, wherein updating display of the object in accordance with the updated value of the characteristic of the object comprises animating the object to reflect the updated value of the characteristic of the object. 3. The non-transitory computer-readable storage medium of claim 1, further comprising:
in accordance with a determination that the attribute of the user input is less than the threshold value, maintaining display of the object in accordance with the value of the characteristic of the object based on the first value of the first marker. 4. The non-transitory computer-readable storage medium of claim 1, further comprising:
in accordance with a determination that the attribute of the user input does not exceed the threshold value, updating the value of the characteristic of the object to a third value, the third value based on the user input. 5. The non-transitory computer-readable storage medium of claim 1, wherein the second marker is an anchor and the second value of the second marker is an intermediate value of the anchor. 6. The non-transitory computer-readable storage medium of claim 1, further comprising:
in accordance with a determination that the attribute of the user input exceeds the threshold value, performing a haptic alert at the electronic device. 7. The non-transitory computer-readable storage medium of claim 1, wherein the object is a document, and further comprising:
analyzing at least a portion of the document, wherein analyzing at least the portion of the document comprises identifying locations within the document. 8. The non-transitory computer-readable storage medium of claim 1, wherein the locations within the document include one or more of:
one or more page boundaries of at least the portion of the document; one or more paragraph boundaries of at least the portion of the document; and one or more keyword locations of at least the portion of the document; and further comprising: assigning markers to some or all of the identified page boundaries, paragraph boundaries, and keyword locations of the document. 9. The non-transitory computer-readable storage medium of claim 1, further comprising:
accessing a first set of markers of the object; detecting a change in value of the characteristic of the object; and in response to detecting the change in the value of the characteristic of the object, associating a second set of markers to the object, wherein the first set and the second set are different. 10. The non-transitory computer-readable storage medium of claim 1, further comprising:
in accordance with a determination that the attribute of the user input exceeds the threshold value, initiating a duration during which received user inputs representing rotation of the rotatable input mechanism do not affect the displayed characteristic of the object. 11. The non-transitory computer-readable storage medium of claim 1, wherein the attribute of the user input is angular velocity of the rotatable input mechanism and the threshold value is a threshold angular velocity. 12. The non-transitory computer-readable storage medium of claim 1, wherein the attribute of the user input is a maximum angular velocity of the rotatable input mechanism and the threshold value is a threshold angular velocity. 13. The non-transitory computer-readable storage medium of claim 1, wherein the attribute of the user input is angular acceleration of the rotatable input mechanism and the threshold value is a threshold angular acceleration. 14. The non-transitory computer-readable storage medium of claim 1, wherein the object is selected from the group consisting of a document and an image. 15. The non-transitory computer-readable storage medium of claim 1, wherein the characteristic of the object is selected from the group consisting of scroll position, zoom size, and degree of rotation. 16. A method, comprising:
at an electronic device with a display and a rotatable input mechanism:
displaying, on the display, an object, wherein the object is associated with a first marker having a first value and a second marker having a second value, and wherein a value of a characteristic of the object is based on the first value of the first marker;
receiving user input representing rotation of the rotatable input mechanism;
in response to receiving the user input representing rotation of the rotatable input mechanism, determining whether an attribute of the user input exceeds a threshold value;
in accordance with a determination that the attribute of the user input exceeds the threshold value, updating the value of the characteristic of the object based on the second value of the second marker; and
updating display of the object in accordance with the updated value of the characteristic of the object. 17. An electronic device, comprising:
a rotatable input mechanism; a display; and one or more processors coupled to the rotatable input mechanism and the display, the one or more processors configured to:
display, on the display, an object, wherein the object is associated with a first marker having a first value and a second marker having a second value, and wherein a value of a characteristic of the object is based on the first value of the first marker;
receive user input representing rotation of the rotatable input mechanism;
in response to receiving the user input representing rotation of the rotatable input mechanism, determine whether an attribute of the user input exceeds a threshold value;
in accordance with a determination that the attribute of the user input exceeds the threshold value, update the value of the characteristic of the object based on the second value of the second marker; and
update display of the object in accordance with the updated value of the characteristic of the object. | The present disclosure relates to user interfaces for manipulating user interface objects. A device, including a display and a rotatable input mechanism, is described in relation to manipulating user interface objects. In some examples, the manipulation of the object is a scroll, zoom, or rotate of the object. In other examples, objects are selected in accordance with simulated magnetic properties.1. A non-transitory computer-readable storage medium comprising instructions for execution by one or more processors of an electronic device with a display and a rotatable input mechanism, the instructions for:
displaying, on the display, an object, wherein the object is associated with a first marker having a first value and a second marker having a second value, and wherein a value of a characteristic of the object is based on the first value of the first marker; receiving user input representing rotation of the rotatable input mechanism; in response to receiving the user input representing rotation of the rotatable input mechanism, determining whether an attribute of the user input exceeds a threshold value; in accordance with a determination that the attribute of the user input exceeds the threshold value, updating the value of the characteristic of the object based on the second value of the second marker; and updating display of the object in accordance with the updated value of the characteristic of the object. 2. The non-transitory computer-readable storage medium of claim 1, wherein updating display of the object in accordance with the updated value of the characteristic of the object comprises animating the object to reflect the updated value of the characteristic of the object. 3. The non-transitory computer-readable storage medium of claim 1, further comprising:
in accordance with a determination that the attribute of the user input is less than the threshold value, maintaining display of the object in accordance with the value of the characteristic of the object based on the first value of the first marker. 4. The non-transitory computer-readable storage medium of claim 1, further comprising:
in accordance with a determination that the attribute of the user input does not exceed the threshold value, updating the value of the characteristic of the object to a third value, the third value based on the user input. 5. The non-transitory computer-readable storage medium of claim 1, wherein the second marker is an anchor and the second value of the second marker is an intermediate value of the anchor. 6. The non-transitory computer-readable storage medium of claim 1, further comprising:
in accordance with a determination that the attribute of the user input exceeds the threshold value, performing a haptic alert at the electronic device. 7. The non-transitory computer-readable storage medium of claim 1, wherein the object is a document, and further comprising:
analyzing at least a portion of the document, wherein analyzing at least the portion of the document comprises identifying locations within the document. 8. The non-transitory computer-readable storage medium of claim 1, wherein the locations within the document include one or more of:
one or more page boundaries of at least the portion of the document; one or more paragraph boundaries of at least the portion of the document; and one or more keyword locations of at least the portion of the document; and further comprising: assigning markers to some or all of the identified page boundaries, paragraph boundaries, and keyword locations of the document. 9. The non-transitory computer-readable storage medium of claim 1, further comprising:
accessing a first set of markers of the object; detecting a change in value of the characteristic of the object; and in response to detecting the change in the value of the characteristic of the object, associating a second set of markers to the object, wherein the first set and the second set are different. 10. The non-transitory computer-readable storage medium of claim 1, further comprising:
in accordance with a determination that the attribute of the user input exceeds the threshold value, initiating a duration during which received user inputs representing rotation of the rotatable input mechanism do not affect the displayed characteristic of the object. 11. The non-transitory computer-readable storage medium of claim 1, wherein the attribute of the user input is angular velocity of the rotatable input mechanism and the threshold value is a threshold angular velocity. 12. The non-transitory computer-readable storage medium of claim 1, wherein the attribute of the user input is a maximum angular velocity of the rotatable input mechanism and the threshold value is a threshold angular velocity. 13. The non-transitory computer-readable storage medium of claim 1, wherein the attribute of the user input is angular acceleration of the rotatable input mechanism and the threshold value is a threshold angular acceleration. 14. The non-transitory computer-readable storage medium of claim 1, wherein the object is selected from the group consisting of a document and an image. 15. The non-transitory computer-readable storage medium of claim 1, wherein the characteristic of the object is selected from the group consisting of scroll position, zoom size, and degree of rotation. 16. A method, comprising:
at an electronic device with a display and a rotatable input mechanism:
displaying, on the display, an object, wherein the object is associated with a first marker having a first value and a second marker having a second value, and wherein a value of a characteristic of the object is based on the first value of the first marker;
receiving user input representing rotation of the rotatable input mechanism;
in response to receiving the user input representing rotation of the rotatable input mechanism, determining whether an attribute of the user input exceeds a threshold value;
in accordance with a determination that the attribute of the user input exceeds the threshold value, updating the value of the characteristic of the object based on the second value of the second marker; and
updating display of the object in accordance with the updated value of the characteristic of the object. 17. An electronic device, comprising:
a rotatable input mechanism; a display; and one or more processors coupled to the rotatable input mechanism and the display, the one or more processors configured to:
display, on the display, an object, wherein the object is associated with a first marker having a first value and a second marker having a second value, and wherein a value of a characteristic of the object is based on the first value of the first marker;
receive user input representing rotation of the rotatable input mechanism;
in response to receiving the user input representing rotation of the rotatable input mechanism, determine whether an attribute of the user input exceeds a threshold value;
in accordance with a determination that the attribute of the user input exceeds the threshold value, update the value of the characteristic of the object based on the second value of the second marker; and
update display of the object in accordance with the updated value of the characteristic of the object. | 2,100 |
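The marker logic recited in claims 1, 4, and 16 above (the characteristic snaps to the second marker's value when an attribute of the rotation input exceeds a threshold, and otherwise moves to a third, input-derived value) can be sketched as follows. Per claim 11 the attribute is taken to be angular velocity; the threshold constant and marker values are illustrative assumptions.

```python
# Hedged sketch of the threshold/marker update in claims 1, 4, and 16:
# when the rotation input's angular velocity (claim 11) exceeds a threshold,
# the characteristic (e.g. scroll position) jumps to the second marker's
# value; otherwise it updates to a third value based on the input (claim 4).
# The threshold constant is an assumption for the sketch.

THRESHOLD_ANGULAR_VELOCITY = 2.0  # assumed, radians/second


def update_characteristic(current: float, second_marker: float,
                          angular_velocity: float, delta: float) -> float:
    """Return the new value of the object's characteristic."""
    if abs(angular_velocity) > THRESHOLD_ANGULAR_VELOCITY:
        # Attribute exceeds the threshold: snap to the second marker.
        return second_marker
    # Below threshold: third value based on the user input itself.
    return current + delta
```

A fast twist of the crown thus jumps the scroll position to the next marker, while a slow twist scrolls proportionally, which matches the two branches of the claimed determination.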
6,694 | 6,694 | 15,382,103 | 2,133 | Generally discussed herein are systems, devices, and methods for a prefetcher in a multi-tiered memory (DSM) system. A node can include a network interface controller (NIC) comprising system address decoder (SAD) circuitry configured to determine a node identification of a node to which a memory request from a processor is homed, and prefetcher circuitry communicatively coupled to the SAD circuitry, the prefetcher circuitry to determine, based on an address in the memory request, one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the NIC. | 1. A network interface controller (NIC) comprising:
input/output (I/O) circuitry to receive, from system address decoder (SAD) circuitry, a node identification of a node to which a memory request from a processor is homed; and prefetcher circuitry communicatively coupled to the SAD circuitry, the prefetcher circuitry to determine, based on an address in the memory request, one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the NIC. 2. The NIC of claim 1, wherein the prefetcher circuitry includes a plurality of prefetcher circuits each dedicated to predicting a next one or more addresses of a respective memory to be accessed by the processor. 3. The NIC of claim 1, wherein the prefetcher circuitry includes first prefetcher circuitry to monitor only load requests from the processor and determine an address associated with a next load request from the processor and second prefetcher circuitry to monitor only get requests from the processor and determine an address associated with a next get request from the processor. 4. The NIC of claim 1, further comprising NIC logic circuitry configured to add one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 5. The NIC of claim 1, wherein the SAD circuitry includes a memory with a plurality of entries to indicate one or more of a memory type, an injection rate, and a granularity of a prefetching scheme to be used with a memory associated with a respective entry of the plurality of entries. 6. The NIC of claim 5, wherein the SAD circuitry is further configured to adjust one or more of the injection rate and the granularity in response to feedback from the node on the different network. 7. The NIC of claim 6, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity. 8. 
The NIC of claim 1, wherein the prefetcher circuitry includes first prefetcher circuitry configured to implement a first prefetching scheme and second prefetcher circuitry configured to implement a second, different prefetching scheme. 9. The NIC of claim 1, wherein the NIC includes the circuitry. 10. A non-transitory machine-readable medium including instructions stored thereon that, when executed by a network interface controller (NIC), configure the NIC to:
receive, from a processor, a plurality of requests for data on a different network than the processor; determine, based on a plurality of addresses of the plurality of requests, one or more addresses from which to prefetch data from the different network for the processor; issue a prefetch request for data from the determined one or more next addresses; and provide, to the processor, the data corresponding to the prefetch request. 11. The non-transitory machine-readable medium of claim 10, further comprising instructions that, when executed by the NIC, configure the NIC to:
monitor, using first instructions of the instructions, only load requests from the processor; determine, using the first instructions, an address associated with a next load request from the processor; monitor, using second instructions of the instructions, only get requests from the processor; and determine, using the second instructions, an address associated with a next get request from the processor. 12. The non-transitory machine-readable medium of claim 10, further comprising instructions that, when executed by the NIC, configure the NIC to add one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 13. The non-transitory machine-readable medium of claim 10, further comprising instructions that, when executed by the NIC, configure the NIC to adjust one or more of an injection rate and a granularity value stored in a memory in response to feedback from the node on the different network. 14. The non-transitory machine-readable medium of claim 13, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity value. 15. A system operating in a computer network, the system comprising:
a plurality of communicatively coupled nodes, each including a network interface controller (NIC) coupled to a plurality of processors, each NIC comprising: system address decoder (SAD) circuitry configured to determine a node identification of a node to which a memory request from a processor is homed; and prefetcher circuitry communicatively coupled to the SAD circuitry, the prefetcher circuitry to determine one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the computer network. 16. The system of claim 15, wherein the prefetcher circuitry includes a plurality of prefetcher circuits each dedicated to predicting a next one or more addresses of a respective memory to be accessed by the processor. 17. The system of claim 15, wherein the prefetcher circuitry includes a plurality of prefetcher circuits, wherein the plurality of prefetcher circuits includes a first prefetcher circuit to monitor load requests from the processor and determine an address associated with a next load request from the processor and a second prefetcher circuit to monitor get requests from the processor and determine an address associated with a next get request from the processor. 18. The system of claim 15, wherein each NIC further comprises:
NIC logic circuitry configured to add one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 19. The system of claim 15, wherein each NIC further comprises SAD circuitry including a memory with respective entries detailing one or more of a memory type, an injection rate, and a granularity of a prefetching scheme to be used with a corresponding memory. 20. The system of claim 19, wherein the SAD circuitry is further configured to adjust one or more of the injection rate and the granularity in response to feedback from the node on the different network. 21. The system of claim 20, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity. 22. The system of claim 15, wherein the prefetcher circuitry includes first prefetcher circuitry configured to implement a first prefetching scheme and second prefetcher circuitry configured to implement a second, different prefetching scheme. 23. A network interface controller (NIC) comprising:
means for determining a node identification of a node to which a memory request from a processor is homed; and means for determining, based on an address in the memory request, one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the NIC. 24. The NIC of claim 23, further comprising means for monitoring only load requests from the processor and determining an address associated with a next load request from the processor and other means for monitoring only get requests from the processor and determining an address associated with a next get request from the processor. 25. The NIC of claim 23, further comprising means for adding one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 26. The NIC of claim 23, further comprising means for adjusting one or more of an injection rate and a granularity of prefetching in response to feedback from the node on the different network. 27. The NIC of claim 26, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity. 28. A method comprising:
receiving, from a processor and at a network interface controller (NIC), a plurality of requests for data residing on a different network than the processor; determining, based on a plurality of addresses of the plurality of requests and by prefetcher circuitry of the NIC, one or more next addresses from which to prefetch data from the different network for the processor; issuing a prefetch request, by the NIC, for data from the determined one or more next addresses; and providing, to the processor, the data corresponding to the prefetch request. 29. The method of claim 28, wherein the prefetcher circuitry includes a plurality of prefetcher circuits, and wherein determining the one or more addresses from which to prefetch data includes determining, using a prefetcher circuit of the plurality of prefetcher circuits dedicated to performing a prefetching scheme for a single remote memory, a next one or more addresses of the memory to be accessed by the processor. 30. The method of claim 28, wherein the prefetcher circuitry includes a plurality of prefetcher circuits, and the method further comprises:
monitoring, using a first prefetcher circuit of the plurality of prefetcher circuits, only load requests from the processor; determining, using the first prefetcher circuit, an address associated with a next load request from the processor; monitoring, using a second prefetcher circuit of the plurality of prefetcher circuits, only get requests from the processor; and determining, using the second prefetcher circuit, an address associated with a next get request from the processor. 31. The method of claim 28, further comprising adding, by the NIC, one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 32. The method of claim 28, wherein the SAD circuitry includes a memory with respective entries detailing one or more of a memory type, an injection rate, and a granularity of a prefetching scheme to be used with a corresponding memory, and the method further comprises adjusting one or more of the injection rate and the granularity in response to feedback from the node on the different network. 33. The method of claim 32, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity. 34. The method of claim 28, wherein the prefetcher circuitry includes first prefetcher circuitry and second prefetcher circuitry, and wherein determining the one or more addresses from which to prefetch data includes determining, using the first prefetcher circuitry, a next one or more addresses of the memory to be accessed by the processor using a first prefetching scheme and determining, using the second prefetcher circuitry, a next one or more addresses of a different memory to be accessed by the processor using a second prefetching scheme. | Generally discussed herein are systems, devices, and methods for a prefetcher in a multi-tiered memory (DSM) system. 
A node can include a network interface controller (NIC) comprising system address decoder (SAD) circuitry configured to determine a node identification of a node to which a memory request from a processor is homed, and prefetcher circuitry communicatively coupled to the SAD circuitry, the prefetcher circuitry to determine, based on an address in the memory request, one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the NIC. 1. A network interface controller (NIC) comprising:
input/output (I/O) circuitry to receive, from system address decoder (SAD) circuitry, a node identification of a node to which a memory request from a processor is homed; and prefetcher circuitry communicatively coupled to the SAD circuitry, the prefetcher circuitry to determine, based on an address in the memory request, one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the NIC. 2. The NIC of claim 1, wherein the prefetcher circuitry includes a plurality of prefetcher circuits each dedicated to predicting a next one or more addresses of a respective memory to be accessed by the processor. 3. The NIC of claim 1, wherein the prefetcher circuitry includes first prefetcher circuitry to monitor only load requests from the processor and determine an address associated with a next load request from the processor and second prefetcher circuitry to monitor only get requests from the processor and determine an address associated with a next get request from the processor. 4. The NIC of claim 1, further comprising NIC logic circuitry configured to add one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 5. The NIC of claim 1, wherein the SAD circuitry includes a memory with a plurality of entries to indicate one or more of a memory type, an injection rate, and a granularity of a prefetching scheme to be used with a memory associated with a respective entry of the plurality of entries. 6. The NIC of claim 5, wherein the SAD circuitry is further configured to adjust one or more of the injection rate and the granularity in response to feedback from the node on the different network. 7. The NIC of claim 6, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity. 8. 
The NIC of claim 1, wherein the prefetcher circuitry includes first prefetcher circuitry configured to implement a first prefetching scheme and second prefetcher circuitry configured to implement a second, different prefetching scheme. 9. The NIC of claim 1, wherein the NIC includes the circuitry. 10. A non-transitory machine-readable medium including instructions stored thereon that, when executed by a network interface controller (NIC), configure the NIC to:
receive, from a processor, a plurality of requests for data on a different network than the processor; determine, based on a plurality of addresses of the plurality of requests, one or more addresses from which to prefetch data from the different network for the processor; issue a prefetch request for data from the determined one or more next addresses; and provide, to the processor, the data corresponding to the prefetch request. 11. The non-transitory machine-readable medium of claim 10, further comprising instructions that, when executed by the NIC, configure the NIC to:
monitor, using first instructions of the instructions, only load requests from the processor; determine, using the first instructions, an address associated with a next load request from the processor; monitor, using second instructions of the instructions, only get requests from the processor; and determine, using the second instructions, an address associated with a next get request from the processor. 12. The non-transitory machine-readable medium of claim 10, further comprising instructions that, when executed by the NIC, configure the NIC to add one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 13. The non-transitory machine-readable medium of claim 10, further comprising instructions that, when executed by the NIC, configure the NIC to adjust one or more of an injection rate and a granularity value stored in a memory in response to feedback from the node on the different network. 14. The non-transitory machine-readable medium of claim 13, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity value. 15. A system operating in a computer network, the system comprising:
a plurality of communicatively coupled nodes, each including a network interface controller (NIC) coupled to a plurality of processors, each NIC comprising: system address decoder (SAD) circuitry configured to determine a node identification of a node to which a memory request from a processor is homed; and prefetcher circuitry communicatively coupled to the SAD circuitry, the prefetcher circuitry to determine one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the computer network. 16. The system of claim 15, wherein the prefetcher circuitry includes a plurality of prefetcher circuits each dedicated to predicting a next one or more addresses of a respective memory to be accessed by the processor. 17. The system of claim 15, wherein the prefetcher circuitry includes a plurality of prefetcher circuits, wherein the plurality of prefetcher circuits includes a first prefetcher circuit to monitor load requests from the processor and determine an address associated with a next load request from the processor and a second prefetcher circuit to monitor get requests from the processor and determine an address associated with a next get request from the processor. 18. The system of claim 15, wherein each NIC further comprises:
NIC logic circuitry configured to add one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 19. The system of claim 15, wherein each NIC further comprises SAD circuitry including a memory with respective entries detailing one or more of a memory type, an injection rate, and a granularity of a prefetching scheme to be used with a corresponding memory. 20. The system of claim 19, wherein the SAD circuitry is further configured to adjust one or more of the injection rate and the granularity in response to feedback from the node on the different network. 21. The system of claim 20, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity. 22. The system of claim 15, wherein the prefetcher circuitry includes first prefetcher circuitry configured to implement a first prefetching scheme and second prefetcher circuitry configured to implement a second, different prefetching scheme. 23. A network interface controller (NIC) comprising:
means for determining a node identification of a node to which a memory request from a processor is homed; and means for determining, based on an address in the memory request, one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the NIC. 24. The NIC of claim 23, further comprising means for monitoring only load requests from the processor and determining an address associated with a next load request from the processor and other means for monitoring only get requests from the processor and determining an address associated with a next get request from the processor. 25. The NIC of claim 23, further comprising means for adding one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 26. The NIC of claim 23, further comprising means for adjusting one or more of an injection rate and a granularity of prefetching in response to feedback from the node on the different network. 27. The NIC of claim 26, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity. 28. A method comprising:
receiving, from a processor and at a network interface controller (NIC), a plurality of requests for data residing on a different network than the processor; determining, based on a plurality of addresses of the plurality of requests and by prefetcher circuitry of the NIC, one or more next addresses from which to prefetch data from the different network for the processor; issuing a prefetch request, by the NIC, for data from the determined one or more next addresses; and providing, to the processor, the data corresponding to the prefetch request. 29. The method of claim 28, wherein the prefetcher circuitry includes a plurality of prefetcher circuits, and wherein determining the one or more addresses from which to prefetch data includes determining, using a prefetcher circuit of the plurality of prefetcher circuits dedicated to performing a prefetching scheme for a single remote memory, a next one or more addresses of the memory to be accessed by the processor. 30. The method of claim 28, wherein the prefetcher circuitry includes a plurality of prefetcher circuits, and the method further comprises:
monitoring, using a first prefetcher circuit of the plurality of prefetcher circuits, only load requests from the processor; determining, using the first prefetcher circuit, an address associated with a next load request from the processor; monitoring, using a second prefetcher circuit of the plurality of prefetcher circuits, only get requests from the processor; and determining, using the second prefetcher circuit, an address associated with a next get request from the processor. 31. The method of claim 28, further comprising adding, by the NIC, one or more bits to a decode request indicating whether or not the decode request is associated with a prefetch operation. 32. The method of claim 28, wherein the SAD circuitry includes a memory with respective entries detailing one or more of a memory type, an injection rate, and a granularity of a prefetching scheme to be used with a corresponding memory, and the method further comprises adjusting one or more of the injection rate and the granularity in response to feedback from the node on the different network. 33. The method of claim 32, wherein the feedback includes one or more bits indicating whether to increase, decrease, or keep constant one or more of the injection rate and the granularity. 34. The method of claim 28, wherein the prefetcher circuitry includes first prefetcher circuitry and second prefetcher circuitry, and wherein determining the one or more addresses from which to prefetch data includes determining, using the first prefetcher circuitry, a next one or more addresses of the memory to be accessed by the processor using a first prefetching scheme and determining, using the second prefetcher circuitry, a next one or more addresses of a different memory to be accessed by the processor using a second prefetching scheme. | 2,100 |
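The method of claims 28-30 (observe the addresses of a processor's remote memory requests, predict the next addresses, and issue prefetches for them) can be illustrated with a simple stride predictor. The claimed prefetcher circuitry is hardware in the NIC; the class below is only a software sketch of the address-prediction step, and its names and the stride scheme are illustrative assumptions, not the patent's design:

```python
class StridePrefetcher:
    """Sketch: track the address stream of incoming requests, and once the
    same stride is seen twice in a row, propose the next `depth` addresses
    at that stride as prefetch candidates."""

    def __init__(self, depth=2):
        self.last = None    # address of the previous request, if any
        self.stride = None  # stride between the last two requests
        self.depth = depth  # how many addresses ahead to prefetch

    def observe(self, addr):
        """Record one request address; return addresses to prefetch."""
        prefetch = []
        if self.last is not None:
            stride = addr - self.last
            if stride == self.stride and stride != 0:
                # Stride confirmed: predict the next `depth` addresses.
                prefetch = [addr + stride * i for i in range(1, self.depth + 1)]
            self.stride = stride
        self.last = addr
        return prefetch

p = StridePrefetcher(depth=2)
p.observe(0x100)            # first address: nothing to predict yet
p.observe(0x140)            # stride 0x40 seen once: still nothing
print(p.observe(0x180))     # stride confirmed -> [0x1c0, 0x200]
```

A real NIC would also consult the SAD entries (injection rate, granularity) before issuing these prefetches, and separate circuits could track load and get request streams independently, as claims 29-30 describe.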
6,695 | 6,695 | 15,921,045 | 2,169 | A method is provided for analyzing and interpreting a dataset composed of electronic documents including free-form text. The method includes unifying terms of interest in the collection of terms of interest to identify variants of the terms of interest. This includes identifying candidate variants of a term of interest based on semantic similarity between the term of interest and other terms in the database, determined using an unsupervised machine learning algorithm. Linguistic features and contextual features of the term of interest and its candidate variants are extracted, at least the contextual features being extracted using the unsupervised machine learning algorithm. And a supervised machine learning algorithm is used with the linguistic features and contextual features to identify variants of the term of interest from the candidate variants, such as for application to generate features of the documents for data analytics performed thereon. | 1. An apparatus for extracting features from electronic documents for database query processing, the apparatus comprising:
a memory storing a collection of terms of interest from a database composed of a plurality of electronic documents including free-form text; and processing circuitry configured to access the memory, and execute computer-readable program code to cause the apparatus to at least:
unify terms of interest in the collection of terms of interest to identify variants of the terms of interest, including for a term of interest, the apparatus being caused to at least:
use an unsupervised machine learning algorithm to determine semantic similarity between the term of interest and other terms in the database, and identify candidate variants of the term of interest based thereon;
extract linguistic features and contextual features of the term of interest and the candidate variants of the term of interest, at least the contextual features being extracted using the unsupervised machine learning algorithm; and
use a supervised machine learning algorithm with the linguistic features and contextual features to identify variants of the term of interest from the candidate variants of the term of interest; and
execute a database query for features of the plurality of electronic documents from the database using the collection of terms of interest with arrays in which the terms of interest and variants of the terms of interest are collected, for data analytics performed thereon. 2. The apparatus of claim 1, wherein before the apparatus is caused to unify the terms of interest, the processing circuitry is configured to execute further computer-readable program code to cause the apparatus to further:
define a training set for the supervised machine learning algorithm, the training set including pairs of a term and respective other terms, and predictions of the respective other terms being variants of the term, the predictions including predictions of at least some of the other terms being variants of the term, and at least some of the other terms not being variants of the term; extract linguistic features and contextual features of the term and the respective other terms, at least the contextual features being extracted using the unsupervised machine learning algorithm; and use the training set and the linguistic features and contextual features to train the supervised machine learning algorithm. 3. The apparatus of claim 1, wherein after the apparatus is caused to unify the terms of interest, the processing circuitry is configured to execute further computer-readable program code to cause the apparatus to further:
normalize the terms of interest to provide canonical names for the terms of interest and variants of the terms of interest, the arrays in which the terms of interest and variants of the terms of interest are collected being identifiable by respective ones of the canonical names. 4. The apparatus of claim 1, wherein the collection of terms of interest includes multiword terms. 5. The apparatus of claim 4, wherein the apparatus being caused to unify the terms of interest includes being caused to unify the multiword terms that are equal in number of words and according to head words in the multiword terms. 6. The apparatus of claim 4, wherein the apparatus being caused to unify the multiword terms includes being caused to at least:
identify a group of the multiword terms that are equal in number of words; and unify head words in the group of the multiword terms, including for a head word of the head words, using the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the head words that are variants of the head word, those of the group of the multiword terms that have the head word and variants of the head word constituting a unified group of multiword terms. 7. The apparatus of claim 6, the processing circuitry is configured to execute further computer-readable program code to cause the apparatus to further normalize the head word and variants of the head word to provide a canonical name for the head word and variants of the head word, and in a display of multiword terms in the unified group of multiword terms, represent any of the head word and variants of the head word that differ from the canonical name with the canonical name. 8. The apparatus of claim 6, wherein the apparatus being caused to unify the multiword terms further includes being caused to at least:
unify modifiers in the unified group of multiword terms, including for a modifier of the modifiers in a multiword term of the unified group, the apparatus being caused to use the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the modifiers in others of the unified group that are variants of the modifier. 9. The apparatus of claim 8, wherein the processing circuitry is configured to execute further computer-readable program code to cause the apparatus to further normalize the modifier and variants of the modifier to provide a canonical name for the modifier and variants of the modifier, and in a display of multiword terms in the unified group of multiword terms, represent any of the modifier and variants of the modifier that differ from the canonical name with the canonical name. 10. A method of extracting features from electronic documents for database query processing, the method comprising:
accessing, by processing circuitry, a memory storing a collection of terms of interest from a database composed of a plurality of electronic documents including free-form text; unifying, by the processing circuitry, terms of interest in the collection of terms of interest to identify variants of the terms of interest, including for a term of interest:
using an unsupervised machine learning algorithm to determine semantic similarity between the term of interest and other terms in the database, and identify candidate variants of the term of interest based thereon;
extracting linguistic features and contextual features of the term of interest and the candidate variants of the term of interest, at least the contextual features being extracted using the unsupervised machine learning algorithm; and
using a supervised machine learning algorithm with the linguistic features and contextual features to identify variants of the term of interest from the candidate variants of the term of interest; and
executing, by the processing circuitry, a database query for features of the plurality of electronic documents from the database using the collection of terms of interest with arrays in which the terms of interest and variants of the terms of interest are collected, for data analytics performed thereon. 11. The method of claim 10, wherein before unifying the terms of interest, the method further comprises:
defining a training set for the supervised machine learning algorithm, the training set including pairs of a term and respective other terms, and predictions of the respective other terms being variants of the term, the predictions including predictions of at least some of the other terms being variants of the term, and at least some of the other terms not being variants of the term; extracting linguistic features and contextual features of the term and the respective other terms, at least the contextual features being extracted using the unsupervised machine learning algorithm; and using the training set and the linguistic features and contextual features to train the supervised machine learning algorithm. 12. The method of claim 10, wherein after unifying the terms of interest, the method further comprises:
normalizing, by the processing circuitry, the terms of interest to provide canonical names for the terms of interest and variants of the terms of interest, the arrays in which the terms of interest and variants of the terms of interest are collected being identifiable by respective ones of the canonical names. 13. The method of claim 10, wherein the collection of terms of interest includes multiword terms. 14. The method of claim 13, wherein unifying the terms of interest includes unifying the multiword terms that are equal in number of words and according to head words in the multiword terms. 15. The method of claim 13, wherein unifying the multiword terms includes:
identifying a group of the multiword terms that are equal in number of words; and unifying head words in the group of the multiword terms, including for a head word of the head words, using the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the head words that are variants of the head word, those of the group of the multiword terms that have the head word and variants of the head word constituting a unified group of multiword terms. 16. The method of claim 15 further comprising normalizing, by the processing circuitry, the head word and variants of the head word to provide a canonical name for the head word and variants of the head word, and in a display of multiword terms in the unified group of multiword terms, represent any of the head word and variants of the head word that differ from the canonical name with the canonical name. 17. The method of claim 15, wherein unifying the multiword terms further includes:
unifying modifiers in the unified group of multiword terms, including for a modifier of the modifiers in a multiword term of the unified group, using the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the modifiers in others of the unified group that are variants of the modifier. 18. The method of claim 17 further comprising normalizing, by the processing circuitry, the modifier and variants of the modifier to provide a canonical name for the modifier and variants of the modifier, and in a display of multiword terms in the unified group of multiword terms, represent any of the modifier and variants of the modifier that differ from the canonical name with the canonical name. 19. A non-transitory computer-readable storage medium for extracting features from electronic documents for database query processing, the computer-readable storage medium having computer-readable program code stored therein that in response to execution by processing circuitry, causes an apparatus to at least:
access a memory storing a collection of terms of interest from a database composed of a plurality of electronic documents including free-form text; unify terms of interest in the collection of terms of interest to identify variants of the terms of interest, including for a term of interest, the apparatus being caused to at least:
use an unsupervised machine learning algorithm to determine semantic similarity between the term of interest and other terms in the database, and identify candidate variants of the term of interest based thereon;
extract linguistic features and contextual features of the term of interest and the candidate variants of the term of interest, at least the contextual features being extracted using the unsupervised machine learning algorithm; and
use a supervised machine learning algorithm with the linguistic features and contextual features to identify variants of the term of interest from the candidate variants of the term of interest; and
execute a database query for features of the plurality of electronic documents from the database using the collection of terms of interest with arrays in which the terms of interest and variants of the terms of interest are collected, for data analytics performed thereon. 20. The non-transitory computer-readable storage medium of claim 19 having further computer-readable program code stored therein that in response to execution by the processing circuitry, and before the apparatus is caused to unify the terms of interest, causes the apparatus to further:
define a training set for the supervised machine learning algorithm, the training set including pairs of a term and respective other terms, and predictions of the respective other terms being variants of the term, the predictions including predictions of at least some of the other terms being variants of the term, and at least some of the other terms not being variants of the term;
extract linguistic features and contextual features of the term and the respective other terms, at least the contextual features being extracted using the unsupervised machine learning algorithm; and
use the training set and the linguistic features and contextual features to train the supervised machine learning algorithm. 21. The non-transitory computer-readable storage medium of claim 19 having further computer-readable program code stored therein that in response to execution by the processing circuitry, and after the apparatus is caused to unify the terms of interest, causes the apparatus to further:
normalize the terms of interest to provide canonical names for the terms of interest and variants of the terms of interest, the arrays in which the terms of interest and variants of the terms of interest are collected being identifiable by respective ones of the canonical names. 22. The non-transitory computer-readable storage medium of claim 19, wherein the collection of terms of interest includes multiword terms. 23. The non-transitory computer-readable storage medium of claim 22, wherein the apparatus being caused to unify the terms of interest includes being caused to unify the multiword terms that are equal in number of words and according to head words in the multiword terms. 24. The non-transitory computer-readable storage medium of claim 22, wherein the apparatus being caused to unify the multiword terms includes being caused to at least:
identify a group of the multiword terms that are equal in number of words; and unify head words in the group of the multiword terms, including for a head word of the head words, using the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the head words that are variants of the head word, those of the group of the multiword terms that have the head word and variants of the head word constituting a unified group of multiword terms. 25. The non-transitory computer-readable storage medium of claim 24 having further computer-readable program code stored therein that in response to execution by the processing circuitry, causes the apparatus to further normalize the head word and variants of the head word to provide a canonical name for the head word and variants of the head word, and in a display of multiword terms in the unified group of multiword terms, represent any of the head word and variants of the head word that differ from the canonical name with the canonical name. 26. The non-transitory computer-readable storage medium of claim 24, wherein the apparatus being caused to unify the multiword terms further includes being caused to at least:
unify modifiers in the unified group of multiword terms, including for a modifier of the modifiers in a multiword term of the unified group, the apparatus being caused to use the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the modifiers in others of the unified group that are variants of the modifier. 27. The non-transitory computer-readable storage medium of claim 26 having further computer-readable program code stored therein that in response to execution by the processing circuitry, causes the apparatus to further normalize the modifier and variants of the modifier to provide a canonical name for the modifier and variants of the modifier, and in a display of multiword terms in the unified group of multiword terms, represent any of the modifier and variants of the modifier that differ from the canonical name with the canonical name.

A method is provided for analyzing and interpreting a dataset composed of electronic documents including free-form text. The method includes unifying terms of interest in the collection of terms of interest to identify variants of the terms of interest. This includes identifying candidate variants of a term of interest based on semantic similarity between the term of interest and other terms in the database, determined using an unsupervised machine learning algorithm. Linguistic features and contextual features of the term of interest and its candidate variants are extracted, at least the contextual features being extracted using the unsupervised machine learning algorithm. A supervised machine learning algorithm is then used with the linguistic features and contextual features to identify variants of the term of interest from the candidate variants, such as for application to generate features of the documents for data analytics performed thereon.

1. An apparatus for extracting features from electronic documents for database query processing, the apparatus comprising:
a memory storing a collection of terms of interest from a database composed of a plurality of electronic documents including free-form text; and processing circuitry configured to access the memory, and execute computer-readable program code to cause the apparatus to at least:
unify terms of interest in the collection of terms of interest to identify variants of the terms of interest, including for a term of interest, the apparatus being caused to at least:
use an unsupervised machine learning algorithm to determine semantic similarity between the term of interest and other terms in the database, and identify candidate variants of the term of interest based thereon;
extract linguistic features and contextual features of the term of interest and the candidate variants of the term of interest, at least the contextual features being extracted using the unsupervised machine learning algorithm; and
use a supervised machine learning algorithm with the linguistic features and contextual features to identify variants of the term of interest from the candidate variants of the term of interest; and
execute a database query for features of the plurality of electronic documents from the database using the collection of terms of interest with arrays in which the terms of interest and variants of the terms of interest are collected, for data analytics performed thereon. 2. The apparatus of claim 1, wherein before the apparatus is caused to unify the terms of interest, the processing circuitry is configured to execute further computer-readable program code to cause the apparatus to further:
define a training set for the supervised machine learning algorithm, the training set including pairs of a term and respective other terms, and predictions of the respective other terms being variants of the term, the predictions including predictions of at least some of the other terms being variants of the term, and at least some of the other terms not being variants of the term; extract linguistic features and contextual features of the term and the respective other terms, at least the contextual features being extracted using the unsupervised machine learning algorithm; and use the training set and the linguistic features and contextual features to train the supervised machine learning algorithm. 3. The apparatus of claim 1, wherein after the apparatus is caused to unify the terms of interest, the processing circuitry is configured to execute further computer-readable program code to cause the apparatus to further:
normalize the terms of interest to provide canonical names for the terms of interest and variants of the terms of interest, the arrays in which the terms of interest and variants of the terms of interest are collected being identifiable by respective ones of the canonical names. 4. The apparatus of claim 1, wherein the collection of terms of interest includes multiword terms. 5. The apparatus of claim 4, wherein the apparatus being caused to unify the terms of interest includes being caused to unify the multiword terms that are equal in number of words and according to head words in the multiword terms. 6. The apparatus of claim 4, wherein the apparatus being caused to unify the multiword terms includes being caused to at least:
identify a group of the multiword terms that are equal in number of words; and unify head words in the group of the multiword terms, including for a head word of the head words, using the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the head words that are variants of the head word, those of the group of the multiword terms that have the head word and variants of the head word constituting a unified group of multiword terms. 7. The apparatus of claim 6, wherein the processing circuitry is configured to execute further computer-readable program code to cause the apparatus to further normalize the head word and variants of the head word to provide a canonical name for the head word and variants of the head word, and in a display of multiword terms in the unified group of multiword terms, represent any of the head word and variants of the head word that differ from the canonical name with the canonical name. 8. The apparatus of claim 6, wherein the apparatus being caused to unify the multiword terms further includes being caused to at least:
unify modifiers in the unified group of multiword terms, including for a modifier of the modifiers in a multiword term of the unified group, the apparatus being caused to use the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the modifiers in others of the unified group that are variants of the modifier. 9. The apparatus of claim 8, wherein the processing circuitry is configured to execute further computer-readable program code to cause the apparatus to further normalize the modifier and variants of the modifier to provide a canonical name for the modifier and variants of the modifier, and in a display of multiword terms in the unified group of multiword terms, represent any of the modifier and variants of the modifier that differ from the canonical name with the canonical name. 10. A method of extracting features from electronic documents for database query processing, the method comprising:
accessing, by processing circuitry, a memory storing a collection of terms of interest from a database composed of a plurality of electronic documents including free-form text; unifying, by the processing circuitry, terms of interest in the collection of terms of interest to identify variants of the terms of interest, including for a term of interest:
using an unsupervised machine learning algorithm to determine semantic similarity between the term of interest and other terms in the database, and identify candidate variants of the term of interest based thereon;
extracting linguistic features and contextual features of the term of interest and the candidate variants of the term of interest, at least the contextual features being extracted using the unsupervised machine learning algorithm; and
using a supervised machine learning algorithm with the linguistic features and contextual features to identify variants of the term of interest from the candidate variants of the term of interest; and
executing, by the processing circuitry, a database query for features of the plurality of electronic documents from the database using the collection of terms of interest with arrays in which the terms of interest and variants of the terms of interest are collected, for data analytics performed thereon. 11. The method of claim 10, wherein before unifying the terms of interest, the method further comprises:
defining a training set for the supervised machine learning algorithm, the training set including pairs of a term and respective other terms, and predictions of the respective other terms being variants of the term, the predictions including predictions of at least some of the other terms being variants of the term, and at least some of the other terms not being variants of the term; extracting linguistic features and contextual features of the term and the respective other terms, at least the contextual features being extracted using the unsupervised machine learning algorithm; and using the training set and the linguistic features and contextual features to train the supervised machine learning algorithm. 12. The method of claim 10, wherein after unifying the terms of interest, the method further comprises:
normalizing, by the processing circuitry, the terms of interest to provide canonical names for the terms of interest and variants of the terms of interest, the arrays in which the terms of interest and variants of the terms of interest are collected being identifiable by respective ones of the canonical names. 13. The method of claim 10, wherein the collection of terms of interest includes multiword terms. 14. The method of claim 13, wherein unifying the terms of interest includes unifying the multiword terms that are equal in number of words and according to head words in the multiword terms. 15. The method of claim 13, wherein unifying the multiword terms includes:
identifying a group of the multiword terms that are equal in number of words; and unifying head words in the group of the multiword terms, including for a head word of the head words, using the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the head words that are variants of the head word, those of the group of the multiword terms that have the head word and variants of the head word constituting a unified group of multiword terms. 16. The method of claim 15 further comprising normalizing, by the processing circuitry, the head word and variants of the head word to provide a canonical name for the head word and variants of the head word, and in a display of multiword terms in the unified group of multiword terms, represent any of the head word and variants of the head word that differ from the canonical name with the canonical name. 17. The method of claim 15, wherein unifying the multiword terms further includes:
unifying modifiers in the unified group of multiword terms, including for a modifier of the modifiers in a multiword term of the unified group, using the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the modifiers in others of the unified group that are variants of the modifier. 18. The method of claim 17 further comprising normalizing, by the processing circuitry, the modifier and variants of the modifier to provide a canonical name for the modifier and variants of the modifier, and in a display of multiword terms in the unified group of multiword terms, represent any of the modifier and variants of the modifier that differ from the canonical name with the canonical name. 19. A non-transitory computer-readable storage medium for extracting features from electronic documents for database query processing, the computer-readable storage medium having computer-readable program code stored therein that in response to execution by processing circuitry, causes an apparatus to at least:
access a memory storing a collection of terms of interest from a database composed of a plurality of electronic documents including free-form text; unify terms of interest in the collection of terms of interest to identify variants of the terms of interest, including for a term of interest, the apparatus being caused to at least:
use an unsupervised machine learning algorithm to determine semantic similarity between the term of interest and other terms in the database, and identify candidate variants of the term of interest based thereon;
extract linguistic features and contextual features of the term of interest and the candidate variants of the term of interest, at least the contextual features being extracted using the unsupervised machine learning algorithm; and
use a supervised machine learning algorithm with the linguistic features and contextual features to identify variants of the term of interest from the candidate variants of the term of interest; and
execute a database query for features of the plurality of electronic documents from the database using the collection of terms of interest with arrays in which the terms of interest and variants of the terms of interest are collected, for data analytics performed thereon. 20. The non-transitory computer-readable storage medium of claim 19 having further computer-readable program code stored therein that in response to execution by the processing circuitry, and before the apparatus is caused to unify the terms of interest, causes the apparatus to further:
define a training set for the supervised machine learning algorithm, the training set including pairs of a term and respective other terms, and predictions of the respective other terms being variants of the term, the predictions including predictions of at least some of the other terms being variants of the term, and at least some of the other terms not being variants of the term;
extract linguistic features and contextual features of the term and the respective other terms, at least the contextual features being extracted using the unsupervised machine learning algorithm; and
use the training set and the linguistic features and contextual features to train the supervised machine learning algorithm. 21. The non-transitory computer-readable storage medium of claim 19 having further computer-readable program code stored therein that in response to execution by the processing circuitry, and after the apparatus is caused to unify the terms of interest, causes the apparatus to further:
normalize the terms of interest to provide canonical names for the terms of interest and variants of the terms of interest, the arrays in which the terms of interest and variants of the terms of interest are collected being identifiable by respective ones of the canonical names. 22. The non-transitory computer-readable storage medium of claim 19, wherein the collection of terms of interest includes multiword terms. 23. The non-transitory computer-readable storage medium of claim 22, wherein the apparatus being caused to unify the terms of interest includes being caused to unify the multiword terms that are equal in number of words and according to head words in the multiword terms. 24. The non-transitory computer-readable storage medium of claim 22, wherein the apparatus being caused to unify the multiword terms includes being caused to at least:
identify a group of the multiword terms that are equal in number of words; and unify head words in the group of the multiword terms, including for a head word of the head words, using the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the head words that are variants of the head word, those of the group of the multiword terms that have the head word and variants of the head word constituting a unified group of multiword terms. 25. The non-transitory computer-readable storage medium of claim 24 having further computer-readable program code stored therein that in response to execution by the processing circuitry, causes the apparatus to further normalize the head word and variants of the head word to provide a canonical name for the head word and variants of the head word, and in a display of multiword terms in the unified group of multiword terms, represent any of the head word and variants of the head word that differ from the canonical name with the canonical name. 26. The non-transitory computer-readable storage medium of claim 24, wherein the apparatus being caused to unify the multiword terms further includes being caused to at least:
unify modifiers in the unified group of multiword terms, including for a modifier of the modifiers in a multiword term of the unified group, the apparatus being caused to use the unsupervised machine learning algorithm and the supervised machine learning algorithm to identify others of the modifiers in others of the unified group that are variants of the modifier. 27. The non-transitory computer-readable storage medium of claim 26 having further computer-readable program code stored therein that in response to execution by the processing circuitry, causes the apparatus to further normalize the modifier and variants of the modifier to provide a canonical name for the modifier and variants of the modifier, and in a display of multiword terms in the unified group of multiword terms, represent any of the modifier and variants of the modifier that differ from the canonical name with the canonical name.

Tech Center: 2100
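The claims above recite a two-stage variant-identification pipeline: an unsupervised model proposes semantically similar candidates for a term of interest, and a supervised classifier over linguistic and contextual features accepts or rejects each candidate. The following is a minimal illustrative sketch of that shape, not the patented implementation; the toy embeddings, the feature set, and the fixed linear weights are all assumptions standing in for the trained models.

```python
# Illustrative sketch of the claimed two-stage pipeline; the toy
# embeddings, feature set, and fixed weights are assumptions standing
# in for the trained unsupervised/supervised models.
from difflib import SequenceMatcher


def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0


def candidate_variants(term, embeddings, threshold=0.7):
    """Stage 1: propose candidates by semantic similarity (stands in
    for the unsupervised machine learning algorithm in the claims)."""
    return [t for t in embeddings
            if t != term and cosine(embeddings[term], embeddings[t]) >= threshold]


def features(a, b):
    """Linguistic features of a term pair: surface edit similarity and
    a shared-prefix flag (contextual features would come from the
    unsupervised model; omitted here for brevity)."""
    return [SequenceMatcher(None, a, b).ratio(),
            1.0 if a[:4] == b[:4] else 0.0]


def is_variant(a, b, weights=(2.0, 1.0), bias=-1.5):
    """Stage 2: a linear decision rule standing in for the trained
    supervised classifier over the extracted features."""
    score = sum(w * f for w, f in zip(weights, features(a, b)))
    return score + bias > 0


def unify(term, embeddings):
    """Keep only the candidates the classifier accepts as variants."""
    return [c for c in candidate_variants(term, embeddings) if is_variant(term, c)]
```

With toy embeddings such as `{"color": [1.0, 0.1], "colour": [0.95, 0.15], "flavor": [0.1, 1.0]}`, `unify("color", embeddings)` keeps `"colour"` (high embedding similarity plus high edit similarity) while `"flavor"` never survives the stage-1 similarity threshold.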
Application No. 16,034,117 (Art Unit 2193)

Data is received that encapsulates a test case document including a series of test instructions written in natural language for testing a software application. The software application includes a plurality of graphical user interface views (e.g., views in a web browser, etc.). Thereafter, the test case document is parsed using at least one natural language processing algorithm. This parsing includes tagging instructions in the test case document with one of a plurality of pre-defined sequence labels. Subsequently, a test automate is generated using at least one machine learning model trained using historical test case documents, corresponding automates, and their successful executions, and based on the tagged instructions in the test case document. The generated test automate includes one or more test scripts which, when executed, perform a testing sequence of the software application according to the series of test instructions. Related apparatus, systems, techniques and articles are also described.

1. A computer-implemented method comprising:
receiving data encapsulating a test case document including a series of test instructions written in natural language for testing a software application comprising a plurality of graphical user interface views; parsing, using at least one natural language processing algorithm, the test case document by tagging instructions in the test case document with one of a plurality of pre-defined sequence labels; and generating, using at least one machine learning model trained using historical test case documents, corresponding historical test automates, their successful executions, and corresponding document object models (DOMs), a test automate based on the tagged instructions in the test case document, the test automate comprising one or more test scripts which, when executed, perform a testing sequence of the software application according to the series of test instructions. 2. The method of claim 1, wherein the at least one machine learning model is a recurrent neural network trained using a plurality of parsed historical test case documents and their corresponding test automates. 3. The method of claim 1 further comprising: executing the test automate. 4. The method of claim 3 further comprising:
logging, during execution of the test automate, details characterizing performance of the test automate. 5. The method of claim 4 further comprising:
capturing, during execution of the test automate, screenshots of the application at various states. 6. The method of claim 1 further comprising:
determining, during execution of the test automate, that one of the scripts does not execute properly;
identifying, using at least one second machine learning model, an alternate script for the script that does not execute properly;
substituting the alternate script for the script that does not execute properly; and
restarting execution of the test automate using the substituted alternate script. 7. The method of claim 6, wherein the at least one second machine learning model is a recurrent neural network trained using a plurality of historical test automates. 8. The method of claim 7, wherein the determining comprises capturing the document object model (DOM) of the application at the point at which the script does not execute properly, wherein the DOM is used by the at least one second machine learning model to identify the alternate script. 9. The method of claim 1, wherein the application executes in a web browser. 10. The method of claim 1 further comprising:
adaptively modifying the test automate during execution using a self-healing algorithm. 11. A system comprising:
at least one programmable data processor; and memory storing instructions which, when executed by the at least one programmable data processor, result in operations comprising:
receiving data encapsulating a test case document including a series of test instructions written in natural language for testing a software application comprising a plurality of graphical user interface views;
parsing, using at least one natural language processing algorithm, the test case document by tagging instructions in the test case document with one of a plurality of pre-defined sequence labels; and
generating, using at least one machine learning model trained using historical test case documents, corresponding historical test automates, and their successful executions, a test automate based on the tagged instructions in the test case document, the test automate comprising one or more test scripts which, when executed, perform a testing sequence of the software application according to the series of test instructions. 12. The system of claim 11, wherein the at least one machine learning model is a recurrent neural network trained using a plurality of parsed historical test case documents and their corresponding test automates. 13. The system of claim 11, wherein the operations further comprise:
executing the test automate. 14. The system of claim 13, wherein the operations further comprise:
logging, during execution of the test automate, details characterizing performance of the test automate. 15. The system of claim 14, wherein the operations further comprise:
capturing, during execution of the test automate, screenshots of the application at various states. 16. The system of claim 11, wherein the operations further comprise:
determining, during execution of the test automate, that one of the scripts does not execute properly; identifying, using at least one second machine learning model, an alternate script for the script that does not execute properly; substituting the alternate script for the script that does not execute properly; and restarting execution of the test automate using the substituted alternate script. 17. The system of claim 16, wherein the at least one second machine learning model is a recurrent neural network trained using a plurality of historical test automates. 18. The system of claim 17, wherein the determining comprises capturing a document object model (DOM) of the application at the point at which the script does not execute properly, wherein the DOM is used by the at least one second machine learning model to identify the alternate script. 19. The system of claim 11, wherein the operations further comprise:
adaptively modifying the test automate during execution using a self-healing algorithm. 20. A computer-implemented method comprising:
receiving data encapsulating a test case document including a series of test instructions written in natural language for testing a software application comprising a plurality of graphical user interface views; generating, using at least one machine learning model trained using historical test information, a test automate based on the tagged instructions in the test case document, the test automate comprising one or more test scripts which, when executed, perform a testing sequence of the software application according to the series of test instructions; executing the test automate; adaptively modifying, using at least one second machine learning model trained using historical test automates, the test automate during execution of the test automate if an error or failure is detected; and subsequently initiating execution of the modified test automate.

Data is received that encapsulates a test case document including a series of test instructions written in natural language for testing a software application. The software application includes a plurality of graphical user interface views (e.g., views in a web browser, etc.). Thereafter, the test case document is parsed using at least one natural language processing algorithm. This parsing includes tagging instructions in the test case document with one of a plurality of pre-defined sequence labels. Subsequently, a test automate is generated using at least one machine learning model trained using historical test case documents, corresponding automates, and their successful executions, and based on the tagged instructions in the test case document. The generated test automate includes one or more test scripts which, when executed, perform a testing sequence of the software application according to the series of test instructions. Related apparatus, systems, techniques and articles are also described.

1. A computer-implemented method comprising:
receiving data encapsulating a test case document including a series of test instructions written in natural language for testing a software application comprising a plurality of graphical user interface views; parsing, using at least one natural language processing algorithm, the test case document by tagging instructions in the test case document with one of a plurality of pre-defined sequence labels; and generating, using at least one machine learning model trained using historical test case documents, corresponding historical test automates, their successful executions, and corresponding document object models (DOMs), a test automate based on the tagged instructions in the test case document, the test automate comprising one or more test scripts which, when executed, perform a testing sequence of the software application according to the series of test instructions. 2. The method of claim 1, wherein the at least one machine learning model is a recurrent neural network trained using a plurality of parsed historical test case documents and their corresponding test automates. 3. The method of claim 1 further comprising: executing the test automate. 4. The method of claim 3 further comprising:
logging, during execution of the test automate, details characterizing performance of the test automate. 5. The method of claim 4 further comprising:
capturing, during execution of the test automate, screenshots of the application at various states. 6. The method of claim 1 further comprising:
determining, during execution of the test automate, that one of the scripts does not execute properly;
identifying, using at least one second machine learning model, an alternate script for the script that does not execute properly;
substituting the alternate script for the script that does not execute properly; and
restarting execution of the test automate using the substituted alternate script. 7. The method of claim 6, wherein the at least one second machine learning model is a recurrent neural network trained using a plurality of historical test automates. 8. The method of claim 7, wherein the determining comprises capturing the document object model (DOM) of the application at the point at which the script does not execute properly, wherein the DOM is used by the at least one second machine learning model to identify the alternate script. 9. The method of claim 1, wherein the application executes in a web browser. 10. The method of claim 1 further comprising:
adaptively modifying the test automate during execution using a self-healing algorithm. 11. A system comprising:
at least one programmable data processor; and memory storing instructions which, when executed by the at least one programmable data processor, result in operations comprising:
receiving data encapsulating a test case document including a series of test instructions written in natural language for testing a software application comprising a plurality of graphical user interface views;
parsing, using at least one natural language processing algorithm, the test case document by tagging instructions in the test case document with one of a plurality of pre-defined sequence labels; and
generating, using at least one machine learning model trained using historical test case documents, corresponding historical test automates, and their successful executions, a test automate based on the tagged instructions in the test case document, the test automate comprising one or more test scripts which, when executed, perform a testing sequence of the software application according to the series of test instructions. 12. The system of claim 11, wherein the at least one machine learning model is a recurrent neural network trained using a plurality of parsed historical test case documents and their corresponding test automates. 13. The system of claim 11, wherein the operations further comprise:
executing the test automate. 14. The system of claim 13, wherein the operations further comprise:
logging, during execution of the test automate, details characterizing performance of the test automate. 15. The system of claim 14, wherein the operations further comprise:
capturing, during execution of the test automate, screenshots of the application at various states. 16. The system of claim 11, wherein the operations further comprise:
determining, during execution of the test automate, that one of the scripts does not execute properly; identifying, using at least one second machine learning model, an alternate script for the script that does not execute properly; substituting the alternate script for the script that does not execute properly; and restarting execution of the test automate using the substituted alternate script. 17. The system of claim 16, wherein the at least one second machine learning model is a recurrent neural network trained using a plurality of historical test automates. 18. The system of claim 17, wherein the determining comprises capturing a document object model (DOM) of the application at the point at which the script does not execute properly, wherein the DOM is used by the at least one second machine learning model to identify the alternate script. 19. The system of claim 11, wherein the operations further comprise:
adaptively modifying the test automate during execution using a self-healing algorithm. 20. A computer-implemented method comprising:
receiving data encapsulating a test case document including a series of test instructions written in natural language for testing a software application comprising a plurality of graphical user interface views; generating, using at least one machine learning model trained using historical test information, a test automate based on the tagged instructions in the test case document, the test automate comprising one or more test scripts which, when executed, perform a testing sequence of the software application according to the series of test instructions; executing the test automate; adaptively modifying, using at least one second machine learning model trained using historical test automates, the test automate during execution of the test automate if an error or failure is detected; and subsequently initiating execution of the modified test automate. | 2,100
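The parse-then-generate pipeline described in the record above (tag each natural-language instruction with a pre-defined sequence label, then emit a test automate of executable script steps) can be sketched as follows. The patent uses an NLP algorithm and a trained machine learning model; here a simple keyword heuristic stands in for both, and the label set, script templates, and `driver` API are illustrative assumptions, not the patent's actual vocabulary.

```python
# Pre-defined sequence labels, keyed by the leading verb of each instruction
# (hypothetical label set; a trained sequence tagger would replace this).
LABELS = {
    "open": "NAVIGATE",
    "click": "CLICK",
    "enter": "TYPE",
    "verify": "ASSERT",
}

# One script template per label (hypothetical test-runner API).
TEMPLATES = {
    "NAVIGATE": "driver.get({arg!r})",
    "CLICK": "driver.click({arg!r})",
    "TYPE": "driver.type({arg!r})",
    "ASSERT": "assert driver.page_contains({arg!r})",
}

def tag_instructions(instructions):
    """Tag each natural-language instruction with one pre-defined label."""
    tagged = []
    for line in instructions:
        verb, _, rest = line.strip().partition(" ")
        tagged.append((LABELS.get(verb.lower(), "UNKNOWN"), rest))
    return tagged

def generate_automate(tagged):
    """Emit one test-script line per tagged instruction (the 'test automate')."""
    return [TEMPLATES[label].format(arg=arg)
            for label, arg in tagged if label in TEMPLATES]

steps = ["Open the login page", "Enter the user name", "Click the submit button"]
script = generate_automate(tag_instructions(steps))
```

A real implementation would train the tagger and generator on historical test case documents and their corresponding automates, as the claims describe, rather than hard-coding the mapping.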
6,697 | 6,697 | 16,699,610 | 2,194 | A computer system with a first messaging application communicates a message to another computer system with a second messaging application via a coupling facility storage device. If the message does not exceed a predetermined threshold, the message is put onto the queue in the coupling facility. If the message does exceed a predetermined threshold, the message is put onto a log associated with the first messaging application and readable by the second messaging application. A pointer to the message is put onto the queue in the coupling facility. The pointer can be used to access the message in the log. | 1.-25. (canceled) 26. A method, performed within and by a shared storage device separate from first and second messaging computer systems, comprising:
receiving, from the first messaging computer system, a pointer to a first message stored within a first log owned by a first messaging application within the first messaging computer system; storing the pointer in the shared storage device; receiving, from the second messaging computer system, a request for the message; returning, to the second messaging computer system and responsive to the request, the pointer, wherein the first log is accessible by the second messaging computer system using the pointer, and access rights to the first log by the first messaging computer system are greater than access rights to the first log by the second messaging computer system. 27. The method of claim 26, wherein
the first messaging application is configured to access a second log owned by the second messaging computer system; and access rights to the second log by the second messaging computer system are greater than access rights to the second log by the first messaging computer system. 28. The method of claim 26, wherein
the pointer includes information configured to enable the second messaging computer system to access the first message within the first log. 29. The method of claim 28, wherein
the information includes:
(i) a message identifier associated with the first message, and
(ii) a catalogue identifier associated with a catalogue of the first messaging application. 30. The method of claim 26, wherein
the first message exceeds a predetermined threshold, and a second message from the first messaging application and not exceeding the predetermined threshold is stored within the shared storage device. 31. The method of claim 26, wherein
the first and second messaging computer systems are configured to read and write messages to the shared storage device. 32. A shared storage device separate from first and second messaging computer systems, comprising:
a hardware processor configured to initiate the following operations:
receiving, from the first messaging computer system, a pointer to a first message stored within a first log owned by a first messaging application within the first messaging computer system;
storing the pointer in the shared storage device;
receiving, from the second messaging computer system, a request for the message;
returning, to the second messaging computer system and responsive to the request, the pointer, wherein
the first log is accessible by the second messaging computer system using the pointer, and access rights to the first log by the first messaging computer system are greater than access rights to the first log by the second messaging computer system. 33. The shared storage device of claim 32, wherein
the first messaging application is configured to access a second log owned by the second messaging computer system; and access rights to the second log by the second messaging computer system are greater than access rights to the second log by the first messaging computer system. 34. The shared storage device of claim 32, wherein
the pointer includes information configured to enable the second messaging computer system to access the first message within the first log. 35. The shared storage device of claim 34, wherein
the information includes:
(i) a message identifier associated with the first message, and
(ii) a catalogue identifier associated with a catalogue of the first messaging application. 36. The shared storage device of claim 32, wherein
the first message exceeds a predetermined threshold, and a second message from the first messaging application and not exceeding the predetermined threshold is stored within the shared storage device. 37. The shared storage device of claim 32, wherein
the first and second messaging computer systems are configured to read and write messages to the shared storage device. 38. A computer program product, comprising:
a computer usable storage device having stored therein computer usable program code, the computer usable program code, which when executed by a shared storage device separate from first and second messaging computer systems, causes the shared storage device to perform:
receiving, from the first messaging computer system, a pointer to a first message stored within a first log owned by a first messaging application within the first messaging computer system;
storing the pointer in the shared storage device;
receiving, from the second messaging computer system, a request for the message;
returning, to the second messaging computer system and responsive to the request, the pointer, wherein
the first log is accessible by the second messaging computer system using the pointer, and access rights to the first log by the first messaging computer system are greater than access rights to the first log by the second messaging computer system. 39. The computer program product of claim 38, wherein
the first messaging application is configured to access a second log owned by the second messaging computer system; and access rights to the second log by the second messaging computer system are greater than access rights to the second log by the first messaging computer system. 40. The computer program product of claim 38, wherein
the pointer includes information configured to enable the second messaging computer system to access the first message within the first log. 41. The computer program product of claim 40, wherein
the information includes:
(i) a message identifier associated with the first message, and
(ii) a catalogue identifier associated with a catalogue of the first messaging application. 42. The computer program product of claim 38, wherein
the first message exceeds a predetermined threshold, and a second message from the first messaging application and not exceeding the predetermined threshold is stored within the shared storage device. 43. The computer program product of claim 38, wherein
the first and second messaging computer systems are configured to read and write messages to the shared storage device. | A computer system with a first messaging application communicates a message to another computer system with a second messaging application via a coupling facility storage device. If the message does not exceed a predetermined threshold, the message is put onto the queue in the coupling facility. If the message does exceed a predetermined threshold, the message is put onto a log associated with the first messaging application and readable by the second messaging application. A pointer to the message is put onto the queue in the coupling facility. The pointer can be used to access the message in the log.1.-25. (canceled) 26. A method, performed within and by a shared storage device separate from first and second messaging computer systems, comprising:
receiving, from the first messaging computer system, a pointer to a first message stored within a first log owned by a first messaging application within the first messaging computer system; storing the pointer in the shared storage device; receiving, from the second messaging computer system, a request for the message; returning, to the second messaging computer system and responsive to the request, the pointer, wherein the first log is accessible by the second messaging computer system using the pointer, and access rights to the first log by the first messaging computer system are greater than access rights to the first log by the second messaging computer system. 27. The method of claim 26, wherein
the first messaging application is configured to access a second log owned by the second messaging computer system; and access rights to the second log by the second messaging computer system are greater than access rights to the second log by the first messaging computer system. 28. The method of claim 26, wherein
the pointer includes information configured to enable the second messaging computer system to access the first message within the first log. 29. The method of claim 28, wherein
the information includes:
(i) a message identifier associated with the first message, and
(ii) a catalogue identifier associated with a catalogue of the first messaging application. 30. The method of claim 26, wherein
the first message exceeds a predetermined threshold, and a second message from the first messaging application and not exceeding the predetermined threshold is stored within the shared storage device. 31. The method of claim 26, wherein
the first and second messaging computer systems are configured to read and write messages to the shared storage device. 32. A shared storage device separate from first and second messaging computer systems, comprising:
a hardware processor configured to initiate the following operations:
receiving, from the first messaging computer system, a pointer to a first message stored within a first log owned by a first messaging application within the first messaging computer system;
storing the pointer in the shared storage device;
receiving, from the second messaging computer system, a request for the message;
returning, to the second messaging computer system and responsive to the request, the pointer, wherein
the first log is accessible by the second messaging computer system using the pointer, and access rights to the first log by the first messaging computer system are greater than access rights to the first log by the second messaging computer system. 33. The shared storage device of claim 32, wherein
the first messaging application is configured to access a second log owned by the second messaging computer system; and access rights to the second log by the second messaging computer system are greater than access rights to the second log by the first messaging computer system. 34. The shared storage device of claim 32, wherein
the pointer includes information configured to enable the second messaging computer system to access the first message within the first log. 35. The shared storage device of claim 34, wherein
the information includes:
(i) a message identifier associated with the first message, and
(ii) a catalogue identifier associated with a catalogue of the first messaging application. 36. The shared storage device of claim 32, wherein
the first message exceeds a predetermined threshold, and a second message from the first messaging application and not exceeding the predetermined threshold is stored within the shared storage device. 37. The shared storage device of claim 32, wherein
the first and second messaging computer systems are configured to read and write messages to the shared storage device. 38. A computer program product, comprising:
a computer usable storage device having stored therein computer usable program code, the computer usable program code, which when executed by a shared storage device separate from first and second messaging computer systems, causes the shared storage device to perform:
receiving, from the first messaging computer system, a pointer to a first message stored within a first log owned by a first messaging application within the first messaging computer system;
storing the pointer in the shared storage device;
receiving, from the second messaging computer system, a request for the message;
returning, to the second messaging computer system and responsive to the request, the pointer, wherein
the first log is accessible by the second messaging computer system using the pointer, and access rights to the first log by the first messaging computer system are greater than access rights to the first log by the second messaging computer system. 39. The computer program product of claim 38, wherein
the first messaging application is configured to access a second log owned by the second messaging computer system; and access rights to the second log by the second messaging computer system are greater than access rights to the second log by the first messaging computer system. 40. The computer program product of claim 38, wherein
the pointer includes information configured to enable the second messaging computer system to access the first message within the first log. 41. The computer program product of claim 40, wherein
the information includes:
(i) a message identifier associated with the first message, and
(ii) a catalogue identifier associated with a catalogue of the first messaging application. 42. The computer program product of claim 38, wherein
the first message exceeds a predetermined threshold, and a second message from the first messaging application and not exceeding the predetermined threshold is stored within the shared storage device. 43. The computer program product of claim 38, wherein
the first and second messaging computer systems are configured to read and write messages to the shared storage device. | 2,100 |
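The size-thresholded routing in the record above (small messages go on the shared queue directly; large messages stay in the sender's log, with only a pointer on the queue) resembles the claim-check pattern, and can be sketched as below. The threshold value, class names, and in-memory queue standing in for the coupling facility are all illustrative assumptions.

```python
from collections import deque

THRESHOLD = 64  # bytes; illustrative cutoff, not the patent's value

class MessagingSystem:
    """One messaging computer system owning its own log (only the owner writes)."""
    def __init__(self, name):
        self.name = name
        self.log = {}       # message_id -> payload
        self._next_id = 0

    def put(self, queue, payload):
        """Small messages go on the shared queue; large ones are stored in
        this system's log, with only a pointer placed on the queue."""
        if len(payload) <= THRESHOLD:
            queue.append(("inline", payload))
        else:
            msg_id = self._next_id
            self._next_id += 1
            self.log[msg_id] = payload
            # The pointer carries enough information (owner + message id)
            # for the receiver to read the message out of the sender's log.
            queue.append(("pointer", (self, msg_id)))

def get(queue):
    """Receiver side: dereference a pointer entry back to its payload."""
    kind, body = queue.popleft()
    if kind == "inline":
        return body
    owner, msg_id = body
    return owner.log[msg_id]  # receiver has read-only access to the log

shared_queue = deque()        # stands in for the coupling facility queue
sender = MessagingSystem("A")
sender.put(shared_queue, "short")
sender.put(shared_queue, "x" * 200)
```

The asymmetric access rights in the claims (owner writes its log, the peer only reads it) are modeled here simply by the receiver never mutating `owner.log`.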
6,698 | 6,698 | 15,485,148 | 2,113 | Fault injection methods and apparatus are disclosed. An example method includes interjecting a pattern with fault-inducing sub-fields, where the pattern is an expression including a literal string and a wildcard character class, and using the expression to form a subsequent expression that can be used by a target system to detect and trigger on the network at least one transaction that matches the expression. | 1. A method comprising:
interjecting a pattern with fault-inducing sub-fields, where the pattern is an expression including a literal string and a wildcard character class; and using the expression to form a subsequent expression that can be used by a target system to detect and trigger on the network at least one transaction that matches the expression. | Fault injection methods and apparatus are disclosed. An example method includes interjecting a pattern with fault-inducing sub-fields, where the pattern is an expression including a literal string and a wildcard character class, and using the expression to form a subsequent expression that can be used by a target system to detect and trigger on the network at least one transaction that matches the expression.1. A method comprising:
interjecting a pattern with fault-inducing sub-fields, where the pattern is an expression including a literal string and a wildcard character class; and using the expression to form a subsequent expression that can be used by a target system to detect and trigger on the network at least one transaction that matches the expression. | 2,100 |
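The expression described in the record above, a literal string combined with a wildcard character class and fault-inducing sub-fields, can be sketched with Python regular expressions. The field syntax (`name=value`) and character class used here are invented for illustration; the patent does not specify them.

```python
import re

def build_trigger(literal, fault_fields):
    """Form an expression from a literal string plus wildcard character
    classes, one capture group per interjected fault-inducing sub-field,
    that a target system could use to match transactions."""
    wildcard = r"[A-Za-z0-9_-]+"  # wildcard character class for field values
    parts = [re.escape(literal)]
    for field in fault_fields:
        parts.append(re.escape(field) + "=(" + wildcard + ")")
    return re.compile(r"\s+".join(parts))

# Match transactions of the assumed form "TXN account=<id> amount=<n>".
expr = build_trigger("TXN", ["account", "amount"])
match = expr.search("TXN account=12345 amount=99")
```

A matching transaction exposes its sub-field values through the capture groups, which is where fault-inducing values would be detected or injected.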
6,699 | 6,699 | 14,336,782 | 2,199 | In a computer-implemented method for modifying a state of a virtual machine, information between two states of a virtual machine is compared, wherein the two states include a current state of the virtual machine and previous state of the virtual machine. The previous state of the virtual machine is included within a snapshot of the virtual machine at the previous state. Information that is different between the two states is identified. The information that is different between the two states is presented, wherein the information that is different is selectable for copying between the two states. | 1. A computer-implemented method for modifying a state of a virtual machine, the method comprising:
comparing information between two states of a virtual machine, wherein the two states comprise a current state of the virtual machine and previous state of the virtual machine, wherein the previous state of the virtual machine is comprised within a snapshot of the virtual machine at the previous state; identifying information that is different between the two states; and presenting the information that is different between the two states, wherein the information that is different is selectable for copying between the two states. 2. The method of claim 1, wherein the identifying information that is different between the two states comprises:
deploying a second virtual machine of the snapshot of the virtual machine at the previous state. 3. The method of claim 2, wherein the identifying information that is different between the two states further comprises:
comparing file systems of the current state and the previous state. 4. The method of claim 1, further comprising:
responsive to a selection of selected information for copying between the two states, modifying the current state to comprise the selected information. 5. The method of claim 4, wherein the modifying the current state to comprise the selected information comprises:
provided the selected information has associated information in the current state, replacing the associated information in the current state with the selected information. 6. The method of claim 4, wherein the modifying the current state to comprise the selected information comprises:
provided the selected information does not have associated information in the current state, adding the selected information to the current state. 7. The method of claim 4, further comprising:
prior to modifying the current state to comprise the selected information, capturing a snapshot of the current state. 8. The method of claim 4, wherein the previous state of the virtual machine is not modifiable. 9. The method of claim 4, wherein the current state is modified using an agent on the virtual machine. 10. The method of claim 1, wherein the information comprises files of the two states. 11. The method of claim 10, wherein the identifying information that is different between the two states comprises:
identifying files of the two states having a same name and folder path and different properties. 12. The method of claim 11, wherein the properties are selected from a list consisting of:
file size; modification time; and creation time. 13. The method of claim 10, wherein the identifying information that is different between the two states comprises:
identifying files that are missing between the two states. 14. The method of claim 10, wherein the identifying information that is different between the two states comprises:
performing a checksum operation on files of the two states having a same name and folder path; and provided the checksum operation indicates that contents of the files of the two states having a same name and folder path are different, identifying the files as different. 15. A non-transitory computer readable storage medium having computer readable program code stored thereon for causing a computer system to perform a method for modifying a state of a virtual machine, the method comprising:
comparing information between two states of a virtual machine, wherein the two states comprise a current state of the virtual machine and previous state of the virtual machine, wherein the previous state of the virtual machine is comprised within a snapshot of the virtual machine at the previous state; identifying files that are different between the two states; presenting the files that are different between the two states, wherein the files that are different are selectable for copying between the two states; and responsive to a selection of a file for copying between the two states, modifying the current state to comprise the selected file. 16. The computer readable storage medium of claim 15, wherein the modifying the current state to comprise the selected file comprises:
provided the selected information has associated information in the current state, replacing the associated information in the current state with the selected information. 17. The computer readable storage medium of claim 15, wherein the modifying the current state to comprise the selected file comprises:
provided the selected information does not have associated information in the current state, adding the selected information to the current state. 18. The computer readable storage medium of claim 15, wherein the method further comprises:
prior to modifying the current state to comprise the selected file, capturing a snapshot of the current state. 19. A computer-implemented method for modifying a state of a virtual machine, the method comprising:
comparing information between two states of a virtual machine, wherein the two states comprise a current state of the virtual machine and previous state of the virtual machine, wherein the previous state of the virtual machine is comprised within a snapshot of the virtual machine at the previous state; identifying information that is different between the two states, wherein the identifying information that is different between the two states comprises:
deploying a second virtual machine of the snapshot of the virtual machine at the previous state;
comparing file systems of the current state and the previous state;
presenting the information that is different between the two states, wherein the information that is different is selectable for copying between the two states; capturing a snapshot of the current state; responsive to a selection of selected information for copying between the two states, modifying the current state to comprise the selected information, wherein the modifying the current state to comprise the selected information comprises:
provided the selected information has associated information in the current state, replacing the associated information in the current state with the selected information; and
provided the selected information does not have associated information in the current state, adding the selected information to the current state. 20. The method of claim 19, wherein the previous state of the virtual machine is not modifiable. | In a computer-implemented method for modifying a state of a virtual machine, information between two states of a virtual machine is compared, wherein the two states include a current state of the virtual machine and previous state of the virtual machine. The previous state of the virtual machine is included within a snapshot of the virtual machine at the previous state. Information that is different between the two states is identified. The information that is different between the two states is presented, wherein the information that is different is selectable for copying between the two states.1. A computer-implemented method for modifying a state of a virtual machine, the method comprising:
comparing information between two states of a virtual machine, wherein the two states comprise a current state of the virtual machine and previous state of the virtual machine, wherein the previous state of the virtual machine is comprised within a snapshot of the virtual machine at the previous state; identifying information that is different between the two states; and presenting the information that is different between the two states, wherein the information that is different is selectable for copying between the two states. 2. The method of claim 1, wherein the identifying information that is different between the two states comprises:
deploying a second virtual machine of the snapshot of the virtual machine at the previous state. 3. The method of claim 2, wherein the identifying information that is different between the two states further comprises:
comparing file systems of the current state and the previous state. 4. The method of claim 1, further comprising:
responsive to a selection of selected information for copying between the two states, modifying the current state to comprise the selected information. 5. The method of claim 4, wherein the modifying the current state to comprise the selected information comprises:
provided the selected information has associated information in the current state, replacing the associated information in the current state with the selected information. 6. The method of claim 4, wherein the modifying the current state to comprise the selected information comprises:
provided the selected information does not have associated information in the current state, adding the selected information to the current state. 7. The method of claim 4, further comprising:
prior to modifying the current state to comprise the selected information, capturing a snapshot of the current state. 8. The method of claim 4, wherein the previous state of the virtual machine is not modifiable. 9. The method of claim 4, wherein the current state is modified using an agent on the virtual machine. 10. The method of claim 1, wherein the information comprises files of the two states. 11. The method of claim 10, wherein the identifying information that is different between the two states comprises:
identifying files of the two states having a same name and folder path and different properties. 12. The method of claim 11, wherein the properties are selected from a list consisting of:
file size; modification time; and creation time. 13. The method of claim 10, wherein the identifying information that is different between the two states comprises:
identifying files that are missing between the two states. 14. The method of claim 10, wherein the identifying information that is different between the two states comprises:
performing a checksum operation on files of the two states having a same name and folder path; and provided the checksum operation indicates that contents of the files of the two states having a same name and folder path are different, identifying the files as different. 15. A non-transitory computer readable storage medium having computer readable program code stored thereon for causing a computer system to perform a method for modifying a state of a virtual machine, the method comprising:
comparing information between two states of a virtual machine, wherein the two states comprise a current state of the virtual machine and a previous state of the virtual machine, wherein the previous state of the virtual machine is comprised within a snapshot of the virtual machine at the previous state; identifying files that are different between the two states; presenting the files that are different between the two states, wherein the files that are different are selectable for copying between the two states; and responsive to a selection of a file for copying between the two states, modifying the current state to comprise the selected file. 16. The computer readable storage medium of claim 15, wherein the modifying the current state to comprise the selected file comprises:
provided the selected file has an associated file in the current state, replacing the associated file in the current state with the selected file. 17. The computer readable storage medium of claim 15, wherein the modifying the current state to comprise the selected file comprises:
provided the selected file does not have an associated file in the current state, adding the selected file to the current state. 18. The computer readable storage medium of claim 15, wherein the method further comprises:
prior to modifying the current state to comprise the selected file, capturing a snapshot of the current state. 19. A computer-implemented method for modifying a state of a virtual machine, the method comprising:
comparing information between two states of a virtual machine, wherein the two states comprise a current state of the virtual machine and a previous state of the virtual machine, wherein the previous state of the virtual machine is comprised within a snapshot of the virtual machine at the previous state; identifying information that is different between the two states, wherein the identifying information that is different between the two states comprises:
deploying a second virtual machine of the snapshot of the virtual machine at the previous state;
comparing file systems of the current state and the previous state;
presenting the information that is different between the two states, wherein the information that is different is selectable for copying between the two states; capturing a snapshot of the current state; responsive to a selection of selected information for copying between the two states, modifying the current state to comprise the selected information, wherein the modifying the current state to comprise the selected information comprises:
provided the selected information has associated information in the current state, replacing the associated information in the current state with the selected information; and
provided the selected information does not have associated information in the current state, adding the selected information to the current state. 20. The method of claim 19, wherein the previous state of the virtual machine is not modifiable.
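The comparison and copy steps recited in the claims above (identifying files with the same name and folder path but different properties, identifying missing files, checksum comparison, and replace-or-add copying into the current state) can be sketched as follows. This is an illustrative approximation only, not the patent's implementation; the function names `diff_states` and `copy_selected` are hypothetical, and the two states are modeled simply as two directory trees.

```python
# Hedged sketch of the claimed file-diff technique: two file-system states
# (a "previous" snapshot tree and a "current" tree) are compared by relative
# path. Files present in both but differing in size or content checksum are
# flagged as modified; files absent from one side are flagged as missing.
# All names here are illustrative, not drawn from the patent.
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def diff_states(previous: Path, current: Path) -> dict:
    """Identify files that are different between two states."""
    prev_files = {p.relative_to(previous) for p in previous.rglob("*") if p.is_file()}
    curr_files = {p.relative_to(current) for p in current.rglob("*") if p.is_file()}
    diff = {
        "missing_from_current": sorted(str(p) for p in prev_files - curr_files),
        "missing_from_previous": sorted(str(p) for p in curr_files - prev_files),
        "modified": [],
    }
    # Files with the same name and folder path: compare a cheap property
    # (file size) first, then fall back to a content checksum.
    for rel in prev_files & curr_files:
        a, b = previous / rel, current / rel
        if a.stat().st_size != b.stat().st_size or checksum(a) != checksum(b):
            diff["modified"].append(str(rel))
    diff["modified"].sort()
    return diff

def copy_selected(previous: Path, current: Path, selected: str) -> None:
    """Modify the current state to comprise the selected file: the write
    replaces an existing associated file, or adds the file if absent."""
    src, dst = previous / selected, current / selected
    dst.parent.mkdir(parents=True, exist_ok=True)
    dst.write_bytes(src.read_bytes())
```

Note that modification and creation times, which the claims also list as comparable properties, are deliberately omitted from the modified-file test here: copying a file between states typically changes its timestamps without changing its contents, so size plus checksum is the more conservative equality test in this sketch.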