| Unnamed: 0 (int64, 0-350k) | level_0 (int64, 0-351k) | ApplicationNumber (int64, 9.75M-96.1M) | ArtUnit (int64, 1.6k-3.99k) | Abstract (string, 1-8.37k chars) | Claims (string, 3-292k chars) | abstract-claims (string, 68-293k chars) | TechCenter (int64, 1.6k-3.9k) |
|---|---|---|---|---|---|---|---|
5,900 | 5,900 | 15,010,141 | 2,122 | A pseudo-relevance feedback (PRF) system is disclosed that determines an optimized relevance model for a search query by utilizing a posterior relevance model to estimate the likelihood that an initial set of top-K retrieved documents would be retrieved given the posterior relevance model, re-ranking the top-K documents based on their respective estimates of likelihood of retrieval, determining a rank similarity between the initial ranking of the top-K documents and the re-ranking of the top-K documents, updating one or more model parameters of the posterior relevance model based on the rank similarity, and iteratively performing the above process until the rank similarity is maximized, at which point, the optimized relevance model is obtained. | 1. A method for enhancing robustness of pseudo-relevance feedback models using query drift minimization, the method comprising:
determining, by a computer processor, a set of search results returned for a search query, wherein the set of search results is ranked in accordance with a first ranking; determining, by the computer processor, a first relevance model; determining, by the computer processor and based at least in part on the first relevance model, a respective probability of retrieval of each search result in the set of search results; determining, by the computer processor, a second ranking for the set of search results based at least in part on the respective probability of retrieval of each result in the set of search results; determining, by the computer processor, a rank similarity between the first ranking and the second ranking; determining, by the computer processor, a second relevance model by updating at least one model parameter of the first relevance model based at least in part on the rank similarity; and determining, by the computer processor, that the second relevance model is an optimized relevance model for the search query based at least in part on the rank similarity, wherein query drift is minimized without performing query anchoring of the search query. 2. The method of claim 1, wherein determining that the second relevance model is an optimized relevance model for the search query comprises determining that the rank similarity is maximized. 3. The method of claim 2, wherein determining that the rank similarity is maximized comprises determining that the first ranking is equivalent to the second ranking. 4. The method of claim 1, wherein the respective probability of retrieval is a first respective probability of retrieval, the rank similarity is a first rank similarity, and the at least one model parameter is a first at least one model parameter, the method further comprising prior to determining the first relevance model:
determining, by the computer processor, a third relevance model; determining, by the computer processor and based at least in part on the third relevance model, a second respective probability of retrieval of each search result in the set of search results; determining, by the computer processor, a third ranking for the set of search results based at least in part on the second respective probability of retrieval of each result in the set of search results; determining, by the computer processor, a second rank similarity between the first ranking and the third ranking; and determining, by the computer processor, the first relevance model by updating a second at least one model parameter of the third relevance model based at least in part on the second rank similarity. 5. The method of claim 1, wherein the respective probability of retrieval is a first respective probability of retrieval, the rank similarity is a first rank similarity, and the at least one model parameter is a first at least one model parameter, the method further comprising subsequent to determining the second relevance model:
determining, by the computer processor and based at least in part on the second relevance model, a second respective probability of retrieval of each search result in the set of search results; determining, by the computer processor, a third ranking for the set of search results based at least in part on the second respective probability of retrieval of each result in the set of search results; determining, by the computer processor, a second rank similarity between the first ranking and the third ranking; and determining, by the computer processor, that the second rank similarity does not exceed the first rank similarity. 6. The method of claim 1, wherein determining the respective probability of retrieval of each search result in the set of search results comprises determining the respective probability of retrieval of a first result in the set of search results, and wherein determining the respective probability of retrieval of the first result comprises:
determining, by the computer processor, a set of terms in the first search result; determining, by the computer processor, a respective frequency of occurrence of each term in the set of terms; determining, by the computer processor and based at least in part on the first relevance model, a respective probability of retrieval of each term in the set of terms; and determining, by the computer processor, the respective probability of retrieval of the first result based at least in part on the respective frequency of each term in the set of terms and the respective probability of retrieval of each term in the set of terms. 7. (canceled) 8. A system for enhancing robustness of pseudo-relevance feedback models using query drift minimization, the system comprising:
at least one memory storing computer-executable instructions; and at least one processor configured to access the at least one memory and execute the computer-executable instructions to: determine a set of search results returned for a search query, wherein the set of search results is ranked in accordance with a first ranking; determine a first relevance model; determine, based at least in part on the first relevance model, a respective probability of retrieval of each search result in the set of search results; determine a second ranking for the set of search results based at least in part on the respective probability of retrieval of each result in the set of search results; determine a rank similarity between the first ranking and the second ranking; determine a second relevance model by updating at least one model parameter of the first relevance model based at least in part on the rank similarity; and determine that the second relevance model is an optimized relevance model for the search query based at least in part on the rank similarity, wherein query drift is minimized without performing query anchoring of the search query. 9. The system of claim 8, wherein the at least one processor is configured to determine that the second relevance model is an optimized relevance model for the search query by executing the computer-executable instructions to determine that the rank similarity is maximized. 10. The system of claim 9, wherein the at least one processor is configured to determine that the rank similarity is maximized by executing the computer-executable instructions to determine that the first ranking is equivalent to the second ranking. 11. 
The system of claim 8, wherein the respective probability of retrieval is a first respective probability of retrieval, the rank similarity is a first rank similarity, and the at least one model parameter is a first at least one model parameter, and wherein, prior to determining the first relevance model, the at least one processor is further configured to execute the computer-executable instructions to:
determine a third relevance model; determine, based at least in part on the third relevance model, a second respective probability of retrieval of each search result in the set of search results; determine a third ranking for the set of search results based at least in part on the second respective probability of retrieval of each result in the set of search results; determine a second rank similarity between the first ranking and the third ranking; and determine the first relevance model by updating a second at least one model parameter of the third relevance model based at least in part on the second rank similarity. 12. The system of claim 8, wherein the respective probability of retrieval is a first respective probability of retrieval, the rank similarity is a first rank similarity, and the at least one model parameter is a first at least one model parameter, and wherein, subsequent to determining the second relevance model, the at least one processor is further configured to execute the computer-executable instructions to:
determine, based at least in part on the second relevance model, a second respective probability of retrieval of each search result in the set of search results; determine a third ranking for the set of search results based at least in part on the second respective probability of retrieval of each result in the set of search results; determine a second rank similarity between the first ranking and the third ranking; and determine that the second rank similarity does not exceed the first rank similarity. 13. The system of claim 8, wherein the at least one processor is configured to determine the respective probability of retrieval of each search result in the set of search results by executing the computer-executable instructions to determine the respective probability of retrieval of a first result in the set of search results, and wherein the at least one processor is configured to determine the respective probability of retrieval of the first result by executing the computer-executable instructions to:
determine a set of terms in the first search result; determine a respective frequency of occurrence of each term in the set of terms; determine, based at least in part on the first relevance model, a respective probability of retrieval of each term in the set of terms; and determine the respective probability of retrieval of the first result based at least in part on the respective frequency of each term in the set of terms and the respective probability of retrieval of each term in the set of terms. 14. (canceled) 15. A computer program product for enhancing robustness of pseudo-relevance feedback models using query drift minimization, the computer program product comprising a non-transitory storage medium readable by a processing circuit, the storage medium storing instructions executable by the processing circuit to cause a method to be performed, the method comprising:
determining a set of search results returned for a search query, wherein the set of search results is ranked in accordance with a first ranking; determining a first relevance model; determining, based at least in part on the first relevance model, a respective probability of retrieval of each search result in the set of search results; determining a second ranking for the set of search results based at least in part on the respective probability of retrieval of each result in the set of search results; determining a rank similarity between the first ranking and the second ranking; determining a second relevance model by updating at least one model parameter of the first relevance model based at least in part on the rank similarity; and determining that the second relevance model is an optimized relevance model for the search query based at least in part on the rank similarity, wherein query drift is minimized without performing query anchoring of the search query. 16. The computer program product of claim 15, wherein determining that the second relevance model is an optimized relevance model for the search query comprises determining that the rank similarity is maximized. 17. The computer program product of claim 16, wherein determining that the rank similarity is maximized comprises determining that the first ranking is equivalent to the second ranking. 18. The computer program product of claim 15, wherein the respective probability of retrieval is a first respective probability of retrieval, the rank similarity is a first rank similarity, and the at least one model parameter is a first at least one model parameter, the method further comprising prior to determining the first relevance model:
determining a third relevance model; determining, based at least in part on the third relevance model, a second respective probability of retrieval of each search result in the set of search results; determining a third ranking for the set of search results based at least in part on the second respective probability of retrieval of each result in the set of search results; determining a second rank similarity between the first ranking and the third ranking; and determining the first relevance model by updating a second at least one model parameter of the third relevance model based at least in part on the second rank similarity. 19. The computer program product of claim 15, wherein the respective probability of retrieval is a first respective probability of retrieval, the rank similarity is a first rank similarity, and the at least one model parameter is a first at least one model parameter, the method further comprising subsequent to determining the second relevance model:
determining, based at least in part on the second relevance model, a second respective probability of retrieval of each search result in the set of search results; determining a third ranking for the set of search results based at least in part on the second respective probability of retrieval of each result in the set of search results; determining a second rank similarity between the first ranking and the third ranking; and determining that the second rank similarity does not exceed the first rank similarity. 20. The computer program product of claim 15, wherein determining the respective probability of retrieval of each search result in the set of search results comprises determining the respective probability of retrieval of a first result in the set of search results, and wherein determining the respective probability of retrieval of the first result comprises:
determining a set of terms in the first search result; determining a respective frequency of occurrence of each term in the set of terms; determining, based at least in part on the first relevance model, a respective probability of retrieval of each term in the set of terms; and determining the respective probability of retrieval of the first result based at least in part on the respective frequency of each term in the set of terms and the respective probability of retrieval of each term in the set of terms. | (concatenation of the Abstract and Claims cells above) | 2,100 |
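The iterative loop described in the record above (score the top-K documents under a parameterized relevance model, re-rank, compare the re-ranking to the initial ranking, and update the model until rank similarity is maximized) can be sketched as follows. This is an illustrative reading, not the claimed method: Kendall's tau as the rank-similarity measure, a unigram relevance model, and a single interpolation parameter `lam` as the updated model parameter are all assumptions.

```python
import math
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Rank similarity between two orderings of the same items (1.0 = identical)."""
    pos_a = {d: i for i, d in enumerate(rank_a)}
    pos_b = {d: i for i, d in enumerate(rank_b)}
    pairs = list(combinations(rank_a, 2))
    concordant = sum(
        1 for x, y in pairs
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0
    )
    return 2.0 * concordant / len(pairs) - 1.0

def retrieval_log_prob(doc_terms, model, lam):
    """Log-probability of retrieving a document under the relevance model,
    interpolated with a small floor so unseen terms do not zero out the score."""
    return sum(
        freq * math.log(max(lam * model.get(term, 0.0) + (1.0 - lam) * 1e-6, 1e-12))
        for term, freq in doc_terms.items()
    )

def optimize_relevance_model(initial_ranking, docs, model, lam=0.5, step=0.1, max_iters=20):
    """Iteratively adjust lam to maximize rank similarity with the initial ranking."""
    best_lam, best_sim = lam, -1.0
    for _ in range(max_iters):
        scores = {d: retrieval_log_prob(docs[d], model, lam) for d in initial_ranking}
        reranked = sorted(initial_ranking, key=lambda d: scores[d], reverse=True)
        sim = kendall_tau(initial_ranking, reranked)
        if sim > best_sim:
            best_lam, best_sim = lam, sim
        if sim == 1.0:  # rankings equivalent: rank similarity is maximized
            break
        lam = min(1.0, lam + step)
    return best_lam, best_sim
```

Because the objective compares the model's re-ranking against the initial ranking itself, the loop needs no query anchoring, which matches the "query drift is minimized without performing query anchoring" limitation in claim 1.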
5,901 | 5,901 | 15,659,136 | 2,135 | A method of storing a set of data representing a point cloud, comprising: creating an array in a digital memory having cells addressable by reference to at least one index, wherein each of the at least one indices has a predetermined correspondence to a geometric location within the point cloud; and storing a value of the data set in each of the cells. | 1. A method of storing a set of data representing a point cloud, comprising:
creating an array in a digital memory having cells addressable by reference to at least one index, wherein each of the at least one indices has a predetermined correspondence to a geometric location within the point cloud; and storing a value of the data set in each of the cells. 2. The method of claim 1 wherein the array includes two or more indices, corresponding to two or more physical dimensions, wherein the two or more indices cooperatively define specific geometric locations within the point cloud. 3. The method of claim 1 wherein the array includes first and second indices, corresponding to first and second mutually perpendicular axes. 4. The method of claim 1 wherein the at least one index is mapped to a digital memory address. 5. The method of claim 1 wherein the value stored in each cell is a scalar quantity. 6. The method of claim 1 further comprising multiplying a physical dimension by a scalar multiplication factor to determine a value of the at least one index. 7. The method of claim 1 where the point cloud is representative of a physical manufacturing process. 8. The method of claim 1 where the point cloud is representative of an additive manufacturing process. 9. A method of storing a set of data representing a point cloud, comprising:
using a sensor to collect data values from a physical process, each data value corresponding to a specific geometric location which is defined by reference to one or more physical coordinate indices; multiplying each of the one or more physical coordinate indices by a predetermined scalar multiplication factor, so as to produce one or more array indices; creating an array in a digital memory having cells addressable by reference to the one or more array indices; and for each specific geometric location within the point cloud, storing the sensed data value in the corresponding cell of the array. 10. The method of claim 9 wherein the array includes two or more indices, corresponding to two or more physical dimensions, wherein the two or more indices cooperatively define specific geometric locations within the point cloud. 11. The method of claim 9 wherein the array includes first and second indices, corresponding to first and second mutually perpendicular axes. 12. The method of claim 9 wherein the one or more array indices are mapped to digital memory addresses. 13. The method of claim 9 wherein the value stored in each cell is a scalar quantity. 14. The method of claim 9 where the sensed data is collected from a physical manufacturing process. 15. The method of claim 9 where the sensed data is collected from an additive manufacturing process. 16. A data structure for storing a data set representing a point cloud, comprising:
an array having a first index and a second index each correlated to a physical value; and a plurality of cells addressable by the indices, each cell containing a value of the data set. 17. The data structure of claim 16 stored in a digital memory. 18. The data structure of claim 16 further comprising a multiplier correlating the indices to physical measurements. | A method of storing a set of data representing a point cloud, comprising: creating an array in a digital memory having cells addressable by reference to at least one index, wherein each of the at least one indices has a predetermined correspondence to a geometric location within the point cloud; and storing a value of the data set in each of the cells.1. A method of storing a set of data representing a point cloud, comprising:
creating an array in a digital memory having cells addressable by reference to at least one index, wherein each of the at least one indices has a predetermined correspondence to a geometric location within the point cloud; and storing a value of the data set in each of the cells. 2. The method of claim 1 wherein the array includes two or more indices, corresponding to two or more physical dimensions, wherein the two or more indices cooperatively define specific geometric locations within the point cloud. 3. The method of claim 1 wherein the array includes first and second indices, corresponding to first and second mutually perpendicular axes. 4. The method of claim 1 wherein the at least one index is mapped to a digital memory address. 5. The method of claim 1 wherein the value stored in each cell is a scalar quantity. 6. The method of claim 1 further comprising multiplying a physical dimension by a scalar multiplication factor to determine a value of the at least one index. 7. The method of claim 1 where the point cloud is representative of a physical manufacturing process. 8. The method of claim 1 where the point cloud is representative of an additive manufacturing process. 9. A method of storing a set of data representing a point cloud, comprising:
using a sensor to collect data values from a physical process, each data value corresponding to a specific geometric location which is defined by reference to one or more physical coordinate indices; multiplying each of the one or more physical coordinate indices by a predetermined scalar multiplication factor, so as to produce one or more array indices; creating an array in a digital memory having cells addressable by reference to the one or more array indices; and for each specific geometric location within the point cloud, storing the sensed data value in the corresponding cell of the array. 10. The method of claim 9 wherein the array includes two or more indices, corresponding to two or more physical dimensions, wherein the two or more indices cooperatively define specific geometric locations within the point cloud. 11. The method of claim 9 wherein the array includes first and second indices, corresponding to first and second mutually perpendicular axes. 12. The method of claim 9 wherein the one or more array indices are mapped to digital memory addresses. 13. The method of claim 9 wherein the value stored in each cell is a scalar quantity. 14. The method of claim 9 where the sensed data is collected from a physical manufacturing process. 15. The method of claim 9 where the sensed data is collected from an additive manufacturing process. 16. A data structure for storing a data set representing a point cloud, comprising:
an array having a first index and a second index each correlated to a physical value; and a plurality of cells addressable by the indices, each cell containing a value of the data set. 17. The data structure of claim 16 stored in a digital memory. 18. The data structure of claim 16 further comprising a multiplier correlating the indices to physical measurements. | 2,100 |
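The point-cloud claims in the row above describe multiplying physical coordinates by a predetermined scalar factor to produce array indices, then storing each sensed scalar value in the addressed cell. As an illustrative aside (not part of the dataset), a minimal Python sketch of that indexing scheme, assuming a 2-D grid and hypothetical names throughout:

```python
# Sketch of the claimed scheme: physical coordinates * scale -> array
# indices; sensed scalar values stored in the addressed cells.
# Function and variable names are illustrative, not from the application.

def store_point_cloud(samples, scale, shape):
    """samples: iterable of (x, y, value) in physical units;
    scale: array indices per physical unit (the scalar multiplication factor)."""
    grid = [[0.0] * shape[1] for _ in range(shape[0])]  # 2-D array of cells
    for x, y, value in samples:
        i = int(x * scale)  # first index <- first physical axis
        j = int(y * scale)  # second index <- second, perpendicular axis
        grid[i][j] = value  # store the scalar value in the addressed cell
    return grid

# Example: two sensed samples at (0.0, 0.0) and (2.0, 1.0) physical units,
# with 2 indices per unit, land in cells (0, 0) and (4, 2).
samples = [(0.0, 0.0, 1.5), (2.0, 1.0, 3.25)]
grid = store_point_cloud(samples, scale=2, shape=(8, 8))
```

The two indices cooperatively address one cell per geometric location, mirroring claims 2-3 and 10-11.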
5,902 | 5,902 | 15,035,725 | 2,129 | A method of manufacturing a drill bit or other oil-field tool includes aligning a cutting element support structure or a support structure for an alternative tool element to an image of a computer-generated model of a drill bit assembly or a computer-generated model of an alternative tool assembly. The method further includes placing a tool element on the support structure and comparing the placement of the tool element on the support structure to the placement of a model of the tool element on the image of the model of the tool assembly using real-time continuous deviation feedback. The method also includes adjusting the placement of the tool element on the element support structure to match the placement of the model of the tool element on the image of the model of the tool assembly and joining the tool element to the support structure. | 1. A method of manufacturing a drill bit assembly, the method comprising:
positioning a cutting element on a cutting element support structure of the drill bit assembly; using an imaging system to acquire and compare a relative position of the cutting element on the cutting element support structure to a relative position of a computer-generated model of the cutting element on a computer-generated model of the cutting element support structure using real-time, continuous visual feedback; in response to the real-time, continuous visual feedback, adjusting a position of the cutting element on the cutting element support structure to an adjusted position that matches the relative position of the computer-generated model of the cutting element on the computer-generated model of the cutting element support structure; and joining the cutting element to the cutting element support structure at the adjusted position. 2. The method of claim 1, the method further comprising:
scanning the cutting element support structure to generate an image of the cutting element support structure; establishing measurement points on the cutting element support structure based on the image of the cutting element support structure; correlating the measurement points to reference points on the computer-generated model of the cutting element support structure; and comparing the measurement points to the reference points. 3. The method of claim 1, the method further comprising:
scanning the cutting element to generate an image of the cutting element; establishing second measurement points on the cutting element based on the image of the cutting element; correlating the second measurement points to second reference points on the computer-generated model of the cutting element support structure; and comparing the second measurement points to the second reference points. 4. The method of claim 3, wherein comparing the second measurement points to the second reference points comprises generating a linear measurement that indicates a difference between a location of a second measurement point and a second reference point. 5. The method of claim 3, wherein comparing the second measurement points to the second reference points comprises generating a volumetric measurement that indicates a difference between a location of a second measurement point and a second reference point. 6. The method of claim 3, wherein comparing the second measurement points to the second reference points comprises generating a surface area measurement that indicates a difference between a location of a second measurement point and a second reference point. 7. The method of claim 1, the method further comprising:
scanning the cutting element support structure and detecting measurement points on the cutting element support structure, the measurement points corresponding to reference points on the computer-generated model of the cutting element support structure; and scanning the cutting element and detecting second measurement points on the cutting element, the second measurement points corresponding to second reference points on the computer-generated model of the cutting element support structure; wherein positioning the cutting element on the cutting element support structure comprises viewing a continuous, live video feed showing deviations between the second measurement points and second reference points. 8. A method of manufacturing an oil-field tool, the method comprising:
positioning an oil-field tool element on a support structure of the oil-field tool; using an imaging system to acquire and compare a relative position of the oil-field tool element on the support structure to a relative position of a computer-generated model of the oil-field tool element on a computer-generated model of the support structure using a live video image of the oil-field tool element relative to the computer-generated model of the oil-field tool; in response to the live video image, adjusting a position of the oil-field tool element on the support structure to an adjusted position that matches the relative position of the computer-generated model of the oil-field tool element on the computer-generated model of the support structure; and joining the oil-field tool element to the support structure at the adjusted position. 9. The method of claim 8, wherein the live video image comprises a volumetric measurement indicating a difference between the relative position of the oil-field tool element on the support structure and the relative position of the computer-generated model of the oil-field tool element on the computer-generated model of the support structure. 10. The method of claim 8, wherein the live video image comprises a surface area measurement indicating a difference between the relative position of the oil-field tool element on the support structure and the relative position of the computer-generated model of the oil-field tool element on the computer-generated model of the support structure. 11. The method of claim 8, wherein the live video image comprises a vector measurement indicating a distance between a measurement point of the oil-field tool element and a reference point of the computer-generated model of the support structure. 12. 
The method of claim 11, wherein the live video image comprises a plurality of vector measurements corresponding to distances between a plurality of measurement points of the oil-field tool element and a plurality of reference points of the computer-generated model of the support structure. 13. The method of claim 12, further comprising comparing at least one of the vector measurements to a predetermined threshold and generating an alarm in response to determining that the calculated vector measurement is greater than the predetermined threshold. 14. The method of claim 13, wherein the alarm comprises a visual signal. 15. A system for manufacturing a drill bit, the system comprising:
a control system having a processor, a memory, a power source, and an input-output subsystem, the input-output subsystem comprising at least one camera and at least one projector operable to illuminate and scan an image of a cutting element support structure and an image of a cutting element, and at least one display operable to display a continuous video image that is indicative of a position of the cutting element on the cutting element support structure in real time; wherein the control system is operable to receive the scanned image of the cutting element and generate a plurality of measurement points, each of the plurality of measurement points corresponding to a location on the cutting element and a reference point on a computer-generated model of the cutting element support structure; and the control system is further operable to generate a live, continuous video image signal to the display showing the position of each measurement point relative to each reference point. 16. The system of claim 15, wherein the control system is further operable to compute distances between each measurement point and each reference point. 17. The system of claim 16, wherein the processor is operable to communicate a live, continuous video image to the display, the continuous video image including the computed distance between at least one of the measurement points and at least one of the corresponding reference points. 18. The system of claim 16, wherein the processor is operable to compare the computed distances between each measurement point and each reference point to a predetermined threshold, and to generate a signal indicating that at least one of the computed distances is greater than the predetermined threshold. 19. The system of claim 18, further comprising a speaker operable to generate an audible alarm in response to receiving a signal from the control system indicating that at least one of the computed distances is greater than the predetermined threshold. 20. 
The system of claim 18, wherein the display is operable to generate a visual indicator in response to receiving a signal from the control system indicating that at least one of the computed distances is greater than the predetermined threshold. | A method of manufacturing a drill bit or other oil-field tool includes aligning a cutting element support structure or a support structure for an alternative tool element to an image of a computer-generated model of a drill bit assembly or a computer-generated model of an alternative tool assembly. The method further includes placing a tool element on the support structure and comparing the placement of the tool element on the support structure to the placement of a model of the tool element on the image of the model of the tool assembly using real-time continuous deviation feedback. The method also includes adjusting the placement of the tool element on the element support structure to match the placement of the model of the tool element on the image of the model of the tool assembly and joining the tool element to the support structure.1. A method of manufacturing a drill bit assembly, the method comprising:
positioning a cutting element on a cutting element support structure of the drill bit assembly; using an imaging system to acquire and compare a relative position of the cutting element on the cutting element support structure to a relative position of a computer-generated model of the cutting element on a computer-generated model of the cutting element support structure using real-time, continuous visual feedback; in response to the real-time, continuous visual feedback, adjusting a position of the cutting element on the cutting element support structure to an adjusted position that matches the relative position of the computer-generated model of the cutting element on the computer-generated model of the cutting element support structure; and joining the cutting element to the cutting element support structure at the adjusted position. 2. The method of claim 1, the method further comprising:
scanning the cutting element support structure to generate an image of the cutting element support structure; establishing measurement points on the cutting element support structure based on the image of the cutting element support structure; correlating the measurement points to reference points on the computer-generated model of the cutting element support structure; and comparing the measurement points to the reference points. 3. The method of claim 1, the method further comprising:
scanning the cutting element to generate an image of the cutting element; establishing second measurement points on the cutting element based on the image of the cutting element; correlating the second measurement points to second reference points on the computer-generated model of the cutting element support structure; and comparing the second measurement points to the second reference points. 4. The method of claim 3, wherein comparing the second measurement points to the second reference points comprises generating a linear measurement that indicates a difference between a location of a second measurement point and a second reference point. 5. The method of claim 3, wherein comparing the second measurement points to the second reference points comprises generating a volumetric measurement that indicates a difference between a location of a second measurement point and a second reference point. 6. The method of claim 3, wherein comparing the second measurement points to the second reference points comprises generating a surface area measurement that indicates a difference between a location of a second measurement point and a second reference point. 7. The method of claim 1, the method further comprising:
scanning the cutting element support structure and detecting measurement points on the cutting element support structure, the measurement points corresponding to reference points on the computer-generated model of the cutting element support structure; and scanning the cutting element and detecting second measurement points on the cutting element, the second measurement points corresponding to second reference points on the computer-generated model of the cutting element support structure; wherein positioning the cutting element on the cutting element support structure comprises viewing a continuous, live video feed showing deviations between the second measurement points and second reference points. 8. A method of manufacturing an oil-field tool, the method comprising:
positioning an oil-field tool element on a support structure of the oil-field tool; using an imaging system to acquire and compare a relative position of the oil-field tool element on the support structure to a relative position of a computer-generated model of the oil-field tool element on a computer-generated model of the support structure using a live video image of the oil-field tool element relative to the computer-generated model of the oil-field tool; in response to the live video image, adjusting a position of the oil-field tool element on the support structure to an adjusted position that matches the relative position of the computer-generated model of the oil-field tool element on the computer-generated model of the support structure; and joining the oil-field tool element to the support structure at the adjusted position. 9. The method of claim 8, wherein the live video image comprises a volumetric measurement indicating a difference between the relative position of the oil-field tool element on the support structure and the relative position of the computer-generated model of the oil-field tool element on the computer-generated model of the support structure. 10. The method of claim 8, wherein the live video image comprises a surface area measurement indicating a difference between the relative position of the oil-field tool element on the support structure and the relative position of the computer-generated model of the oil-field tool element on the computer-generated model of the support structure. 11. The method of claim 8, wherein the live video image comprises a vector measurement indicating a distance between a measurement point of the oil-field tool element and a reference point of the computer-generated model of the support structure. 12. 
The method of claim 11, wherein the live video image comprises a plurality of vector measurements corresponding to distances between a plurality of measurement points of the oil-field tool element and a plurality of reference points of the computer-generated model of the support structure. 13. The method of claim 12, further comprising comparing at least one of the vector measurements to a predetermined threshold and generating an alarm in response to determining that the calculated vector measurement is greater than the predetermined threshold. 14. The method of claim 13, wherein the alarm comprises a visual signal. 15. A system for manufacturing a drill bit, the system comprising:
a control system having a processor, a memory, a power source, and an input-output subsystem, the input-output subsystem comprising at least one camera and at least one projector operable to illuminate and scan an image of a cutting element support structure and an image of a cutting element, and at least one display operable to display a continuous video image that is indicative of a position of the cutting element on the cutting element support structure in real time; wherein the control system is operable to receive the scanned image of the cutting element and generate a plurality of measurement points, each of the plurality of measurement points corresponding to a location on the cutting element and a reference point on a computer-generated model of the cutting element support structure; and the control system is further operable to generate a live, continuous video image signal to the display showing the position of each measurement point relative to each reference point. 16. The system of claim 15, wherein the control system is further operable to compute distances between each measurement point and each reference point. 17. The system of claim 16, wherein the processor is operable to communicate a live, continuous video image to the display, the continuous video image including the computed distance between at least one of the measurement points and at least one of the corresponding reference points. 18. The system of claim 16, wherein the processor is operable to compare the computed distances between each measurement point and each reference point to a predetermined threshold, and to generate a signal indicating that at least one of the computed distances is greater than the predetermined threshold. 19. The system of claim 18, further comprising a speaker operable to generate an audible alarm in response to receiving a signal from the control system indicating that at least one of the computed distances is greater than the predetermined threshold. 20. 
The system of claim 18, wherein the display is operable to generate a visual indicator in response to receiving a signal from the control system indicating that at least one of the computed distances is greater than the predetermined threshold. | 2,100 |
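The drill-bit manufacturing claims in the row above turn on computing distances between scanned measurement points and model reference points and raising an alarm when any distance exceeds a predetermined threshold (claims 11-13 and 16-19). As an illustrative aside (not part of the dataset), a minimal Python sketch of that deviation check, with hypothetical names and 3-D points:

```python
import math

def deviation_report(measured, reference, threshold):
    """Compare scanned measurement points against model reference points.
    measured, reference: lists of (x, y, z) tuples, paired by position;
    threshold: maximum allowed deviation before an alarm is raised.
    Names are illustrative, not from the application."""
    # One vector-measurement distance per (measurement point, reference point) pair
    distances = [math.dist(m, r) for m, r in zip(measured, reference)]
    # Alarm if any computed distance exceeds the predetermined threshold
    alarm = any(d > threshold for d in distances)
    return distances, alarm

# Example: the second scanned point sits 0.5 units off its reference,
# exceeding a 0.3-unit threshold, so the alarm condition is met.
measured = [(0.0, 0.1, 0.0), (1.0, 1.0, 0.5)]
reference = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
distances, alarm = deviation_report(measured, reference, threshold=0.3)
```

In the claimed system this comparison runs continuously against a live video feed; the sketch only shows the per-frame geometry.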
5,903 | 5,903 | 14,628,093 | 2,159 | Methods, systems, and media for presenting search results are provided. In accordance with some embodiments, the method comprises: receiving text corresponding to a search query; determining whether a content rating score associated with the search query is below a predetermined threshold, wherein the score is calculated by: identifying a first plurality of search results retrieved using the search query, wherein each search result is associated with one of a plurality of content ratings classes; and calculating the content rating score that is a proportion of search results associated with at least one of the content ratings classes among the first plurality of search results; in response to determining that the content rating score is below the predetermined threshold, identifying a second plurality of search results to be presented based on the search query; and causing the second plurality of search results to be presented. | 1. A method for presenting search results, comprising:
receiving text corresponding to a search query entered on a user device; determining whether a content rating score associated with the search query is below a predetermined threshold value, wherein the score is calculated by:
identifying a first plurality of search results retrieved using the search query, wherein each search result in the first plurality of search results is associated with one of a plurality of content ratings classes; and
calculating the content rating score that is a proportion of search results associated with at least one of the content ratings classes among the first plurality of search results;
in response to determining that the content rating score is below the predetermined threshold value, identifying a second plurality of search results to be presented based on the search query; and causing the second plurality of search results to be presented on the user device. 2. The method of claim 1, further comprising applying a weight to each of the first plurality of search results, wherein the proportion of search results associated with at least one of the content ratings classes is calculated using the weight. 3. The method of claim 2, wherein the weight is determined based on a relevance of the associated search result to the search query. 4. The method of claim 1, wherein the second plurality of search results is identified based on the content rating class associated with each of the first plurality of search results. 5. The method of claim 1, wherein the second plurality of search results is a subset of the first plurality of search results. 6. The method of claim 1, wherein the plurality of content ratings classes correspond to a first content rating class designated for content that is suitable for all ages and a second content rating class designated for content that is suitable for adults. 7. The method of claim 1, wherein the predetermined threshold value includes a first threshold value and a second threshold value and wherein the method further comprises:
in response to determining that the content rating score is above the predetermined threshold value, determining a first relevance of search results associated with a first of the plurality of content ratings classes to the search query and a second relevance of search results associated with a second of the plurality of content ratings classes to the search query; determining whether the first relevance is similar to or larger than the second relevance; in response to determining that the first relevance is similar to or larger than the second relevance, identifying a third plurality of search results to be presented; and causing the third plurality of search results to be presented on the user device. 8. The method of claim 1, further comprising:
in response to determining that the content rating score is above a second predetermined threshold value, inhibiting presentation of search results based on the search query; and causing an indication that presentation of the search results has been inhibited to be presented. 9. The method of claim 1, wherein the received text corresponding to the search query is received from a human annotator and wherein the method further comprises:
determining whether the search query would cause one or more search results to be presented; in response to the determining that the search query causes the second plurality of search results to be presented, causing an indication of the determination to the human annotator to be presented along with a request to modify the search query; receiving additional search queries from the human annotator and determining whether each of the additional search queries would cause one or more search results to be presented; and determining whether the content rating score should be adjusted in response to the search query, the additional search queries, and the one or more search results responsive to the search query and the additional search queries. 10. A system for presenting search results, the system comprising:
a hardware processor that is programmed to:
receive text corresponding to a search query entered on a user device;
determine whether a content rating score associated with the search query is below a predetermined threshold value, wherein the hardware processor is further programmed to:
identify a first plurality of search results retrieved using the search query, wherein each search result in the first plurality of search results is associated with one of a plurality of content ratings classes; and
calculate the content rating score that is a proportion of search results associated with at least one of the content ratings classes among the first plurality of search results;
in response to determining that the content rating score is below the predetermined threshold value, identify a second plurality of search results to be presented based on the search query; and
cause the second plurality of search results to be presented on the user device. 11. The system of claim 10, wherein the hardware processor is further programmed to apply a weight to each of the first plurality of search results, wherein the proportion of search results associated with at least one of the content ratings classes is calculated using the weight. 12. The system of claim 11, wherein the weight is determined based on a relevance of the associated search result to the search query. 13. The system of claim 10, wherein the second plurality of search results is identified based on the content rating class associated with each of the first plurality of search results. 14. The system of claim 10, wherein the second plurality of search results is a subset of the first plurality of search results. 15. The system of claim 10, wherein the plurality of content ratings classes correspond to a first content rating class designated for content that is suitable for all ages and a second content rating class designated for content that is suitable for adults. 16. The system of claim 10, wherein the predetermined threshold value includes a first threshold value and a second threshold value and wherein the hardware processor is further programmed to:
in response to determining that the content rating score is above the predetermined threshold value, determine a first relevance of search results associated with a first of the plurality of content ratings classes to the search query and a second relevance of search results associated with a second of the plurality of content ratings classes to the search query; determine whether the first relevance is similar to or larger than the second relevance; in response to determining that the first relevance is similar to or larger than the second relevance, identify a third plurality of search results to be presented; and cause the third plurality of search results to be presented on the user device. 17. The system of claim 10, wherein the hardware processor is further programmed to:
in response to determining that the content rating score is above a second predetermined threshold value, inhibit presentation of search results based on the search query; and cause an indication that presentation of the search results has been inhibited to be presented. 18. The system of claim 10, wherein the received text corresponding to the search query is received from a human annotator and wherein the hardware processor is further programmed to:
determine whether the search query would cause one or more search results to be presented; in response to the determining that the search query causes the second plurality of search results to be presented, cause an indication of the determination to the human annotator to be presented along with a request to modify the search query; receive additional search queries from the human annotator and determine whether each of the additional search queries would cause one or more search results to be presented; and determine whether the content rating score should be adjusted in response to the search query, the additional search queries, and the one or more search results responsive to the search query and the additional search queries. 19. A non-transitory computer-readable medium containing computer executable instructions that, when executed by a processor, cause the processor to perform a method for presenting search results, the method comprising:
receiving text corresponding to a search query entered on a user device; determining whether a content rating score associated with the search query is below a predetermined threshold value, wherein the score is calculated by:
identifying a first plurality of search results retrieved using the search query, wherein each search result in the first plurality of search results is associated with one of a plurality of content ratings classes; and
calculating the content rating score that is a proportion of search results associated with at least one of the content ratings classes among the first plurality of search results;
in response to determining that the content rating score is below the predetermined threshold value, identifying a second plurality of search results to be presented based on the search query; and causing the second plurality of search results to be presented on the user device. 20. The non-transitory computer-readable medium of claim 19, wherein the method further comprises applying a weight to each of the first plurality of search results, wherein the proportion of search results associated with at least one of the content ratings classes is calculated using the weight. 21. The non-transitory computer-readable medium of claim 20, wherein the weight is determined based on a relevance of the associated search result to the search query. 22. The non-transitory computer-readable medium of claim 19, wherein the second plurality of search results is identified based on the content rating class associated with each of the first plurality of search results. 23. The non-transitory computer-readable medium of claim 19, wherein the second plurality of search results is a subset of the first plurality of search results. 24. The non-transitory computer-readable medium of claim 19, wherein the plurality of content ratings classes correspond to a first content rating class designated for content that is suitable for all ages and a second content rating class designated for content that is suitable for adults. 25. The non-transitory computer-readable medium of claim 19, wherein the predetermined threshold value includes a first threshold value and a second threshold value and wherein the method further comprises:
in response to determining that the content rating score is above the predetermined threshold value, determining a first relevance of search results associated with a first of the plurality of content ratings classes to the search query and a second relevance of search results associated with a second of the plurality of content ratings classes to the search query; determining whether the first relevance is similar to or larger than the second relevance; in response to determining that the first relevance is similar to or larger than the second relevance, identifying a third plurality of search results to be presented; and causing the third plurality of search results to be presented on the user device. 26. The non-transitory computer-readable medium of claim 19, wherein the method further comprises:
in response to determining that the content rating score is above a second predetermined threshold value, inhibiting presentation of search results based on the search query; and causing an indication that presentation of the search results has been inhibited to be presented. 27. The non-transitory computer-readable medium of claim 19, wherein the received text corresponding to the search query is received from a human annotator and wherein the method further comprises:
determining whether the search query would cause one or more search results to be presented; in response to the determining that the search query causes the second plurality of search results to be presented, causing an indication of the determination to the human annotator to be presented along with a request to modify the search query; receiving additional search queries from the human annotator and determining whether each of the additional search queries would cause one or more search results to be presented; and determining whether the content rating score should be adjusted in response to the search query, the additional search queries, and the one or more search results responsive to the search query and the additional search queries. | Methods, systems, and media for presenting search results are provided. In accordance with some embodiments, the method comprises: receiving text corresponding to a search query; determining whether a content rating score associated with the search query is below a predetermined threshold, wherein the score is calculated by: identifying a first plurality of search results retrieved using the search query, wherein each search result is associated with one of a plurality of content ratings classes; and calculating the content rating score that is a proportion of search results associated with at least one of the content ratings classes among the first plurality of search results; in response to determining that the content rating score is below the predetermined threshold, identifying a second plurality of search results to be presented based on the search query; and causing the second plurality of search results to be presented.1. A method for presenting search results, comprising:
receiving text corresponding to a search query entered on a user device; determining whether a content rating score associated with the search query is below a predetermined threshold value, wherein the score is calculated by:
identifying a first plurality of search results retrieved using the search query, wherein each search result in the first plurality of search results is associated with one of a plurality of content ratings classes; and
calculating the content rating score that is a proportion of search results associated with at least one of the content ratings classes among the first plurality of search results;
in response to determining that the content rating score is below the predetermined threshold value, identifying a second plurality of search results to be presented based on the search query; and causing the second plurality of search results to be presented on the user device. 2. The method of claim 1, further comprising applying a weight to each of the first plurality of search results, wherein the proportion of search results associated with at least one of the content ratings classes is calculated using the weight. 3. The method of claim 2, wherein the weight is determined based on a relevance of the associated search result to the search query. 4. The method of claim 1, wherein the second plurality of search results is identified based on the content rating class associated with each of the first plurality of search results. 5. The method of claim 1, wherein the second plurality of search results is a subset of the first plurality of search results. 6. The method of claim 1, wherein the plurality of content ratings classes correspond to a first content rating class designated for content that is suitable for all ages and a second content rating class designated for content that is suitable for adults. 7. The method of claim 1, wherein the predetermined threshold value includes a first threshold value and a second threshold value and wherein the method further comprises:
in response to determining that the content rating score is above the predetermined threshold value, determining a first relevance of search results associated with a first of the plurality of content ratings classes to the search query and a second relevance of search results associated with a second of the plurality of content ratings classes to the search query; determining whether the first relevance is similar to or larger than the second relevance; in response to determining that the first relevance is similar to or larger than the second relevance, identifying a third plurality of search results to be presented; and causing the third plurality of search results to be presented on the user device. 8. The method of claim 1, further comprising:
in response to determining that the content rating score is above a second predetermined threshold value, inhibiting presentation of search results based on the search query; and causing an indication that presentation of the search results has been inhibited to be presented. 9. The method of claim 1, wherein the received text corresponding to the search query is received from a human annotator and wherein the method further comprises:
determining whether the search query would cause one or more search results to be presented; in response to the determining that the search query causes the second plurality of search results to be presented, causing an indication of the determination to the human annotator to be presented along with a request to modify the search query; receiving additional search queries from the human annotator and determining whether each of the additional search queries would cause one or more search results to be presented; and determining whether the content rating score should be adjusted in response to the search query, the additional search queries, and the one or more search results responsive to the search query and the additional search queries. 10. A system for presenting search results, the system comprising:
a hardware processor that is programmed to:
receive text corresponding to a search query entered on a user device;
determine whether a content rating score associated with the search query is below a predetermined threshold value, wherein the hardware processor is further programmed to:
identify a first plurality of search results retrieved using the search query, wherein each search result in the first plurality of search results is associated with one of a plurality of content ratings classes; and
calculate the content rating score that is a proportion of search results associated with at least one of the content ratings classes among the first plurality of search results;
in response to determining that the content rating score is below the predetermined threshold value, identify a second plurality of search results to be presented based on the search query; and
cause the second plurality of search results to be presented on the user device. 11. The system of claim 10, wherein the hardware processor is further programmed to apply a weight to each of the first plurality of search results, wherein the proportion of search results associated with at least one of the content ratings classes is calculated using the weight. 12. The system of claim 11, wherein the weight is determined based on a relevance of the associated search result to the search query. 13. The system of claim 10, wherein the second plurality of search results is identified based on the content rating class associated with each of the first plurality of search results. 14. The system of claim 10, wherein the second plurality of search results is a subset of the first plurality of search results. 15. The system of claim 10, wherein the plurality of content ratings classes correspond to a first content rating class designated for content that is suitable for all ages and a second content rating class designated for content that is suitable for adults. 16. The system of claim 10, wherein the predetermined threshold value includes a first threshold value and a second threshold value and wherein the hardware processor is further programmed to:
in response to determining that the content rating score is above the predetermined threshold value, determine a first relevance of search results associated with a first of the plurality of content ratings classes to the search query and a second relevance of search results associated with a second of the plurality of content ratings classes to the search query; determine whether the first relevance is similar to or larger than the second relevance; in response to determining that the first relevance is similar to or larger than the second relevance, identify a third plurality of search results to be presented; and cause the third plurality of search results to be presented on the user device. 17. The system of claim 10, wherein the hardware processor is further programmed to:
in response to determining that the content rating score is above a second predetermined threshold value, inhibit presentation of search results based on the search query; and cause an indication that presentation of the search results has been inhibited to be presented. 18. The system of claim 10, wherein the received text corresponding to the search query is received from a human annotator and wherein the hardware processor is further programmed to:
determine whether the search query would cause one or more search results to be presented; in response to determining that the search query causes the second plurality of search results to be presented, cause an indication of the determination to the human annotator to be presented along with a request to modify the search query; receive additional search queries from the human annotator and determine whether each of the additional search queries would cause one or more search results to be presented; and determine whether the content rating score should be adjusted in response to the search query, the additional search queries, and the one or more search results responsive to the search query and the additional search queries. 19. A non-transitory computer-readable medium containing computer executable instructions that, when executed by a processor, cause the processor to perform a method for presenting search results, the method comprising:
receiving text corresponding to a search query entered on a user device; determining whether a content rating score associated with the search query is below a predetermined threshold value, wherein the score is calculated by:
identifying a first plurality of search results retrieved using the search query, wherein each search result in the first plurality of search results is associated with one of a plurality of content ratings classes; and
calculating the content rating score that is a proportion of search results associated with at least one of the content ratings classes among the first plurality of search results;
in response to determining that the content rating score is below the predetermined threshold value, identifying a second plurality of search results to be presented based on the search query; and causing the second plurality of search results to be presented on the user device. 20. The non-transitory computer-readable medium of claim 19, wherein the method further comprises applying a weight to each of the first plurality of search results, wherein the proportion of search results associated with at least one of the content ratings classes is calculated using the weight. 21. The non-transitory computer-readable medium of claim 20, wherein the weight is determined based on a relevance of the associated search result to the search query. 22. The non-transitory computer-readable medium of claim 19, wherein the second plurality of search results is identified based on the content rating class associated with each of the first plurality of search results. 23. The non-transitory computer-readable medium of claim 19, wherein the second plurality of search results is a subset of the first plurality of search results. 24. The non-transitory computer-readable medium of claim 19, wherein the plurality of content ratings classes correspond to a first content rating class designated for content that is suitable for all ages and a second content rating class designated for content that is suitable for adults. 25. The non-transitory computer-readable medium of claim 19, wherein the predetermined threshold value includes a first threshold value and a second threshold value and wherein the method further comprises:
in response to determining that the content rating score is above the predetermined threshold value, determining a first relevance of search results associated with a first of the plurality of content ratings classes to the search query and a second relevance of search results associated with a second of the plurality of content ratings classes to the search query; determining whether the first relevance is similar to or larger than the second relevance; in response to determining that the first relevance is similar to or larger than the second relevance, identifying a third plurality of search results to be presented; and causing the third plurality of search results to be presented on the user device. 26. The non-transitory computer-readable medium of claim 19, wherein the method further comprises:
in response to determining that the content rating score is above a second predetermined threshold value, inhibiting presentation of search results based on the search query; and causing an indication that presentation of the search results has been inhibited to be presented. 27. The non-transitory computer-readable medium of claim 19, wherein the received text corresponding to the search query is received from a human annotator and wherein the method further comprises:
determining whether the search query would cause one or more search results to be presented; in response to the determining that the search query causes the second plurality of search results to be presented, causing an indication of the determination to the human annotator to be presented along with a request to modify the search query; receiving additional search queries from the human annotator and determining whether each of the additional search queries would cause one or more search results to be presented; and determining whether the content rating score should be adjusted in response to the search query, the additional search queries, and the one or more search results responsive to the search query and the additional search queries. | 2,100 |
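The claims above repeatedly recite a computable rule: the content rating score is the (optionally relevance-weighted, per claims 2-3 and 20-21) proportion of retrieved results belonging to a flagged content ratings class, and presentation is gated on that score against a threshold (claims 1, 8, 19, 26). The sketch below illustrates that rule only; all names (`SearchResult`, `content_rating_score`, `filter_results`) and the exact filtering strategy are hypothetical assumptions, not the patent's implementation.

```python
# Illustrative sketch of the content-rating-score gating described in the
# claims. Hypothetical names; not the patented implementation.
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    rating_class: str   # e.g. "all_ages" or "adult" (claims 6/24)
    relevance: float    # per-result weight (claims 2-3 / 20-21)

def content_rating_score(results, flagged_classes):
    """Relevance-weighted proportion of results in a flagged ratings class."""
    total = sum(r.relevance for r in results)
    if total == 0:
        return 0.0
    flagged = sum(r.relevance for r in results
                  if r.rating_class in flagged_classes)
    return flagged / total

def filter_results(results, flagged_classes, threshold):
    """Below the threshold, present a subset of the retrieved results
    (claims 1/23); otherwise inhibit presentation (claims 8/26)."""
    score = content_rating_score(results, flagged_classes)
    if score < threshold:
        return [r for r in results if r.rating_class not in flagged_classes]
    return []

results = [
    SearchResult("a", "all_ages", 1.0),
    SearchResult("b", "adult", 1.0),
    SearchResult("c", "all_ages", 2.0),
]
print(content_rating_score(results, {"adult"}))          # 0.25
print(len(filter_results(results, {"adult"}, 0.5)))      # 2
```

With uniform weights the score reduces to a simple fraction of flagged results; the relevance weights let a highly relevant flagged result count more heavily toward the score, which is the effect claims 3 and 21 describe.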
5,904 | 5,904 | 15,173,349 | 2,164 | Systems, methods, and computer-readable media for providing entity relation extraction across sentences in a document using distant supervision. In some examples, a computing device can receive an input, such as a document comprising a plurality of sentences. The computing device can identify syntactic and/or semantic links between words in a sentence and/or between words in different sentences, and extract relationships between entities throughout the document. Techniques and technologies described herein populate a knowledge base (e.g., a table, chart, database etc.) of entity relations based on the extracted relationships. An output of the populated knowledge base can be used by a classifier to identify additional relationships between entities in various documents. Example techniques described herein can apply machine learning to train the classifier to predict relations between entities. The classifier can be trained using known entity relations, syntactic links and/or semantic links. | 1. A system comprising:
one or more processors; a computer-readable media including instructions for a relation extraction framework, that, when executed by the one or more processors, cause the relation extraction framework to perform operations comprising:
processing at least two sentences of a document;
determining an inter-sentential path between a first entity in a first sentence of the at least two sentences and a second entity in a second sentence of the at least two sentences;
applying a classifier to the inter-sentential path; and
identifying, by the classifier, a relation between the first entity and the second entity. 2. A system as claim 1 recites, the operations further comprising:
receiving a training document comprising at least two related entities; and
training one or more parameters of the classifier to identify a relation between the at least two related entities. 3. A system as claim 2 recites, the operations further comprising:
identifying a path between the at least two related entities;
training the one or more parameters to identify the path linking the at least two related entities;
receiving a second training document comprising the path; and
identifying a relation between a first related entity at the beginning of the path and a second related entity at the end of the path. 4. A system as claim 1 recites, wherein the determining the inter-sentential path between the first entity and the second entity further comprises:
identifying a plurality of nodes in the document;
identifying a plurality of edges in the document, wherein an edge comprises a dependency link between two nodes of the plurality of nodes; and
combining two or more edges between the first entity and the second entity. 5. A system as claim 4 recites, wherein the dependency link comprises one or more of:
a discourse link;
an adjacency link; or
a co-reference link. 6. A system as claim 1 recites, wherein the first sentence and the second sentence are adjacent sentences in the document. 7. A system as claim 1 recites, wherein the classifier identifies the relation between the first entity and the second entity based at least in part on one or more links of the inter-sentential path. 8. A system as claim 1 recites, wherein the classifier references a knowledge base to identify the relation between the first entity and the second entity. 9. A computer-implemented method, comprising:
receiving a document comprising two or more sentences; identifying a first entity in a first sentence and a second entity in a second sentence; identifying one or more links between the first entity and the second entity; referencing a classifier shared between the first entity and the second entity; and identifying a relationship between the first entity and the second entity based at least in part on the classifier. 10. A method as claim 9 recites, further comprising storing the relationship between the first entity and the second entity in a knowledge base. 11. A method as claim 9 recites, wherein the identifying the one or more links comprises:
identifying a plurality of nodes in the document; and
identifying a plurality of edges in the document, wherein an edge comprises a dependency link between two nodes of the plurality of nodes. 12. A method as claim 11 recites, wherein the dependency link comprises an inter-sentential link or an intra-sentential link. 13. A method as claim 11 recites, wherein the dependency link comprises at least one of:
a discourse link;
an adjacency link; or
a co-reference link. 14. A method as claim 9 recites, further comprising:
receiving a training document comprising at least two related entities; and
training one or more parameters of the classifier to identify the relationship between the at least two related entities. 15. A method as claim 14 recites, further comprising:
identifying a path between the at least two related entities;
training the one or more parameters to identify the path linking the at least two related entities;
receiving a second training document comprising the path; and
identifying a relation between a first related entity at the beginning of the path and a second related entity at the end of the path. 16. A computer-readable medium having thereon computer-executable instructions, the computer-executable instructions responsive to execution configuring a device to perform operations comprising:
receiving a document comprising two or more sentences; identifying a first entity in a first sentence and a second entity in a second sentence; identifying one or more links between the first entity and the second entity; referencing a classifier shared between the first entity and the second entity; and identifying a relationship between the first entity and the second entity based at least in part on the classifier. 17. A computer-readable medium as claim 16 recites, further comprising storing the relationship between the first entity and the second entity in a knowledge base. 18. A computer-readable medium as claim 16 recites, wherein the identifying the one or more links comprises:
identifying a plurality of nodes in the document; and
identifying a plurality of edges in the document, wherein an edge comprises a dependency link between two nodes of the plurality of nodes. 19. A computer-readable medium as claim 16 recites, further comprising:
receiving a training document comprising at least two related entities; and
training one or more parameters of the classifier to identify the relationship between the at least two related entities. 20. A computer-readable medium as claim 19 recites, further comprising:
identifying a path between the at least two related entities;
training the one or more parameters to identify the path linking the at least two related entities;
receiving a second training document comprising the path; and
identifying a relation between a first related entity at the beginning of the path and a second related entity at the end of the path. | Systems, methods, and computer-readable media for providing entity relation extraction across sentences in a document using distant supervision. In some examples, a computing device can receive an input, such as a document comprising a plurality of sentences. The computing device can identify syntactic and/or semantic links between words in a sentence and/or between words in different sentences, and extract relationships between entities throughout the document. Techniques and technologies described herein populate a knowledge base (e.g., a table, chart, database etc.) of entity relations based on the extracted relationships. An output of the populated knowledge base can be used by a classifier to identify additional relationships between entities in various documents. Example techniques described herein can apply machine learning to train the classifier to predict relations between entities. The classifier can be trained using known entity relations, syntactic links and/or semantic links.1. A system comprising:
one or more processors; a computer-readable media including instructions for a relation extraction framework, that, when executed by the one or more processors, cause the relation extraction framework to perform operations comprising:
processing at least two sentences of a document;
determining an inter-sentential path between a first entity in a first sentence of the at least two sentences and a second entity in a second sentence of the at least two sentences;
applying a classifier to the inter-sentential path; and
identifying, by the classifier, a relation between the first entity and the second entity. 2. A system as claim 1 recites, the operations further comprising:
receiving a training document comprising at least two related entities; and
training one or more parameters of the classifier to identify a relation between the at least two related entities. 3. A system as claim 2 recites, the operations further comprising:
identifying a path between the at least two related entities;
training the one or more parameters to identify the path linking the at least two related entities;
receiving a second training document comprising the path; and
identifying a relation between a first related entity at the beginning of the path and a second related entity at the end of the path. 4. A system as claim 1 recites, wherein the determining the inter-sentential path between the first entity and the second entity further comprises:
identifying a plurality of nodes in the document;
identifying a plurality of edges in the document, wherein an edge comprises a dependency link between two nodes of the plurality of nodes; and
combining two or more edges between the first entity and the second entity. 5. A system as claim 4 recites, wherein the dependency link comprises one or more of:
a discourse link;
an adjacency link; or
a co-reference link. 6. A system as claim 1 recites, wherein the first sentence and the second sentence are adjacent sentences in the document. 7. A system as claim 1 recites, wherein the classifier identifies the relation between the first entity and the second entity based at least in part on one or more links of the inter-sentential path. 8. A system as claim 1 recites, wherein the classifier references a knowledge base to identify the relation between the first entity and the second entity. 9. A computer-implemented method, comprising:
receiving a document comprising two or more sentences; identifying a first entity in a first sentence and a second entity in a second sentence; identifying one or more links between the first entity and the second entity; referencing a classifier shared between the first entity and the second entity; and identifying a relationship between the first entity and the second entity based at least in part on the classifier. 10. A method as claim 9 recites, further comprising storing the relationship between the first entity and the second entity in a knowledge base. 11. A method as claim 9 recites, wherein the identifying the one or more links comprises:
identifying a plurality of nodes in the document; and
identifying a plurality of edges in the document, wherein an edge comprises a dependency link between two nodes of the plurality of nodes. 12. A method as claim 11 recites, wherein the dependency link comprises an inter-sentential link or an intra-sentential link. 13. A method as claim 11 recites, wherein the dependency link comprises at least one of:
a discourse link;
an adjacency link; or
a co-reference link. 14. A method as claim 9 recites, further comprising:
receiving a training document comprising at least two related entities; and
training one or more parameters of the classifier to identify the relationship between the at least two related entities. 15. A method as claim 14 recites, further comprising:
identifying a path between the at least two related entities;
training the one or more parameters to identify the path linking the at least two related entities;
receiving a second training document comprising the path; and
identifying a relation between a first related entity at the beginning of the path and a second related entity at the end of the path. 16. A computer-readable medium having thereon computer-executable instructions, the computer-executable instructions responsive to execution configuring a device to perform operations comprising:
receiving a document comprising two or more sentences; identifying a first entity in a first sentence and a second entity in a second sentence; identifying one or more links between the first entity and the second entity; referencing a classifier shared between the first entity and the second entity; and identifying a relationship between the first entity and the second entity based at least in part on the classifier. 17. A computer-readable medium as claim 16 recites, further comprising storing the relationship between the first entity and the second entity in a knowledge base. 18. A computer-readable medium as claim 16 recites, wherein the identifying the one or more links comprises:
identifying a plurality of nodes in the document; and
identifying a plurality of edges in the document, wherein an edge comprises a dependency link between two nodes of the plurality of nodes. 19. A computer-readable medium as claim 16 recites, further comprising:
receiving a training document comprising at least two related entities; and
training one or more parameters of the classifier to identify the relationship between the at least two related entities. 20. A computer-readable medium as claim 19 recites, further comprising:
identifying a path between the at least two related entities;
training the one or more parameters to identify the path linking the at least two related entities;
receiving a second training document comprising the path; and
identifying a relation between a first related entity at the beginning of the path and a second related entity at the end of the path. | 2,100 |
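Claims 4-5 and 11-13 above model the document as nodes connected by labeled dependency links (including discourse, adjacency, and co-reference links), with an inter-sentential path formed by combining edges between two entities. The sketch below shows one way such a path could be recovered; the breadth-first search, the toy edge labels, and the `find_path` name are illustrative assumptions, not the patent's method.

```python
# Hypothetical sketch: recover the sequence of link labels joining two
# entities across sentences by combining labeled edges (claims 4/11).
from collections import deque

def find_path(edges, start, end):
    """BFS over an undirected labeled graph; returns the list of link
    labels on a shortest path from start to end, or None."""
    graph = {}
    for a, b, label in edges:
        graph.setdefault(a, []).append((b, label))
        graph.setdefault(b, []).append((a, label))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == end:
            return path
        for nxt, label in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [label]))
    return None

# Toy document: "Pat founded Acme. The company is based in Berlin."
# The co-reference edge (Acme ~ "company") is the cross-sentence link.
edges = [
    ("Pat", "founded", "dependency"),
    ("founded", "Acme", "dependency"),
    ("Acme", "company", "co-reference"),
    ("company", "based", "dependency"),
    ("based", "Berlin", "dependency"),
]
print(find_path(edges, "Pat", "Berlin"))
# ['dependency', 'dependency', 'co-reference', 'dependency', 'dependency']
```

A classifier as recited in claims 7 and 9 could then take the recovered label sequence as a feature of the candidate entity pair, which is why the path's link types, rather than only the endpoints, matter.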
5,905 | 5,905 | 14,636,869 | 2,136 | Methods, systems, and computer program product embodiments for storing data in a virtual data storage environment, by a processor device, are provided. In a virtualized tape storage environment, a plurality of partitions are created on a single node, each partition having unique attributes allowing for specific data management, and a logical volume is replicated across the plurality of partitions, such that the logical volume is redundantly stored in at least one of the plurality of partitions. | 1. A method for storing data in a virtual data storage environment, by a processor device, the method comprising:
in a virtualized tape storage environment, creating a plurality of partitions on a single node, each partition having unique attributes allowing for specific data management, and replicating a logical volume across the plurality of partitions, wherein the logical volume is redundantly stored in at least one of the plurality of partitions. 2. The method of claim 1, further including redundantly storing a plurality of copies of the logical volume in the plurality of partitions in cache. 3. The method of claim 2, further including redundantly storing a plurality of copies of the logical volume in the plurality of partitions on physical media. 4. The method of claim 1, further including using an inter-partition copies (IPC) function to replicate and redundantly store multiple copies of the logical volume, wherein IPC is an aspect of a storage construct policy managed by a data virtualization engine. 5. The method of claim 4, further including applying the storage construct policy as a part of logical volume close processing, wherein attributes governing the replication and retention of the logical volume and its copies are written during close processing. 6. The method of claim 5, further including maintaining, by the logical volume, a database containing a record of each of the plurality of partitions the logical volume resides in. 7. The method of claim 6, wherein the record is a bit mask indicating all partitions to which the logical volume belongs. 8. The method of claim 4, further including mounting, by the IPC function, a logical volume for which access is requested by a host system, wherein the IPC function mounts the logical volume according to which partition it resides in. 9. A system for storing data in a virtual data storage environment, the system comprising:
a storage server operating in a virtualized tape storage environment, and a processor device, controlling the storage server, wherein the processor device:
creates a plurality of partitions on a single node, each partition having unique attributes allowing for specific data management, and
replicates a logical volume across the plurality of partitions, wherein the logical volume is redundantly stored in at least one of the plurality of partitions. 10. The system of claim 9, wherein the processor device redundantly stores a plurality of copies of the logical volume in the plurality of partitions in cache. 11. The system of claim 10, wherein the processor device redundantly stores a plurality of copies of the logical volume in the plurality of partitions on physical media. 12. The system of claim 9, wherein the processor device uses an inter-partition copies (IPC) function to replicate and redundantly store multiple copies of the logical volume, wherein IPC is an aspect of a storage construct policy managed by a data virtualization engine. 13. The system of claim 12, wherein the processor device applies the storage construct policy as a part of logical volume close processing, wherein attributes governing the replication and retention of the logical volume and its copies are written during close processing. 14. The system of claim 13, wherein the processor device instructs the logical volume to maintain a database containing a record of each of the plurality of partitions the logical volume resides in. 15. The system of claim 14, wherein the record is a bit mask indicating all partitions to which the logical volume belongs. 16. The system of claim 12, wherein the processor device instructs the IPC function to mount a logical volume requested access by a host system, wherein the IPC function mounts the logical volume according to which partition it resides in. 17. A computer program product for storing data in a virtual data storage environment by a processor device, the computer program product comprising a non-transitory computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions comprising:
a first executable portion that, in a virtualized tape storage environment, creates a plurality of partitions on a single node, each partition having unique attributes allowing for specific data management, and a second executable portion that replicates a logical volume across the plurality of partitions, wherein the logical volume is redundantly stored in at least one of the plurality of partitions. 18. The computer program product of claim 17, further including a third executable portion that redundantly stores a plurality of copies of the logical volume in the plurality of partitions in cache. 19. The computer program product of claim 18, further including a fourth executable portion that redundantly stores a plurality of copies of the logical volume in the plurality of partitions on physical media. 20. The computer program product of claim 17, further including a third executable portion that uses an inter-partition copies (IPC) function to replicate and redundantly store multiple copies of the logical volume, wherein IPC is an aspect of a storage construct policy managed by a data virtualization engine. 21. The computer program product of claim 20, further including a fourth executable portion that applies the storage construct policy as a part of logical volume close processing, wherein attributes governing the replication and retention of the logical volume and its copies are written during close processing. 22. The computer program product of claim 21, further including a fifth executable portion that maintains, by the logical volume, a database containing a record of each of the plurality of partitions the logical volume resides in. 23. The computer program product of claim 22, wherein the record is a bit mask indicating all partitions to which the logical volume belongs. 24.
The computer program product of claim 20, further including a fourth executable portion that mounting, by the IPC function a logical volume requested access by a host system, wherein the IPC function mounts the logical volume according to which partition it resides in. | 2,100 |
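Claims 7, 15, and 23 of the row above describe the partition-membership record as a bit mask: bit i set means the logical volume holds a redundant copy in partition i. A minimal sketch of such a record follows; the class and method names are illustrative assumptions, not the product's actual interface.

```python
# Illustrative bit-mask membership record for a replicated logical volume.
class LogicalVolume:
    def __init__(self, name):
        self.name = name
        self.partition_mask = 0  # the claimed bit-mask record

    def replicate_to(self, partition):
        """Record an inter-partition copy (the IPC replication step)."""
        self.partition_mask |= 1 << partition

    def remove_from(self, partition):
        """Clear the bit when a copy is deleted from a partition."""
        self.partition_mask &= ~(1 << partition)

    def resides_in(self, partition):
        return bool(self.partition_mask & (1 << partition))

    def partitions(self):
        """All partitions to which the logical volume belongs."""
        return [i for i in range(self.partition_mask.bit_length())
                if self.resides_in(i)]

vol = LogicalVolume("VOL001")
for p in (0, 2, 3):          # replicate across three partitions on one node
    vol.replicate_to(p)
print(vol.partitions())       # [0, 2, 3]
print(vol.resides_in(1))      # False
```

A mount request (claims 8 and 16) would consult `partitions()` to pick a partition holding a copy; the bit mask keeps that lookup a single integer operation.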
5,906 | 5,906 | 16,116,112 | 2,142 | A list of notification items is received, the list including a plurality of notification items, wherein each respective one of the plurality of notification items is associated with a respective urgency value. An information item is detected. In some implementations, the information item is a communication (e.g., an email). In some implementations, the information item is a change in context of a user. Upon determining that the information item is relevant to the urgency value of the first notification item, the urgency value of the first notification item is adjusted. Upon determining that the adjusted urgency value satisfies the predetermined threshold, a first audio prompt is provided to a user. | 1. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the device to:
generate a speech output to be provided to a user of the device; engage in a communication session with a remote device; while the device is engaged in the communication session with the remote device:
determine an urgency value of the speech output;
determine whether the urgency value of the speech output satisfies a predetermined threshold;
upon determining that the urgency value of the speech output satisfies the predetermined threshold, provide the speech output to the user of the device; and
upon determining that the urgency value of the speech output does not satisfy the predetermined threshold, forgo providing the speech output to the user of the device. 2. The non-transitory computer readable storage medium of claim 1, wherein the urgency value of the speech output is based on a user-configurable criterion associated with the speech output. 3. The non-transitory computer readable storage medium of claim 1, wherein the urgency value of the speech output is based on a context of the speech output. 4. The non-transitory computer readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to:
determine whether a mode of operation of the electronic device satisfies a predetermined mode of operation; and in accordance with a determination that the mode of operation of the electronic device satisfies the predetermined mode of operation, provide the speech output to the user of the device. 5. The non-transitory computer readable storage medium of claim 4, wherein the predetermined mode of operation is based on a user setting of the device. 6. The non-transitory computer readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to:
upon determining that the urgency value of the speech output does not satisfy the predetermined threshold, delay providing the speech output for a predetermined time. 7. The non-transitory computer readable storage medium of claim 6, wherein the predetermined time is based on whether the device is receiving speech input from the user. 8. The non-transitory computer readable storage medium of claim 6, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to:
after delaying providing the speech output for the predetermined time:
determine whether the communication session has ended; and
in accordance with a determination that the communication session has ended, provide the speech output to the user of the device. 9. The non-transitory computer readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to:
upon determining that the urgency value of the speech output does not satisfy the predetermined threshold, provide a visual output corresponding to the speech output on a display of the electronic device. 10. The non-transitory computer readable storage medium of claim 1, wherein the communication session is a telephone conversation. 11. The non-transitory computer readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to:
receive a request from the user to perform a task; perform the task requested by the user; and wherein the speech output to be provided to the user of the device corresponds to the performance of the task. 12. The non-transitory computer readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to:
while the device is engaged in the communication session:
determine whether the device is currently receiving speech input from the user; and
in accordance with a determination that the device is not currently receiving speech input from the user, provide the speech output to the user of the device. 13. The non-transitory computer readable storage medium of claim 12, wherein determining whether the device is currently receiving speech input from the user includes:
determining whether a previous speech input from the user was received within a predetermined period of time. 14. The non-transitory computer readable storage medium of claim 12, wherein determining whether the device is currently receiving speech input from the user includes:
determining whether a strength of a characteristic of the speech input exceeds a predetermined strength threshold. 15. The non-transitory computer readable storage medium of claim 1, wherein the urgency value of the speech output is based on a position of the speech output in a list of scheduled speech outputs. 16. The non-transitory computer readable storage medium of claim 1:
wherein engaging in the communication session with the remote device includes:
audibly providing audio data received from the remote device to the user;
and wherein providing the speech output to the user of the device includes:
temporarily muting the provision of the audio data received from the remote device; and
outputting the speech output to the user of the device while the audio data received from the remote device is muted. 17. A method of operating a digital assistant, comprising:
at a device having one or more processors and memory:
generating a speech output to be provided to a user of the device;
engaging in a communication session with a remote device;
while the device is engaged in the communication session with the remote device:
determining an urgency value of the speech output;
determining whether the urgency value of the speech output satisfies a predetermined threshold;
upon determining that the urgency value of the speech output satisfies the predetermined threshold, providing the speech output to the user of the device; and
upon determining that the urgency value of the speech output does not satisfy the predetermined threshold, forgoing providing the speech output to the user of the device. 18. An electronic device, comprising:
one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
generating a speech output to be provided to a user of the device;
engaging in a communication session with a remote device;
while the device is engaged in the communication session with the remote device:
determining an urgency value of the speech output;
determining whether the urgency value of the speech output satisfies a predetermined threshold;
upon determining that the urgency value of the speech output satisfies the predetermined threshold, providing the speech output to the user of the device; and
upon determining that the speech output does not satisfy the predetermined threshold, forgo providing the speech output to the user of the device. | 2,100 |
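The row above claims urgency-gated speech output: during a communication session the assistant speaks only outputs whose urgency value satisfies a threshold, defers lower-urgency outputs until the session ends (claims 6-8), and can show a visual fallback instead (claim 9). The sketch below illustrates that gating; the threshold value, the 0-1 urgency scale, and every name are assumptions for illustration, not the patent's API.

```python
# Illustrative urgency gate for a digital assistant's speech outputs.
URGENCY_THRESHOLD = 0.7   # assumed threshold on an assumed 0-1 scale
deferred = []             # outputs postponed until the session ends

def handle(output, urgency, in_call):
    """Decide what to do with a pending speech output."""
    if not in_call or urgency >= URGENCY_THRESHOLD:
        return f"speak: {output}"
    deferred.append(output)       # delay providing the speech output
    return f"display: {output}"   # visual output instead of interrupting

def on_call_ended():
    """Once the communication session has ended, provide deferred outputs."""
    spoken = [f"speak: {o}" for o in deferred]
    deferred.clear()
    return spoken

print(handle("Meeting starts in 5 minutes", 0.9, in_call=True))  # speak: ...
print(handle("New email from Bob", 0.2, in_call=True))           # display: ...
print(on_call_ended())  # ['speak: New email from Bob']
```

Claim 16's refinement (temporarily muting remote audio while speaking) would wrap the `"speak"` branch; the threshold comparison itself is unchanged.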
5,907 | 5,907 | 15,039,391 | 2,119 | The present invention relates to a method for controlling a wind power plant connected to an electrical grid, the wind power plant comprising at least one wind turbine generator and a power plant controller, the power plant controller comprising a signal controller for controlling an electrical parameter with a gain (Kgs), the method comprising, measuring at least one electrical parameter in the electrical grid, determining an internal signal value at least partially based on the at least one electrical parameter, comparing the internal signal value with a saturation value, and if the internal signal value exceeds the saturation value, increasing the gain (Kgs) of the signal controller to a first gain value, in order to decrease a rise time for a slope for the electrical parameter in the electrical grid. The invention also relates to a power plant controller arranged to decrease a rise time for a voltage slope for a voltage parameter in the electrical grid. | 1. A method for controlling a wind power plant connected to an electrical grid, the wind power plant comprising at least one wind turbine generator and a power plant controller, the power plant controller comprising a signal controller for controlling an electrical parameter with a gain (Kgs), the method comprising:
measuring at least one electrical parameter in the electrical grid, determining an internal signal value at least partially based on the at least one electrical parameter, comparing the internal signal value with a saturation value, and if the internal signal value exceeds the saturation value, increasing the gain (Kgs) of the signal controller to a first gain value, in order to decrease a rise time for a slope for the electrical parameter in the electrical grid. 2. The method according to claim 1, wherein the electrical parameter is a voltage parameter and/or a current parameter. 3. The method according to claim 1, wherein the method further comprises:
determining the saturation value at least partially based on a power measurement and a power factor setpoint, and determining the internal signal value (346) with a limiting function set by a user limit and the saturation value. 4. The method according to claim 3, wherein the method further comprises:
if a difference between the internal signal value and the saturation value is larger than a first threshold value, changing the trigger state to a first state, and increasing the gain (Kgs) of the reactive power controller to a second gain value. 5. The method according to claim 1, wherein the method further comprises:
determining a temporary saturation value at least partially based on a power measurement and a power factor setpoint, determining the saturation value as the minimum of the temporary saturation value and a reactive power user limit, if a difference between the internal signal value and the saturation value is larger than a second threshold value and the saturation value is less than a first minimum threshold value, changing the trigger state to a second state, and increasing the gain (Kgs) of the reactive power controller to a third predefined gain value. 6. The method according to claim 5, wherein the method further comprises:
comparing the saturation value with a reactive power measurement in a second comparison, comparing the saturation value with a reactive power measurement in a third comparison, if a difference between the saturation value and a reactive power measurement of the second comparison is larger than a third threshold value, and a difference between the saturation value and a reactive power measurement of the third comparison is larger than a fourth threshold value, changing a trigger state to a third state, and increasing the gain (Kgs) of the reactive power controller to a fourth gain value. 7. The method according to claim 1, wherein the method further comprises:
detecting an equality of signs by comparing a sign of a first sample of the internal signal value with a sign of a second sample of the internal signal value, detecting if an absolute value of the internal signal value is greater than an absolute value of the saturation value, if the equality of signs and the absolute value of the internal signal value is greater than the absolute value of the saturation value, decreasing the gain (Kgs) of the reactive power controller to a base gain value. 8. The method according to claim 1, wherein the method further comprises:
calculating a short circuit ratio of the wind power plant at a common point of connection in the electrical grid, adjusting the gain (Kgs) according to a predefined function of the trigger state and the short circuit ratio. 9. The method according to claim 1, wherein the reactive power controller is a discrete proportional-integral controller with the gain (Kgs) and an output signal, and wherein the method further comprises:
calculating a derivative of an output signal for a first sample of the proportional-integral controller and an output signal for a second sample of the proportional-integral controller, and if the derivative of the output signal is negative and the gain (Kgs) is smaller for the second sample than the gain (Kgs) for the first sample, freezing the output signal of the proportional-integral controller. 10. The method according to claim 9, wherein the method further comprises:
comparing a sign of a first sample of the internal signal value with a sign of a second sample of the internal signal value for detecting a change in sign, and if the output for the first sample is less than the output for the second sample and no change in sign is detected, freezing the output of the proportional-integral controller for the duration of one sample. 11. The method according to claim 1,
wherein the internal signal value is an internal reactive power value. 12. (canceled) 13. A wind power plant connectable to an electrical grid, the wind power plant comprising at least one wind turbine generator and a power plant controller, the power plant controller being arranged to perform an operation for controlling an electrical parameter with a gain (Kgs), the operation comprising:
measuring at least one electrical parameter in the electrical grid, determining an internal signal value at least partially based on the at least one electrical parameter, comparing the internal signal value with a saturation value, and if the internal signal value exceeds the saturation value, increasing the gain (Kgs) of the signal controller to a first gain value, in order to decrease a rise time for a slope for the electrical parameter in the electrical grid. 14. A power plant controller for controlling a wind power plant connected to an electrical grid, the wind power plant comprising at least one wind turbine generator, the power plant controller comprising:
a reactive power controller with a gain (Kgs) arranged to control reactive power in the wind power plant, equipment for measuring at least one electrical parameter in the electrical grid, and a processor unit for determining an internal signal value at least partially based on the at least one electrical parameter, the processor unit being arranged to compare the internal signal value with a saturation value,
the processor unit being arranged to increase the gain (Kgs) of the reactive power controller to a first gain value, if the internal signal value exceeds the saturation value, in order to decrease a rise time for a voltage slope for a voltage parameter in the electrical grid. 15. At least one computer program product directly loadable into the internal memory of at least one digital computer, comprising software code portions for performing an operation for controlling an electrical parameter with a gain (Kgs) according to when said at least one product is/are run on said at least one computer, the operation comprising:
measuring at least one electrical parameter in the electrical grid, determining an internal signal value at least partially based on the at least one electrical parameter, comparing the internal signal value with a saturation value, and when the internal signal value exceeds the saturation value, increasing the gain (Kgs) of the signal controller to a first gain value, in order to decrease a rise time for a slope for the electrical parameter in the electrical grid. 16. The computer program product according to claim 15, wherein the electrical parameter is a voltage parameter and/or a current parameter. 17. The computer program product according to claim 15, wherein the operation further comprises:
determining the saturation value at least partially based on a power measurement and a power factor setpoint, and determining the internal signal value with a limiting function set by a user limit and the saturation value. 18. The computer program product according to claim 17, wherein the operation further comprises:
when a difference between the internal signal value and the saturation value is larger than a first threshold value, changing the trigger state to a first state, and increasing the gain (Kgs) of the reactive power controller to a second gain value. 19. The computer program product according to claim 15, wherein the operation further comprises:
determining a temporary saturation value at least partially based on a power measurement and a power factor setpoint, determining the saturation value as the minimum of the temporary saturation value and a reactive power user limit, if a difference between the internal signal value and the saturation value is larger than a second threshold value and the saturation value is less than a first minimum threshold value, changing the trigger state to a second state, and increasing the gain (Kgs) of the reactive power controller to a third predefined gain value. 20. The computer program product according to claim 19, wherein the operation further comprises:
comparing the saturation value with a reactive power measurement in a second comparison, comparing the saturation value with a reactive power measurement in a third comparison, when a difference between the saturation value and a reactive power measurement of the second comparison is larger than a third threshold value, and a difference between the saturation value and a reactive power measurement of the third comparison is larger than a fourth threshold value, changing a trigger state to a third state, and increasing the gain (Kgs) of the reactive power controller to a fourth gain value. | The present invention relates to a method for controlling a wind power plant connected to an electrical grid, the wind power plant comprising at least one wind turbine generator and a power plant controller, the power plant controller comprising a signal controller for controlling an electrical parameter with a gain (Kgs), the method comprising, measuring at least one electrical parameter in the electrical grid, determining an internal signal value at least partially based on the at least one electrical parameter, comparing the internal signal value with a saturation value, and if the internal signal value exceeds the saturation value, increasing the gain (Kgs) of the signal controller to a first gain value, in order to decrease a rise time for a slope for the electrical parameter in the electrical grid. The invention also relates to a power plant controller arranged to decrease a rise time for a voltage slope for a voltage parameter in the electrical grid.1. A method for controlling a wind power plant connected to an electrical grid, the wind power plant comprising at least one wind turbine generator and a power plant controller, the power plant controller comprising a signal controller for controlling an electrical parameter with a gain (Kgs), the method comprising:
measuring at least one electrical parameter in the electrical grid, determining an internal signal value at least partially based on the at least one electrical parameter, comparing the internal signal value with a saturation value, and if the internal signal value exceeds the saturation value, increasing the gain (Kgs) of the signal controller to a first gain value, in order to decrease a rise time for a slope for the electrical parameter in the electrical grid. 2. The method according to claim 1, wherein the electrical parameter is a voltage parameter and/or a current parameter. 3. The method according to claim 1, wherein the method further comprises:
determining the saturation value at least partially based on a power measurement and a power factor setpoint, and determining the internal signal value (346) with a limiting function set by a user limit and the saturation value. 4. The method according to claim 3, wherein the method further comprises:
if a difference between the internal signal value and the saturation value is larger than a first threshold value, changing the trigger state to a first state, and increasing the gain (Kgs) of the reactive power controller to a second gain value. 5. The method according to claim 1, wherein the method further comprises:
determining a temporary saturation value at least partially based on a power measurement and a power factor setpoint, determining the saturation value as the minimum of the temporary saturation value and a reactive power user limit, if a difference between the internal signal value and the saturation value is larger than a second threshold value and the saturation value is less than a first minimum threshold value, changing the trigger state to a second state, and increasing the gain (Kgs) of the reactive power controller to a third predefined gain value. 6. The method according to claim 5, wherein the method further comprises:
comparing the saturation value with a reactive power measurement in a second comparison, comparing the saturation value with a reactive power measurement in a third comparison, if a difference between the saturation value and a reactive power measurement of the second comparison is larger than a third threshold value, and a difference between the saturation value and a reactive power measurement of the third comparison is larger than a fourth threshold value, changing a trigger state to a third state, and increasing the gain (Kgs) of the reactive power controller to a fourth gain value. 7. The method according to claim 1, wherein the method further comprises:
detecting an equality of signs by comparing a sign of a first sample of the internal signal value with a sign of a second sample of the internal signal value, detecting if an absolute value of the internal signal value is greater than an absolute value of the saturation value, if the equality of signs and the absolute value of the internal signal value is greater than the absolute value of the saturation value, decreasing the gain (Kgs) of the reactive power controller to a base gain value. 8. The method according to claim 1, wherein the method further comprises:
calculating a short circuit ratio of the wind power plant at a common point of connection in the electrical grid, adjusting the gain (Kgs) according to a predefined function of the trigger state and the short circuit ratio. 9. The method according to claim 1, wherein the reactive power controller is a discrete proportional-integral controller with the gain (Kgs) and an output signal, and wherein the method further comprises:
calculating a derivative of an output signal for a first sample of the proportional-integral controller and an output signal for a second sample of the proportional-integral controller, and if the derivative of the output signal is negative and the gain (Kgs) is smaller for the second sample than the gain (Kgs) for the first sample, freezing the output signal of the proportional-integral controller. 10. The method according to claim 9, wherein the method further comprises:
comparing a sign of a first sample of the internal signal value with a sign of a second sample of the internal signal value for detecting a change in sign, and if the output for the first sample is less than the output for the second sample and no change in sign is detected, freezing the output of the proportional-integral controller for the duration of one sample. 11. The method according to claim 1,
wherein the internal signal value is an internal reactive power value. 12. (canceled) 13. A wind power plant connectable to an electrical grid, the wind power plant comprising at least one wind turbine generator and a power plant controller, the power plant controller being arranged to perform an operation for controlling an electrical parameter with a gain (Kgs), the operation comprising:
measuring at least one electrical parameter in the electrical grid, determining an internal signal value at least partially based on the at least one electrical parameter, comparing the internal signal value with a saturation value, and if the internal signal value exceeds the saturation value, increasing the gain (Kgs) of the signal controller to a first gain value, in order to decrease a rise time for a slope for the electrical parameter in the electrical grid. 14. A power plant controller for controlling a wind power plant connected to an electrical grid, the wind power plant comprising at least one wind turbine generator, the power plant controller comprising:
a reactive power controller with a gain (Kgs) arranged to control reactive power in the wind power plant, equipment for measuring at least one electrical parameter in the electrical grid, and a processor unit for determining an internal signal value at least partially based on the at least one electrical parameter, the processor unit being arranged to compare the internal signal value with a saturation value,
the processor unit being arranged to increase the gain (Kgs) of the reactive power controller to a first gain value, if the internal signal value exceeds the saturation value, in order to decrease a rise time for a voltage slope for a voltage parameter in the electrical grid. 15. At least one computer program product directly loadable into the internal memory of at least one digital computer, comprising software code portions for performing an operation for controlling an electrical parameter with a gain (Kgs) according to when said at least one product is/are run on said at least one computer, the operation comprising:
measuring at least one electrical parameter in the electrical grid, determining an internal signal value at least partially based on the at least one electrical parameter, comparing the internal signal value with a saturation value, and when the internal signal value exceeds the saturation value, increasing the gain (Kgs) of the signal controller to a first gain value, in order to decrease a rise time for a slope for the electrical parameter in the electrical grid. 16. The computer program product according to claim 15, wherein the electrical parameter is a voltage parameter and/or a current parameter. 17. The computer program product according to claim 15, wherein the operation further comprises:
determining the saturation value at least partially based on a power measurement and a power factor setpoint, and determining the internal signal value with a limiting function set by a user limit and the saturation value. 18. The computer program product according to claim 17, wherein the operation further comprises:
when a difference between the internal signal value and the saturation value is larger than a first threshold value, changing the trigger state to a first state, and increasing the gain (Kgs) of the reactive power controller to a second gain value. 19. The computer program product according to claim 15, wherein the operation further comprises:
determining a temporary saturation value at least partially based on a power measurement and a power factor setpoint, determining the saturation value as the minimum of the temporary saturation value and a reactive power user limit, if a difference between the internal signal value and the saturation value is larger than a second threshold value and the saturation value is less than a first minimum threshold value, changing the trigger state to a second state, and increasing the gain (Kgs) of the reactive power controller to a third predefined gain value. 20. The computer program product according to claim 19, wherein the operation further comprises:
comparing the saturation value with a reactive power measurement in a second comparison, comparing the saturation value with a reactive power measurement in a third comparison, when a difference between the saturation value and a reactive power measurement of the second comparison is larger than a third threshold value, and a difference between the saturation value and a reactive power measurement of the third comparison is larger than a fourth threshold value, changing a trigger state to a third state, and increasing the gain (Kgs) of the reactive power controller to a fourth gain value. | 2,100 |
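Claim 1 of the wind-plant record above describes a simple gain-scheduling rule: when the internal signal value exceeds the saturation value, the controller gain (Kgs) is raised to a first gain value so that the controlled electrical parameter ramps with a shorter rise time. A minimal sketch of that comparison step follows; all function names and numeric values are chosen for illustration only and are not taken from the patent.

```python
# Illustrative sketch of the gain-scheduling rule in claim 1:
# a saturated internal signal triggers a higher controller gain.
# Names and values here are hypothetical, not from the patent text.

def update_gain(internal_signal: float, saturation: float,
                kgs_base: float, kgs_first: float) -> float:
    """Return the controller gain (Kgs) for the current sample."""
    if internal_signal > saturation:
        # Internal signal exceeds the saturation value:
        # boost the gain to decrease the rise time of the slope.
        return kgs_first
    return kgs_base

# A signal within limits keeps the base gain; a saturated signal
# switches the controller to the higher first gain value.
print(update_gain(0.8, 1.0, kgs_base=1.0, kgs_first=2.5))  # 1.0
print(update_gain(1.2, 1.0, kgs_base=1.0, kgs_first=2.5))  # 2.5
```

The dependent claims layer further trigger states and thresholds onto this same comparison, each selecting a progressively different gain value.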
5,908 | 5,908 | 15,001,305 | 2,159 | A computer-implemented method includes receiving one or more log files. Each of the one or more log files includes one or more logs. The computer-implemented method further includes extracting one or more event records from said one or more logs. The computer-implemented method further includes, for each event record of the one or more event records, determining one or more attributes and one or more dimensions based on the event record, respectively. The computer-implemented method further includes grouping the one or more event records into one or more attribute groups. The computer-implemented method further includes ordering the one or more event records of each of the one or more attribute groups by the one or more dimensions. The computer-implemented method further includes generating one or more graphical representations of the one or more attribute groups. A corresponding computer system and computer program product are also disclosed. | 1. A computer-implemented method comprising:
receiving one or more log files, each of said one or more log files comprising one or more logs; extracting one or more event records from said one or more logs; for each event record of said one or more event records:
determining one or more attributes based on said event record; and
determining one or more dimensions based on said event record;
grouping said one or more event records into one or more attribute groups; ordering said one or more event records of each said one or more attribute groups by said one or more dimensions; and generating one or more graphical representations of said one or more attribute groups. 2. The computer-implemented method of claim 1, further comprising normalizing said one or more event records. 3. The computer-implemented method of claim 1, wherein at least one of said one or more graphical representations comprises a scatter plot, said scatter plot displaying data based on at least one of said one or more event records, at least one of said one or more attributes, and at least one of said one or more dimensions. 4. The computer-implemented method of claim 3, wherein said scatter plot identifies at least one pattern event selected from the group consisting of:
(a) repeated events; (b) periodical events; (c) missing events; (d) abnormal events; and (e) related events. 5. The computer-implemented method of claim 3, wherein said scatter plot is modified by filtering said one or more event records based on one or more filter criteria. 6. The computer-implemented method of claim 1, wherein at least one of said one or more graphical representations displays details of at least one of said one or more event records. 7. A computer program product, the computer program product comprising one or more computer readable storage media and program instructions stored on said one or more computer readable storage media, said program instructions comprising instructions to:
receive one or more log files, each of said one or more log files comprising one or more logs; extract one or more event records from said one or more logs; for each event record of said one or more event records:
determine one or more attributes based on said event record; and
determine one or more dimensions based on said event record;
group said one or more event records into one or more attribute groups; order said one or more event records of each said one or more attribute groups by said one or more dimensions; and generate one or more graphical representations of said one or more attribute groups. 8. The computer program product of claim 7, further comprising normalizing said one or more event records. 9. The computer program product of claim 7, wherein at least one of said one or more graphical representations comprises a scatter plot, said scatter plot displaying data based on at least one of said one or more event records, at least one of said one or more attributes, and at least one of said one or more dimensions. 10. The computer program product of claim 9, wherein said scatter plot identifies at least one pattern event selected from the group comprising:
(a) repeated events; (b) periodical events; (c) missing events; (d) abnormal events; and (e) related events. 11. The computer program product of claim 9, wherein said scatter plot is modified by filtering said one or more event records based on one or more filter criteria. 12. The computer program product of claim 7, wherein at least one of said one or more graphical representations displays details of at least one of said one or more event records. 13. A computer system, the computer system comprising:
one or more computer processors; one or more computer readable storage media; computer program instructions; said computer program instructions being stored on said one or more computer readable storage media; said computer program instructions comprising instructions to:
receive one or more log files, each of said one or more log files comprising one or more logs;
extract one or more event records from said one or more logs;
for each event record of said one or more event records:
determine one or more attributes based on said event record; and
determine one or more dimensions based on said event record;
group said one or more event records into one or more attribute groups;
order said one or more event records of each said one or more attribute groups by said one or more dimensions; and
generate one or more graphical representations of said one or more attribute groups. 14. The computer system of claim 13, further comprising normalizing said one or more event records. 15. The computer system of claim 13, wherein at least one of said one or more graphical representations comprises a scatter plot, said scatter plot displaying data based on at least one of said one or more event records, at least one of said one or more attributes, and at least one of said one or more dimensions. 16. The computer system of claim 15, wherein said scatter plot identifies at least one pattern event selected from the group comprising:
(a) repeated events; (b) periodical events; (c) missing events; (d) abnormal events; and (e) related events. 17. The computer system of claim 15, wherein said scatter plot is modified by filtering said one or more event records based on one or more filter criteria. 18. The computer system of claim 13, wherein at least one of said one or more graphical representations displays details of at least one of said one or more event records. | A computer-implemented method includes receiving one or more log files. Each of the one or more log files includes one or more logs. The computer-implemented method further includes extracting one or more event records from said one or more logs. The computer-implemented method further includes, for each event record of the one or more event records, determining one or more attributes and one or more dimensions based on the event record, respectively. The computer-implemented method further includes grouping the one or more event records into one or more attribute groups. The computer-implemented method further includes ordering the one or more event records of each of the one or more attribute groups by the one or more dimensions. The computer-implemented method further includes generating one or more graphical representations of the one or more attribute groups. A corresponding computer system and computer program product are also disclosed.1. A computer-implemented method comprising:
receiving one or more log files, each of said one or more log files comprising one or more logs; extracting one or more event records from said one or more logs; for each event record of said one or more event records:
determining one or more attributes based on said event record; and
determining one or more dimensions based on said event record;
grouping said one or more event records into one or more attribute groups; ordering said one or more event records of each said one or more attribute groups by said one or more dimensions; and generating one or more graphical representations of said one or more attribute groups. 2. The computer-implemented method of claim 1, further comprising normalizing said one or more event records. 3. The computer-implemented method of claim 1, wherein at least one of said one or more graphical representations comprises a scatter plot, said scatter plot displaying data based on at least one of said one or more event records, at least one of said one or more attributes, and at least one of said one or more dimensions. 4. The computer-implemented method of claim 3, wherein said scatter plot identifies at least one pattern event selected from the group consisting of:
(a) repeated events; (b) periodical events; (c) missing events; (d) abnormal events; and (e) related events. 5. The computer-implemented method of claim 3, wherein said scatter plot is modified by filtering said one or more event records based on one or more filter criteria. 6. The computer-implemented method of claim 1, wherein at least one of said one or more graphical representations displays details of at least one of said one or more event records. 7. A computer program product, the computer program product comprising one or more computer readable storage media and program instructions stored on said one or more computer readable storage media, said program instructions comprising instructions to:
receive one or more log files, each of said one or more log files comprising one or more logs; extract one or more event records from said one or more logs; for each event record of said one or more event records:
determine one or more attributes based on said event record; and
determine one or more dimensions based on said event record;
group said one or more event records into one or more attribute groups; order said one or more event records of each said one or more attribute groups by said one or more dimensions; and generate one or more graphical representations of said one or more attribute groups. 8. The computer program product of claim 7, further comprising normalizing said one or more event records. 9. The computer program product of claim 7, wherein at least one of said one or more graphical representations comprises a scatter plot, said scatter plot displaying data based on at least one of said one or more event records, at least one of said one or more attributes, and at least one of said one or more dimensions. 10. The computer program product of claim 9, wherein said scatter plot identifies at least one pattern event selected from the group comprising:
(a) repeated events; (b) periodical events; (c) missing events; (d) abnormal events; and (e) related events. 11. The computer program product of claim 9, wherein said scatter plot is modified by filtering said one or more event records based on one or more filter criteria. 12. The computer program product of claim 7, wherein at least one of said one or more graphical representations displays details of at least one of said one or more event records. 13. A computer system, the computer system comprising:
one or more computer processors; one or more computer readable storage media; computer program instructions; said computer program instructions being stored on said one or more computer readable storage media; said computer program instructions comprising instructions to:
receive one or more log files, each of said one or more log files comprising one or more logs;
extract one or more event records from said one or more logs;
for each event record of said one or more event records:
determine one or more attributes based on said event record; and
determine one or more dimensions based on said event record;
group said one or more event records into one or more attribute groups;
order said one or more event records of each said one or more attribute groups by said one or more dimensions; and
generate one or more graphical representations of said one or more attribute groups. 14. The computer system of claim 13, further comprising normalizing said one or more event records. 15. The computer system of claim 13, wherein at least one of said one or more graphical representations comprises a scatter plot, said scatter plot displaying data based on at least one of said one or more event records, at least one of said one or more attributes, and at least one of said one or more dimensions. 16. The computer system of claim 15, wherein said scatter plot identifies at least one pattern event selected from the group comprising:
(a) repeated events; (b) periodical events; (c) missing events; (d) abnormal events; and (e) related events. 17. The computer system of claim 15, wherein said scatter plot is modified by filtering said one or more event records based on one or more filter criteria. 18. The computer system of claim 13, wherein at least one of said one or more graphical representations displays details of at least one said one or more event records. | 2,100 |
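Claim 1 of the log-analysis record above walks through a concrete pipeline: extract event records from log lines, determine an attribute and a dimension per record, group records by attribute, and order each group by dimension before generating graphical representations. A rough sketch of that flow, assuming a hypothetical "timestamp attribute ..." line format (the claims do not specify any log format):

```python
# Minimal sketch of the grouping/ordering steps in claim 1.
# The log line format and field names are hypothetical assumptions.

from collections import defaultdict

def group_events(log_lines):
    """Group event records by attribute, each group ordered by dimension."""
    groups = defaultdict(list)
    for line in log_lines:
        # Assumed format: "<timestamp> <attribute> ...": the timestamp
        # serves as the dimension, the event type as the attribute.
        timestamp, attribute = line.split()[:2]
        groups[attribute].append({"dimension": timestamp,
                                  "attribute": attribute})
    for records in groups.values():
        records.sort(key=lambda r: r["dimension"])  # order by dimension
    return dict(groups)

logs = ["10:02 login user=a", "10:01 login user=b", "10:03 error disk"]
grouped = group_events(logs)
print([r["dimension"] for r in grouped["login"]])  # ['10:01', '10:02']
```

Each resulting group would then feed a plot such as the scatter plot of the dependent claims, with attributes and dimensions as the plotted axes.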
5,909 | 5,909 | 15,130,448 | 2,132 | A storage device may include at least one memory device logically divided into a plurality of blocksets and a controller. The controller may be configured to receive a command to execute a garbage collection operation on a first blockset of the plurality of blocksets. The controller may be further configured to determine, based on a validity table stored in a non-volatile memory, whether data stored at a first block of the first blockset is valid, cause the data from the first block to be written to a second block of a second blockset of the plurality of blocksets, and modify the validity table to indicate that data stored in the first block is invalid and to indicate that data stored in the second block is valid. | 1. A method comprising:
receiving, by a controller of a storage device, a command to execute a garbage collection operation on a first blockset of the storage device, the first blockset comprising at least a first block associated with a first physical block address of the storage device, and in response to receiving the command to execute the garbage collection operation for the first blockset:
determining, by the controller and based on a validity table stored in a non-volatile memory, whether data stored at the first block of the first blockset is valid;
in response to determining that the data stored in the first block of the first blockset is valid, causing, by the controller, the data from the first block to be written to a second block of a second blockset of the storage device; and
in response to causing the data from the first block to be written to the second block, modifying, by the controller, the validity table to indicate that data stored in the first block is invalid and to indicate that data stored in the second block is valid. 2. The method of claim 1, wherein the controller comprises a hardware accelerator engine, and
wherein determining whether the data stored at the first block is valid comprises determining, by the hardware accelerator engine, whether the data stored at the first block is valid. 3. The method of claim 2, further comprising:
in response to determining that the data stored at the first block is valid, outputting, by the hardware accelerator engine, the first physical block address associated with the first block, wherein writing the data is further in response to outputting the first physical block address associated with the first block. 4. The method of claim 2, further comprising:
in response to determining that the data stored at the first block is valid, causing, by the hardware accelerator engine, reading of the data at the first block; and outputting, by the hardware accelerator engine and based on the data read from the first block, a logical block address associated with the first block. 5. The method of claim 4, further comprising:
in response to outputting the logical block address associated with the first block and causing the data from the first block to be written to the second block, updating, by the controller, an indirection table to indicate that the data associated with the logical block address is stored at the second block. 6. The method of claim 1, comprising:
wherein determining whether data stored at the first block of the first blockset is valid comprises:
determining, by the controller, a validity value mapped by the validity table to the first physical block address associated with the first block; and
determining, by the controller, that the data stored at the first block is valid based on the validity value indicating a valid value. 7. The method of claim 1, comprising:
in response to receiving the command to execute the garbage collection operation for the first blockset, determining, by the controller and based on the validity table stored in the non-volatile memory, whether data stored at each block of the first blockset is valid. 8. A storage device comprising:
at least one memory device logically divided into a plurality of blocksets; and a controller configured to:
receive a command to execute a garbage collection operation on a first blockset of the plurality of blocksets, the first blockset comprising at least a first block associated with a first physical block address of the storage device, and
in response to receiving the command to execute the garbage collection operation for the first blockset:
determine, based on a validity table stored in a non-volatile memory, whether data stored at the first block of the first blockset is valid;
in response to determining that the data stored in the first block of the first blockset is valid, cause the data from the first block to be written to a second block of a second blockset of the plurality of blocksets; and
in response to causing the data from the first block to be written to the second block, modify the validity table to indicate that data stored in the first block is invalid and to indicate that data stored in the second block is valid. 9. The storage device of claim 8, wherein the controller comprises a hardware accelerator engine, wherein the hardware accelerator engine is configured to:
determine whether the data stored at the first block is valid. 10. The storage device of claim 9, wherein the hardware accelerator engine is further configured to:
in response to determining that the data stored at the first block is valid, output the first physical block address associated with the first block, wherein writing the data is further in response to outputting the first physical block address associated with the first block. 11. The storage device of claim 10, wherein the hardware accelerator engine is further configured to:
in response to determining that the data stored at the first block is valid, cause reading of the data at the first block; and output, based on the data read from the first block, a logical block address associated with the first block. 12. The storage device of claim 11, wherein the controller is further configured to:
in response to outputting the logical block address associated with the first block and causing the data from the first block to be written to the second block, updating an indirection table to indicate that the data associated with the logical block address is stored at the second block. 13. The storage device of claim 8, wherein the controller is further configured to:
determine a validity value mapped by the validity table to the first physical block address associated with the first block; and determine that the data stored at the first block is valid based on the validity value indicating a valid value. 14. The storage device of claim 13, wherein the validity value is a single bit. 15. A computer-readable storage medium comprising instructions that, when executed, configure one or more processors of a storage device to:
receive a command to execute a garbage collection operation on a first blockset of the storage device, the first blockset comprising at least a first block associated with a first physical block address of the storage device, and in response to receiving the command to execute the garbage collection operation for the first blockset:
determine, based on a validity table stored in a non-volatile memory, whether data stored at the first block of the first blockset is valid;
in response to determining that the data stored in the first block of the first blockset is valid, cause the data from the first block to be written to a second block of a second blockset of the storage device; and
in response to causing the data from the first block to be written to the second block, modify the validity table to indicate that data stored in the first block is invalid and to indicate that data stored in the second block is valid. 16. The computer-readable storage medium of claim 15, further comprising instructions that, when executed, configure one or more processors of the storage device to:
determine a validity value mapped by the validity table to the first physical block address associated with the first block; and determine that the data stored at the first block is valid based on the validity value indicating a valid value. 17. The computer-readable storage medium of claim 15, wherein the validity value is a single bit. 18. A system comprising:
means for receiving a command to execute a garbage collection operation on a first blockset of the storage device, the first blockset comprising at least a first block associated with a first physical block address of the storage device; means for determining, based on a validity table stored in a non-volatile memory, whether data stored at the first block of the first blockset is valid; means for causing the data from the first block to be written to a second block of a second blockset of the storage device in response to determining that the data stored in the first block of the first blockset is valid; and means for modifying the validity table to indicate that data stored in the first block is invalid and to indicate that data stored in the second block is valid in response to causing the data from the first block to be written to the second block. 19. The system of claim 18, further comprising:
means for outputting the first physical block address associated with the first block in response to determining that the data stored at the first block is valid, wherein writing the data is further in response to outputting the first physical block address associated with the first block. 20. The system of claim 18, further comprising:
means for reading of the data at the first block in response to determining that the data stored at the first block is valid; and means for outputting, based on the data read from the first block, a logical block address associated with the first block. | 2,100
5,910 | 5,910 | 15,591,309 | 2,161 | Systems, methods, and computer program products for extracting data from images related to travel accommodation, and performing a search of travel accommodation based on criteria entered by a user. The system collects images related to travel accommodation by querying data sources including images associated with the travel accommodation, processes the images so as to extract an identifying characteristic of the travel accommodation, and represents the identifying characteristic in the form of searchable text keywords and stores the searchable text keywords in the database. The system may receive a user request including travel accommodation-specific criteria via a user interface, search the database for searchable text keywords matching the criteria in the request, and cause the display of travel accommodations represented by the searchable text keywords on the user interface. | 1. A system comprising:
a semantic feeder apparatus including: an image collector module configured, for each of a plurality of travel accommodations, to query at least one data source comprising images associated with the travel accommodation, and to collect at least one image related to the travel accommodation; an image processor module configured to process each of the at least one image so as to extract an identifying characteristic of the travel accommodation, to convert the identifying characteristic to text, and to correlate the text to at least one searchable text keyword using a thesaurus; and an output module configured, for the at least one searchable text keyword, to store an association between the travel accommodation, the at least one image related to the travel accommodation, and the text converted from the identifying characteristic of the travel accommodation in a database. 2. The system of claim 1 comprising:
a semantic feeder database for storing data output by the output module,
wherein the system is configured to receive a user request including travel accommodation-specific criteria, search the semantic feeder database for searchable text keywords matching the criteria in the user request, and communicate a list of travel accommodations represented by the searchable text keywords to be displayed on a user interface. 3. The system of claim 2 comprising:
a search reporting component, the search reporting component including a first database configured to store travel accommodation-specific criteria specified by a user for searching a second database of image-related text keywords and, for each set of search criteria, store a number of search results, and a module configured to generate reports including the searches performed over a predetermined period of time,
wherein at least one report is based on the number of results output from searches over the predetermined period of time and concerns missing content in the images related to the travel accommodation. 4. The system of claim 2 wherein the system is configured to activate the semantic feeder apparatus during:
(i) a time of a new image being made available in a data source,
(ii) a time of a new user request, or
(iii) a predetermined timing. 5. The system of claim 2 wherein the semantic feeder database is configured to store data related to hotels, and the semantic feeder apparatus is configured to be in direct communication with at least one hotel reservation system. 6. The system of claim 5 wherein the system is configured, prior to the activation of the semantic feeder apparatus, to:
update the hotel data stored in the semantic feeder database with hotel data stored in at least one database of the hotel reservation system, and
for each hotel, store a pointer in the semantic feeder database to the hotel description in the hotel reservation system. 7. The system of claim 5 comprising:
a data quality reporting component configured to:
determine the data fields that are common to the semantic feeder database and the hotel reservation system database,
compare, for every data field that is common to the semantic feeder database and the hotel reservation system database, the value in both databases,
quantify the compatibility of the values, generate a report on the compatibility between the values, and
raise alerts in the case of discrepancies between the values. 8. The system of claim 7 wherein the data quality reporting component is further configured to:
delete, for a data field that is common to the semantic feeder database and the hotel reservation system database, the data from the semantic feeder database if it is determined to be not compatible with the data in the hotel reservation database. 9. The system of claim 7 wherein the data quality reporting component is further configured to:
determine the priority of either the database of the hotel reservation system or the semantic feeder database regarding a data field common to the database of the hotel reservation system and the semantic feeder database; and
direct the user request to the prioritized database. 10. The system of claim 2 comprising:
at least one client device including the user interface, the at least one client device being connected via a network to the semantic feeder database,
wherein the user request is received via the user interface. 11. A method comprising:
querying, for each of a plurality of travel accommodations, at least one data source comprising images associated with the travel accommodation in order to collect at least one image related to the travel accommodation; processing the at least one image so as to extract an identifying characteristic of the travel accommodation; converting the identifying characteristic to text; correlating the text to at least one searchable text keyword using a thesaurus; and storing, for the at least one searchable text keyword, an association between the travel accommodation, the at least one image related to the travel accommodation, and the text converted from the identifying characteristic of the travel accommodation in a database. 12. The method of claim 11 further comprising:
receiving a user request comprising travel accommodation-specific criteria via a user interface;
searching the semantic feeder database for searchable text keywords matching the criteria in the user request; and
communicating the travel accommodation represented by the searchable text keywords over a network to at least one client device for display on a user interface of the at least one client device. 13. The method of claim 12 further comprising:
storing, in a first database, travel accommodation-specific criteria specified by a user for searching a second database of image-related text keywords;
for each set of search criteria, storing a number of search results in the second database; and
generating reports concerning the searches performed over a predetermined period of time,
wherein at least one report is based on the number of results output from searches over the predetermined period of time and concerns missing content in the images related to the travel accommodation. 14. The method of claim 12 further comprising:
determining, for a given hotel reservation system, data fields that are common to the semantic feeder database and the hotel reservation system database;
comparing, for every data field that is common to the semantic feeder database and the hotel reservation system database, corresponding values in both databases;
quantifying the compatibility of the values;
generating a report on the compatibility between the values in a data field common to the semantic feeder database and hotel reservation database; and
raising alerts in the case of discrepancies between the values. 15. The method of claim 14 further comprising:
deleting, for a data field that is common to the semantic feeder database and the hotel reservation system database, the data from the semantic feeder database if it is determined not to be compatible with the data in the hotel reservation database. 16. The method of claim 15 further comprising:
when a data field is common to the database of the hotel reservation system and the semantic feeder database, prioritizing either the database of the hotel reservation system or the semantic feeder database; and
directing the user request to the prioritized database. 17. A computer program product comprising:
a non-transitory computer readable storage medium; and instructions stored on the non-transitory computer readable storage medium that, upon execution by one or more processors, cause the one or more processors to: query, for each of a plurality of travel accommodations, at least one data source comprising images associated with the travel accommodation in order to collect at least one image related to the travel accommodation; process the at least one image so as to extract an identifying characteristic of the travel accommodation; convert the identifying characteristic to text; correlate the text to at least one searchable text keyword using a thesaurus; and store, for the at least one searchable text keyword, an association between the travel accommodation, the at least one image related to the travel accommodation, and the text converted from the identifying characteristic of the travel accommodation in a database. | Systems, methods, and computer program products for extracting data from images related to travel accommodation, and performing a search of travel accommodation based on criteria entered by a user. The system collects images related to travel accommodation by querying data sources including images associated with the travel accommodation, processes the images so as to extract an identifying characteristic of the travel accommodation, and represents the identifying characteristic in the form of searchable text keywords and stores the searchable text keywords in the database. The system may receive a user request including travel accommodation-specific criteria via a user interface, search the database for searchable text keywords matching the criteria in the request, and cause the display of travel accommodations represented by the searchable text keywords on the user interface.1. A system comprising:
a semantic feeder apparatus including: an image collector module configured, for each of a plurality of travel accommodations, to query at least one data source comprising images associated with the travel accommodation, and to collect at least one image related to the travel accommodation; an image processor module configured to process each of the at least one image so as to extract an identifying characteristic of the travel accommodation, to convert the identifying characteristic to text, and to correlate the text to at least one searchable text keyword using a thesaurus; and an output module configured, for the at least one searchable text keyword, to store an association between the travel accommodation, the at least one image related to the travel accommodation, and the text converted from the identifying characteristic of the travel accommodation in a database. 2. The system of claim 1 comprising:
a semantic feeder database for storing data output by the output module,
wherein the system is configured to receive a user request including travel accommodation-specific criteria, search the semantic feeder database for searchable text keywords matching the criteria in the user request, and communicate a list of travel accommodations represented by the searchable text keywords to be displayed on a user interface. 3. The system of claim 2 comprising:
a search reporting component, the search reporting component including a first database configured to store travel accommodation-specific criteria specified by a user for searching a second database of image-related text keywords and, for each set of search criteria, store a number of search results, and a module configured to generate reports including the searches performed over a predetermined period of time,
wherein at least one report is based on the number of results output from searches over the predetermined period of time and concerns missing content in the images related to the travel accommodation. 4. The system of claim 2 wherein the system is configured to activate the semantic feeder apparatus during:
(i) a time of a new image being made available in a data source,
(ii) a time of a new user request, or
(iii) a predetermined timing. 5. The system of claim 2 wherein the semantic feeder database is configured to store data related to hotels, and the semantic feeder apparatus is configured to be in direct communication with at least one hotel reservation system. 6. The system of claim 5 wherein the system is configured, prior to the activation of the semantic feeder apparatus, to:
update the hotel data stored in the semantic feeder database with hotel data stored in at least one database of the hotel reservation system, and
for each hotel, store a pointer in the semantic feeder database to the hotel description in the hotel reservation system. 7. The system of claim 5 comprising:
a data quality reporting component configured to:
determine the data fields that are common to the semantic feeder database and the hotel reservation system database,
compare, for every data field that is common to the semantic feeder database and the hotel reservation system database, the value in both databases,
quantify the compatibility of the values, generate a report on the compatibility between the values, and
raise alerts in the case of discrepancies between the values. 8. The system of claim 7 wherein the data quality reporting component is further configured to:
delete, for a data field that is common to the semantic feeder database and the hotel reservation system database, the data from the semantic feeder database if it is determined not to be compatible with the data in the hotel reservation database. 9. The system of claim 7 wherein the data quality reporting component is further configured to:
determine the priority of either the database of the hotel reservation system or the semantic feeder database regarding a data field common to the database of the hotel reservation system and the semantic feeder database; and
direct the user request to the prioritized database. 10. The system of claim 2 comprising:
at least one client device including the user interface, the at least one client device being connected via a network to the semantic feeder database,
wherein the user request is received via the user interface. 11. A method comprising:
querying, for each of a plurality of travel accommodations, at least one data source comprising images associated with the travel accommodation in order to collect at least one image related to the travel accommodation; processing the at least one image so as to extract an identifying characteristic of the travel accommodation; converting the identifying characteristic to text; correlating the text to at least one searchable text keyword using a thesaurus; and storing, for the at least one searchable text keyword, an association between the travel accommodation, the at least one image related to the travel accommodation, and the text converted from the identifying characteristic of the travel accommodation in a database. 12. The method of claim 11 further comprising:
receiving a user request comprising travel accommodation-specific criteria via a user interface;
searching the semantic feeder database for searchable text keywords matching the criteria in the user request; and
communicating the travel accommodation represented by the searchable text keywords over a network to at least one client device for display on a user interface of the at least one client device. 13. The method of claim 12 further comprising:
storing, in a first database, travel accommodation-specific criteria specified by a user for searching a second database of image-related text keywords;
for each set of search criteria, storing a number of search results in the second database; and
generating reports concerning the searches performed over a predetermined period of time,
wherein at least one report is based on the number of results output from searches over the predetermined period of time and concerns missing content in the images related to the travel accommodation. 14. The method of claim 12 further comprising:
determining, for a given hotel reservation system, data fields that are common to the semantic feeder database and the hotel reservation system database;
comparing, for every data field that is common to the semantic feeder database and the hotel reservation system database, corresponding values in both databases;
quantifying the compatibility of the values;
generating a report on the compatibility between the values in a data field common to the semantic feeder database and hotel reservation database; and
raising alerts in the case of discrepancies between the values. 15. The method of claim 14 further comprising:
deleting, for a data field that is common to the semantic feeder database and the hotel reservation system database, the data from the semantic feeder database if it is determined not to be compatible with the data in the hotel reservation database. 16. The method of claim 15 further comprising:
when a data field is common to the database of the hotel reservation system and the semantic feeder database, prioritizing either the database of the hotel reservation system or the semantic feeder database; and
directing the user request to the prioritized database. 17. A computer program product comprising:
a non-transitory computer readable storage medium; and instructions stored on the non-transitory computer readable storage medium that, upon execution by one or more processors, cause the one or more processors to: query, for each of a plurality of travel accommodations, at least one data source comprising images associated with the travel accommodation in order to collect at least one image related to the travel accommodation; process the at least one image so as to extract an identifying characteristic of the travel accommodation; convert the identifying characteristic to text; correlate the text to at least one searchable text keyword using a thesaurus; and store, for the at least one searchable text keyword, an association between the travel accommodation, the at least one image related to the travel accommodation, and the text converted from the identifying characteristic of the travel accommodation in a database. | 2,100 |
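The data-quality reporting of claims 7 and 14 above (determine common data fields, compare values, quantify compatibility, raise alerts on discrepancies) can be sketched in a few lines. This is a hypothetical illustration only: the field names, record shapes, and the compatibility ratio are assumptions, not taken from the application.

```python
# Hypothetical sketch of the data-quality comparison in claims 7 and 14:
# find fields common to two records, compare values, and report discrepancies.
# Field names and the compatibility metric are illustrative assumptions.

def compare_common_fields(feeder_record, reservation_record):
    """Return (compatibility ratio, list of discrepant field names)."""
    common = set(feeder_record) & set(reservation_record)
    discrepancies = [f for f in sorted(common)
                     if feeder_record[f] != reservation_record[f]]
    ratio = 1.0 if not common else (len(common) - len(discrepancies)) / len(common)
    return ratio, discrepancies

feeder = {"name": "Hotel Azul", "stars": 4, "pool": True}
reservation = {"name": "Hotel Azul", "stars": 5, "pool": True, "rooms": 120}
ratio, bad = compare_common_fields(feeder, reservation)
# Fields unique to one database (here "rooms") are ignored; only common
# fields are compared, matching the "common data field" language of claim 7.
```

An alerting component as in claim 7 would then flag any record where `bad` is non-empty, and claim 8's deletion step would remove those fields from the semantic feeder database.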
5,911 | 5,911 | 14,924,943 | 2,121 | A method, system, and recording medium for cognitive intention detection, including displaying one or more options for automated workflow based on a learned association with a user input, selecting an option of the one or more options for automated workflow, and automating a workflow based on the option selected in the selecting. | 1. A cognitive intention detection method, comprising:
displaying one or more options for automated workflow based on a learned association with a user input; selecting an option of the one or more options for automated workflow; and automating a workflow based on the option selected in the selecting. 2. The method of claim 1, wherein the displaying is based on the learned association with the user input in relationship to a baseline dataset. 3. The method of claim 2, wherein the baseline dataset comprises a list of available functions categorized into a plurality of categories such that each category of the plurality of categories includes the learned association with the user input and is associated with a specific workflow. 4. The method of claim 1, further comprising transforming the user input by:
stripping out stop words and auxiliary verbs; stemming the words after the stripping; and adding synonyms of remaining words to the user input. 5. The method of claim 1, further comprising, prior to the displaying, dynamically determining the one or more options for automated workflow based on the learned association with the user input. 6. The method of claim 1, further comprising:
after the selecting, training the learned association with the user input based on the option selected in the selecting. 7. The method of claim 1, further comprising:
after the selecting, training the learned association with the user input based on the option not selected in the selecting. 8. The method of claim 1, further comprising, if a second option of the one or more options is not selected, training the displaying to display the second option below the option of the one or more options for automated workflow. 9. The method of claim 2, further comprising, after the selecting, training the baseline dataset to determine an updated list of the one or more options such that the displaying displays the updated one or more options for automated workflow according to user action data in the selecting. 10. The method of claim 1, wherein the displaying displays the one or more options in a side panel. 11. The method of claim 1, wherein the one or more options for automated workflow displayed in the displaying is dynamically updated based on a change in the user input. 12. The method of claim 1, further comprising, during the selecting, previewing the one or more options selected via a text description. 13. The method of claim 1, further comprising determining an intention of the user input according to a semantic content and a sequence of items within the user input. 14. The method of claim 1, wherein the one or more options for automated workflow are displayed by the displaying so as to be semi-transparent. 15. The method of claim 1, further comprising determining the one or more options to be displayed by the displaying based on the learned association with the user input in relationship to a baseline dataset. 16. A non-transitory computer-readable recording medium recording a cognitive intention detection program, the program causing a computer to perform:
displaying one or more options for automated workflow based on a learned association with a user input; selecting an option of the one or more options for automated workflow; and automating a workflow based on the option selected in the selecting. 17. The non-transitory computer-readable recording medium of claim 16, wherein the displaying is based on the learned association with the user input in relationship to a baseline dataset. 18. The non-transitory computer-readable recording medium of claim 17, wherein the baseline dataset comprises a list of available functions categorized into a plurality of categories such that each category of the plurality of categories includes the learned association with the user input and is associated with a specific workflow. 19. The non-transitory computer-readable recording medium of claim 16, further comprising determining the one or more options to be displayed by the displaying based on the learned association with the user input in relationship to a baseline dataset. 20. A system for cognitive intention detection, comprising
a display device configured to display one or more options for automated workflow based on a learned association with a user input; a selection device configured to select an option of the one or more options for automated workflow; and a workflow automation device configured to automate a workflow based on the option selected by the selection device. | 2,100
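Claim 4 of application 14,924,943 above describes a concrete input transformation: strip stop words and auxiliary verbs, stem the remainder, and add synonyms. A minimal sketch follows; the word lists, the suffix-stripping stemmer, and the synonym table are all illustrative assumptions (the claim names the steps, not any particular lexicon or stemming algorithm).

```python
# Sketch of the claim-4 transformation: strip stop words and auxiliary verbs,
# stem what remains, then expand with synonyms. All word lists are assumptions.

STOP_WORDS = {"the", "a", "an", "to", "of"}
AUXILIARY_VERBS = {"is", "are", "was", "be", "can", "will"}
SYNONYMS = {"send": ["transmit"], "file": ["document"]}  # hypothetical table

def stem(word):
    # Crude suffix stripper standing in for a real stemmer.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def transform_input(text):
    words = [w for w in text.lower().split()
             if w not in STOP_WORDS and w not in AUXILIARY_VERBS]
    stems = [stem(w) for w in words]
    expanded = list(stems)
    for w in stems:
        expanded.extend(SYNONYMS.get(w, []))
    return expanded

result = transform_input("send the files to a manager")
```

The expanded term list would then feed the learned association that ranks the workflow options displayed in claim 1.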
5,912 | 5,912 | 14,823,643 | 2,143 | A variation testing system environment for performing variation testing of web pages and applications is disclosed. Users requesting a view from a content provider are not randomly assigned to one of a plurality of variations of the view. Rather, a function is applied to each user's identifier in order to determine which variation of the view is provided to a client device of the user. | 1. A computer-implemented method for determining a variation of a web page to provide to a client device, the method comprising:
receiving, from an application of a client device, an indication of a request by the client device for a web page, the web page associated with a plurality of different variations of the web page; determining an identifier associated with the client device; applying a function to the identifier to generate an assignment identifier for the client device; selecting a variation of the web page from the plurality of different variations of the web page that corresponds to the assignment identifier; and wherein the selected variation of the web page is provided to the client device. 2. The computer-implemented method of claim 1, wherein applying the function comprises:
generating a hash of the identifier using the function; normalizing the hash of the identifier to generate the assignment identifier. 3. The computer-implemented method of claim 2, wherein the function applied to the identifier is a deterministic, uniform and pseudorandom number generator. 4. The computer-implemented method of claim 3, further comprising:
determining an identifier associated with a variation test for the web page that includes the plurality of different variations of the web page. 5. The computer-implemented method of claim 4, wherein applying the function comprises:
applying the deterministic, uniform and pseudorandom generator to a concatenation of the identifier associated with the client device and the identifier associated with the variation test for the web page to generate the hash of the identifier. 6. The computer-implemented method of claim 5, wherein selecting the variation of the web page comprises:
identifying a policy for the web page that describes which variation of the web page to provide based on assignment identifiers; comparing the assignment identifier to the policy; and selecting the variation of the web page to provide to the client device based on the comparison. 7. The computer-implemented method of claim 6, wherein the plurality of different variations of the web page comprise only a first variation of the web page and a second variation of the web page, and the policy includes a threshold indicative of whether to provide the first variation or the second variation to the client device, and wherein comparing the assignment identifier to the policy comprises:
comparing the assignment identifier to the threshold; responsive to the assignment identifier being below the threshold, selecting the first variation of the web page to provide to the client device; and responsive to the assignment identifier being above the threshold, selecting the second variation of the web page to provide to the client device. 8. The computer-implemented method of claim 6, wherein comparing the assignment identifier to the policy comprises:
identifying, from the policy, a mapping of a plurality of ranges of assignment identifiers to the plurality of variations of the web page, each range of assignment identifiers associated with a different variation of the web page; identifying a range of assignment identifiers from the plurality of ranges of assignment identifiers that includes the assignment identifier; and identifying a variation of the web page that is associated with the identified range of assignment identifiers. 9. The computer-implemented method of claim 1, wherein receiving the indication of the request by the client device for the web page comprises:
receiving, from a content provider, the identifier associated with the client device, wherein the content provider receives the request for the web page from the client device. 10. The computer-implemented method of claim 1, wherein providing the selected variation of the web page comprises:
transmitting a notification to the content provider, the notification including an indication of the selected variation of the web page to provide to the client device; wherein the content provider provides the selected variation of the web page to the client device based on the notification. 11. The computer-implemented method of claim 1, wherein receiving the indication comprises:
receiving, by a computer system, the request by the client device for the web page. 12. The computer-implemented method of claim 1, wherein the application is a web browser of the client device and the client device selects the variation of the web page. 13. A computer program product comprising a non-transitory computer-readable storage medium storing executable code for determining a variation of a web page to provide to a client device, the code when executed causing a computer to perform steps comprising:
receiving, from an application of a client device, an indication of a request by the client device for a web page, the web page associated with a plurality of different variations of the web page; determining an identifier associated with the client device; applying a function to the identifier to generate an assignment identifier for the client device; selecting a variation of the web page from the plurality of different variations of the web page that corresponds to the assignment identifier; and wherein the selected variation of the web page is provided to the client device. 14. The computer program product of claim 13, wherein applying the function comprises:
generating a hash of the identifier using the function; normalizing the hash of the identifier to generate the assignment identifier. 15. The computer program product of claim 14, wherein the function applied to the identifier is a deterministic, uniform and pseudorandom number generator. 16. The computer program product of claim 15, wherein the code when executed by the computer further causes the computer to perform steps comprising:
determining an identifier associated with a variation test for the web page that includes the plurality of different variations of the web page; and applying the deterministic, uniform and pseudorandom generator to a concatenation of the identifier associated with the client device and the identifier associated with the variation test for the web page to generate the normalized hash of the identifier. 17. The computer program product of claim 16, wherein selecting the variation of the web page comprises:
identifying a policy for the web page that describes which variation of the web page to provide based on assignment identifiers; comparing the normalized assignment identifier to the policy; and selecting the variation of the web page to provide to the client device based on the comparison. 18. The computer program product of claim 17, wherein the plurality of different variations of the web page comprise only a first variation of the web page and a second variation of the web page, and the policy includes a threshold indicative of whether to provide the first variation or the second variation to the client device, and wherein comparing the generated assignment identifier to the policy comprises:
comparing the assignment identifier to the threshold; responsive to the assignment identifier being below the threshold, selecting the first variation of the web page to provide to the client device; and responsive to the assignment identifier being above the threshold, selecting the second variation of the web page to provide to the client device. 19. The computer program product of claim 17, wherein comparing the assignment identifier to the policy comprises:
identifying, from the policy, a mapping of a plurality of ranges of assignment identifiers to the plurality of variations of the web page, each range of assignment identifiers associated with a different variation of the web page; identifying a range of normalized assignment identifiers from the plurality of ranges of assignment identifiers that includes the assignment identifier; and identifying a variation of the web page that is associated with the identified range of assignment identifiers. 20. The computer program product of claim 13, wherein providing the selected variation of the web page comprises:
transmitting a notification to the content provider, the notification including an indication of the selected variation of the web page to provide to the client device; wherein the content provider provides the selected variation of the web page to the client device based on the notification. 21. A computer system for determining a variation of a web page to provide to a client device, the computer system comprising:
a computer processor; a non-transitory computer-readable storage medium storing executable code, the code when executed by the computer processor causes the computer processor to perform steps comprising:
receiving, from an application of a client device, an indication of a request by the client device for a web page, the web page associated with a plurality of different variations of the web page;
determining an identifier associated with the client device;
applying a function to the identifier to generate an assignment identifier for the client device;
selecting a variation of the web page from the plurality of different variations of the web page that corresponds to the assignment identifier; and
wherein the selected variation of the web page is provided to the client device. | A variation testing system environment for performing variation testing of web pages and applications is disclosed. Users requesting a view from a content provider are not randomly assigned to one of a plurality of variations of the view. Rather, a function is applied to each user's identifier in order to determine which variation of the view is provided to a client device of the user.1. A computer-implemented method for determining a variation of a web page to provide to a client device, the method comprising:
receiving, from an application of a client device, an indication of a request by the client device for a web page, the web page associated with a plurality of different variations of the web page; determining an identifier associated with the client device; applying a function to the identifier to generate an assignment identifier for the client device; selecting a variation of the web page from the plurality of different variations of the web page that corresponds to the assignment identifier; and wherein the selected variation of the web page is provided to the client device. 2. The computer-implemented method of claim 1, wherein applying the function comprises:
generating a hash of the identifier using the function; normalizing the hash of the identifier to generate the assignment identifier. 3. The computer-implemented method of claim 2, wherein the function applied to the identifier is a deterministic, uniform and pseudorandom number generator. 4. The computer-implemented method of claim 3, further comprising:
determining an identifier associated with a variation test for the web page that includes the plurality of different variations of the web page. 5. The computer-implemented method of claim 4, wherein applying the function comprises:
applying the deterministic, uniform and pseudorandom generator to a concatenation of the identifier associated with the client device and the identifier associated with the variation test for the web page to generate the hash of the identifier. 6. The computer-implemented method of claim 5, wherein selecting the variation of the web page comprises:
identifying a policy for the web page that describes which variation of the web page to provide based on assignment identifiers; comparing the assignment identifier to the policy; and selecting the variation of the web page to provide to the client device based on the comparison. 7. The computer-implemented method of claim 6, wherein the plurality of different variations of the web page comprise only a first variation of the web page and a second variation of the web page, and the policy includes a threshold indicative of whether to provide the first variation or the second variation to the client device, and wherein comparing the assignment identifier to the policy comprises:
comparing the assignment identifier to the threshold; responsive to the assignment identifier being below the threshold, selecting the first variation of the web page to provide to the client device; and responsive to the assignment identifier being above the threshold, selecting the second variation of the web page to provide to the client device. 8. The computer-implemented method of claim 6, wherein comparing the assignment identifier to the policy comprises:
identifying, from the policy, a mapping of a plurality of ranges of assignment identifiers to the plurality of variations of the web page, each range of assignment identifiers associated with a different variation of the web page; identifying a range of assignment identifiers from the plurality of ranges of assignment identifiers that includes the assignment identifier; and identifying a variation of the web page that is associated with the identified range of assignment identifiers. 9. The computer-implemented method of claim 1, wherein receiving the indication of the request by the client device for the web page comprises:
receiving, from a content provider, the identifier associated with the client device, wherein the content provider receives the request for the web page from the client device. 10. The computer-implemented method of claim 1, wherein providing the selected variation of the web page comprises:
transmitting a notification to the content provider, the notification including an indication of the selected variation of the web page to provide to the client device; wherein the content provider provides the selected variation of the web page to the client device based on the notification. 11. The computer-implemented method of claim 1, wherein receiving the indication comprises:
receiving, by a computer system, the request by the client device for the web page. 12. The computer-implemented method of claim 1, wherein the application is a web browser of the client device and the client device selects the variation of the web page. 13. A computer program product comprising a non-transitory computer-readable storage medium storing executable code for determining a variation of a web page to provide to a client device, the code when executed causing a computer to perform steps comprising:
receiving, from an application of a client device, an indication of a request by the client device for a web page, the web page associated with a plurality of different variations of the web page; determining an identifier associated with the client device; applying a function to the identifier to generate an assignment identifier for the client device; selecting a variation of the web page from the plurality of different variations of the web page that corresponds to the assignment identifier; and wherein the selected variation of the web page is provided to the client device. 14. The computer program product of claim 13, wherein applying the function comprises:
generating a hash of the identifier using the function; normalizing the hash of the identifier to generate the assignment identifier. 15. The computer program product of claim 14, wherein the function applied to the identifier is a deterministic, uniform and pseudorandom number generator. 16. The computer program product of claim 15, wherein the code when executed by the computer further causes the computer to perform steps comprising:
determining an identifier associated with a variation test for the web page that includes the plurality of different variations of the web page; and applying the deterministic, uniform and pseudorandomgenerator to a concatenation of the identifier associated with the client device and the identifier associated with the variation test for the web page to generate the normalized hash of the identifier. 17. The computer program product of claim 16, wherein selecting the variation of the web page comprises:
identifying a policy for the web page that describes which variation of the web page to provide based on assignment identifiers; comparing the normalized assignment identifier to the policy; and selecting the variation of the web page to provide to the client device based on the comparison. 18. The computer program product of claim 17, wherein the plurality of different variations of the web page comprise only a first variation of the web page and a second variation of the web page, and the policy includes a threshold indicative of whether to provide the first variation or the second variation to the client device, and wherein comparing the generated assignment identifier to the policy comprises:
comparing the assignment identifier to the threshold; responsive to the assignment identifier being below the threshold, selecting the first variation of the web page to provide to the client device; and responsive to the assignment identifier being above the threshold, selecting the second variation of the web page to provide to the client device. 19. The computer program product of claim 17, wherein comparing the assignment identifier to the policy comprises:
identifying, from the policy, a mapping of a plurality of ranges of assignment identifiers to the plurality of variations of the web page, each range of assignment identifiers associated with a different variation of the web page; identifying a range of normalized assignment identifiers from the plurality of ranges of assignment identifiers that includes the assignment identifier; and identifying a variation of the web page that is associated with the identified range of assignment identifiers. 20. The computer program product of claim 13, wherein providing the selected variation of the web page comprises:
transmitting a notification to the content provider, the notification including an indication of the selected variation of the web page to provide to the client device; wherein the content provider provides the selected variation of the web page to the client device based on the notification. 21. A computer system for determining a variation of a web page to provide to a client device, the computer system comprising:
a computer processor; a non-transitory computer-readable storage medium storing executable code, the code when executed by the computer processor causes the computer processor to perform steps comprising:
receiving, from an application of a client device, an indication of a request by the client device for a web page, the web page associated with a plurality of different variations of the web page;
determining an identifier associated with the client device;
applying a function to the identifier to generate an assignment identifier for the client device;
selecting a variation of the web page from the plurality of different variations of the web page that corresponds to the assignment identifier; and
wherein the selected variation of the web page is provided to the client device. | 2,100 |
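The hash-and-normalize assignment flow recited in the claims above (deterministic function over the client identifier, normalization of the hash, then selection against a policy of thresholds or ranges) can be sketched as follows. This is a minimal illustration, not the patented implementation: the choice of SHA-256 as the deterministic, uniform, pseudorandom function, the 8-byte normalization, and the example two-variation policy with a 0.5 threshold are all assumptions.

```python
import hashlib


def assignment_identifier(client_id: str, test_id: str) -> float:
    """Deterministic, uniform, pseudorandom assignment in [0.0, 1.0).

    Hashes the concatenation of the client-device identifier and the
    variation-test identifier, then normalizes the digest.
    """
    digest = hashlib.sha256((client_id + test_id).encode()).digest()
    # Interpret the first 8 bytes as an integer and normalize to a unit interval.
    return int.from_bytes(digest[:8], "big") / 2 ** 64


def select_variation(assignment: float, ranges: list) -> str:
    """Map the normalized assignment identifier onto a policy: a list of
    (low, high, variation) ranges, each range owning a different variation."""
    for low, high, variation in ranges:
        if low <= assignment < high:
            return variation
    raise ValueError(f"policy does not cover assignment {assignment}")


# Two-variation policy with an assumed 0.5 threshold, expressed as ranges.
policy = [(0.0, 0.5, "A"), (0.5, 1.0, "B")]
a = assignment_identifier("device-123", "homepage-test")
# The same device + test pair always maps to the same variation.
print(select_variation(a, policy))
```

Because the hash is deterministic, no per-device assignment needs to be stored: recomputing the identifier on every request reproduces the same bucket.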
5,913 | 5,913 | 13,834,023 | 2,153 | A user interface for a general product category has an interactive product informational section which presents a plurality of first level product categories for product within the general product category and a plurality of user interface elements for use in expanding and collapsing each of the plurality of first level product categories to thereby provide selective access to a second level of product information for product within the corresponding first level product category. The second level of product information is in the form of a table in which is listed characteristic information for individual products within the corresponding first level product category. Each row within the table includes characteristic information for individual products within the corresponding first level product category and has an associated user interface element for causing a display of more detailed information for a corresponding one of the individual products. | 1. A computer readable media embodied in a non-transient, physical memory device having stored thereon computer executable instructions for facilitating product search result within an electronic product catalog, the instructions perform steps comprising:
receiving a request to search for product within a general product category; causing a user interface for the general product category to be displayed in a consumer computing device, the user interface for the general product category comprising an interactive product informational section wherein the interactive product informational section presents a plurality of first level product categories for product within the general product category and a plurality of user interface elements for use in expanding and collapsing each of the plurality of first level product categories to thereby provide selective access to a second level of product information for product within the corresponding first level product category for product within the general product category; wherein the second level of product information comprises a table in which is listed characteristic information for individual products within the corresponding first level product category for product within the general product category and wherein the table comprises a sticky header in which is presented labels for columns within the table; and wherein each row within the table in which is listed characteristic information for individual products within the corresponding first level product category for product within the general product category has an associated user interface element for causing a display of more detailed information for a corresponding one of the individual products within the corresponding first level product category for product within the general product category. 2. The computer readable media as recited in claim 1, wherein the user interface for the general product category further comprises an interactive filtering section providing user interface elements for filtering information presented in the interactive product informational section. 3. 
The computer readable media as recited in claim 2, wherein a consumer interaction with the user interface element for causing a display of more detailed information for a corresponding one of the individual products within the corresponding first level product category for product within the general product category also causes a display of information for product that is related to the corresponding one of the individual products within the corresponding first level product category for product within the general product category. 4. The computer readable media as recited in claim 3, wherein the product that is related to the corresponding one of the individual products within the corresponding first level product category for product within the general product category comprises product that has been purchased in the past with the product that is related to the corresponding one of the individual products within the corresponding first level product category for product within the general product category. 5. The computer readable media as recited in claim 3, wherein the product that is related to the corresponding one of the individual products within the corresponding first level product category for product within the general product category comprises product that has been viewed in the past during an on-line session with the product that is related to the corresponding one of the individual products within the corresponding first level product category for product within the general product category. 6. The computer readable media as recited in claim 2, wherein each of the plurality of first level product categories for product within the general product category comprises a representative product image and a representative product descriptor. 7. 
The computer readable media as recited in claim 6, wherein the detailed information for a corresponding one of the individual products within the corresponding first level product category for product within the general product category comprises an image of the individual product, an obtained rating for the individual product, a description of the individual product, and a price for the individual product. 8. The computer readable media as recited in claim 7, wherein the detailed information for a corresponding one of the individual products within the corresponding first level product category for product within the general product category comprises a link to an informational video that is related to the individual product. 9. The computer readable media as recited in claim 7, wherein the detailed information for a corresponding one of the individual products within the corresponding first level product category for product within the general product category comprises a link to a page of an electronic version of a catalog on which the individual product is located. 10. The computer readable media as recited in claim 7, wherein the user interface for the general product category comprises a display of a number of product within the general product category. 11. The computer readable media as recited in claim 10, wherein each of the plurality of first level product categories for product within the general product category comprises a display of a number of product within the corresponding one of the first level product category for product within the general product category. 
| 2,100
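The expandable category/table interface recited in the claims above (first-level categories with expand/collapse elements, a second-level product table with a sticky header, and a per-row element for more detail) can be modeled with a small data structure. A hedged sketch only: the class and field names and the bracketed [details] marker standing in for the per-row user interface element are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field


@dataclass
class ProductRow:
    name: str
    rating: float
    price: float


@dataclass
class Category:
    label: str              # representative product descriptor
    expanded: bool = False  # toggled by the expand/collapse UI element
    rows: list = field(default_factory=list)  # second-level product table

    def toggle(self):
        self.expanded = not self.expanded


def render(categories):
    """Return the lines a sketch UI would show: collapsed categories as a
    single header line with a product count, expanded ones with their
    column-label header and one detail-linked row per product."""
    lines = []
    for cat in categories:
        marker = "-" if cat.expanded else "+"
        lines.append(f"{marker} {cat.label} ({len(cat.rows)})")
        if cat.expanded:
            lines.append("    name | rating | price")  # sticky header labels
            for row in cat.rows:
                lines.append(f"    {row.name} | {row.rating} | ${row.price} [details]")
    return lines


drills = Category("Drills", rows=[ProductRow("D-100", 4.5, 99.0)])
saws = Category("Saws")
drills.toggle()  # expand the first category
for line in render([drills, saws]):
    print(line)
```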
5,914 | 5,914 | 14,953,201 | 2,183 | An apparatus which produces branch predictions and a method of operating such an apparatus are provided. A branch target storage used to store entries comprising indications of branch instruction source addresses and indications of branch instruction target addresses is further used to store bias weights. A history storage stores history-based weights for the branch instruction source addresses and a history-based weight is dependent on whether a branch to a branch instruction target address from a branch instruction source address has previously been taken for at least one previous encounter with the branch instruction source address. Prediction generation circuitry receives the bias weight and the history-based weight of the branch instruction source address and generates either a taken prediction or a not-taken prediction for the branch. The reuse of the branch target storage to store bias weights reduces the total storage required and the matching of entire source addresses avoids problems related to aliasing. | 1. Apparatus comprising:
branch target storage to store entries comprising indications of branch instruction source addresses and indications of branch instruction target addresses, wherein the entries each further comprise a bias weight; history storage to store history-based weights for branch instruction source addresses, wherein a history-based weight is dependent on whether a branch to a branch instruction target address from a branch instruction source address has previously been taken for at least one previous encounter with the branch instruction source address; and prediction generation circuitry to receive the bias weight and the history-based weight of the branch instruction source address and to generate either a taken prediction or a not-taken prediction for the branch. 2. The apparatus as claimed in claim 1, wherein the prediction generation circuitry is capable of combining the bias weight and the history-based weight to give a combined value and to generate either the taken prediction or the not-taken prediction for the branch in dependence on the combined value. 3. The apparatus as claimed in claim 2, wherein the prediction generation circuitry comprises addition circuitry to add the bias weight and the history-based weight to produce the combined value as a sum and threshold circuitry responsive to the sum to generate the taken prediction if the sum exceeds a threshold value. 4. The apparatus as claimed in claim 1, further comprising at least one further storage to store at least one further set of weights, wherein the at least one further storage is responsive to a function of at least one of: the branch instruction source address, a global history value, and path information to select a further weight from the at least one further set of weights,
and the prediction generation circuitry is capable of combining the further weight with the bias weight and the history-based weight and of generating either the taken prediction or the not-taken prediction for the branch. 5. The apparatus as claimed in claim 1, further comprising weight update circuitry responsive to an outcome of the branch to update the bias weight stored in the branch target storage for the branch instruction source address. 6. The apparatus as claimed in claim 1, further comprising a global history storage to store a global history value for branches encountered and index generation circuitry to combine an at least partial branch instruction source address and the global history value to generate a history index used to select the history-based weight from the history storage. 7. The apparatus as claimed in claim 6, wherein the index generation circuitry comprises a hash function to generate the history index. 8. The apparatus as claimed in claim 1, wherein the entries each further comprise a selection value and wherein the prediction generation circuitry further comprises selection circuitry responsive to the selection value to generate either the taken prediction or the not-taken prediction for the branch based on either the bias weight or the history-based weight. 9. The apparatus as claimed in claim 8, comprising more than one history storage to store at least one further set of history-based weights, and further comprising history combination circuitry to pass a single history-based weight to the prediction generation circuitry in dependence on outputs of the more than one history storage. 10. The apparatus as claimed in claim 9, wherein the at least one further set of history-based weights are branch prediction counter values and the history combination circuitry is capable of generating the single history-based weight on a majority basis from the branch prediction counter values. 11. 
The apparatus as claimed in claim 1, wherein the apparatus comprises multiple pipeline stages and the apparatus further comprises instruction fetch circuitry in a pipeline stage after the prediction generation circuitry, wherein the instruction fetch circuitry is responsive to the taken prediction generated by the prediction generation circuitry to retrieve an instruction stored at the branch instruction target address. 12. A method of branch target prediction in a data processing apparatus comprising the steps of:
storing entries in branch target storage comprising indications of branch instruction source addresses and indications of branch instruction target addresses, wherein the entries each further comprise a bias weight; storing in history storage history-based weights for branch instruction source addresses, wherein a history-based weight is dependent on whether a branch to a branch instruction target address from a branch instruction source address has previously been taken for at least one previous encounter with the branch instruction source address; and receiving the bias weight and the history-based weight of the branch instruction source address and generating either a taken prediction or a not-taken prediction for the branch. 13. Apparatus comprising:
means for storing entries comprising indications of branch instruction source addresses and indications of branch instruction target addresses, wherein the entries each further comprise a bias weight; means for storing history-based weights for branch instruction source addresses, wherein a history-based weight is dependent on whether a branch to a branch instruction target address from a branch instruction source address has previously been taken for at least one previous encounter with the branch instruction source address; and means for receiving the bias weight and the history-based weight of the branch instruction source address and for generating either a taken prediction or a not-taken prediction for the branch. | 2,100
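The weight-combining predictor recited above (a bias weight stored alongside each branch-target entry, a history-based weight selected by hashing the source address with a global history value, and a taken decision when the sum exceeds a threshold, per claims 1-3, 5 and 7) can be sketched as follows. The table size, saturating weight range, and zero threshold are assumptions, and a dictionary stands in for fixed-width hardware tables.

```python
HIST_ENTRIES = 1024
WEIGHT_MAX, WEIGHT_MIN = 31, -32  # assumed saturating 6-bit weights


class BiasWeightPredictor:
    def __init__(self):
        self.btb = {}          # source address -> [target address, bias weight]
        self.history_weights = [0] * HIST_ENTRIES
        self.ghr = 0           # global history of recent branch outcomes

    def _index(self, source):
        # Hash of the (partial) source address and global history value (claim 7).
        return (source ^ self.ghr) % HIST_ENTRIES

    def predict(self, source):
        """Return (taken?, target) for a branch at `source`: sum the bias
        weight and the history-based weight, then apply threshold circuitry
        (claims 2-3)."""
        target, bias = self.btb.get(source, (None, 0))
        total = bias + self.history_weights[self._index(source)]
        return total > 0, target

    def update(self, source, target, taken):
        """Train both weights toward the observed branch outcome (claim 5)."""
        step = 1 if taken else -1
        entry = self.btb.setdefault(source, [target, 0])
        entry[1] = max(WEIGHT_MIN, min(WEIGHT_MAX, entry[1] + step))
        idx = self._index(source)
        self.history_weights[idx] = max(
            WEIGHT_MIN, min(WEIGHT_MAX, self.history_weights[idx] + step))
        self.ghr = ((self.ghr << 1) | int(taken)) % HIST_ENTRIES


p = BiasWeightPredictor()
for _ in range(4):
    p.update(0x40, 0x80, taken=True)
print(p.predict(0x40))  # prints (True, 128): biased taken after repeated taken outcomes
```

Storing the bias weight inside the branch-target entry, as the abstract notes, avoids a separate bias table, and matching the full source address in the entry sidesteps the aliasing that a hashed-only index would allow.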
5,915 | 5,915 | 15,257,286 | 2,195 | Techniques described herein improve processor performance in situations where a large number of system service requests are being received from other devices. More specifically, upon detecting that certain operating conditions that indicate a processor slowdown are present, the processor performs one or more system service adjustment techniques. These techniques include throttling (reducing the rate of handling) of such requests, coalescing (grouping multiple requests into a single group) the requests, disabling microarchitectural structures (such as caches or branch prediction units) or updates to those structures, and prefetching data for or pre-performing these requests. Each of these adjustment techniques helps to reduce the number of and/or workload associated with servicing requests for system services. | 1. A method for reducing processing overhead in a processor of a computer system, the processor executing an operating system, the processing overhead associated with processing system service requests by the operating system and received from one or more accelerators external to the processor, the method comprising:
detecting at least one change in an operating parameter of the computer system, the operating parameter being related to the processing overhead associated with processing system service requests; responsive to detecting the at least one change, modifying at least one setting for at least one technique for reducing the processing overhead; and performing the at least one technique to reduce processing overhead in accordance with the at least one modified setting. 2. The method of claim 1, wherein:
performing the at least one technique comprises disabling at least a portion of a microarchitectural structure of the processor. 3. The method of claim 1, wherein:
performing the at least one technique comprises throttling the system service requests by adding artificial delay between when the processor is notified of system service requests and when the processor processes the system service requests, the artificial delay being in addition to delay that normally occurs between being notified of and processing system service requests. 4. The method of claim 1, wherein:
performing the at least one technique comprises coalescing the system service requests by grouping multiple system service requests together before notifying the processor that system service requests are available for processing. 5. The method of claim 1, wherein:
performing the at least one technique comprises prefetching at least one item for an accelerator to prevent the accelerator from generating at least one system service request. 6. The method of claim 1, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a rate of generation of system service requests. 7. The method of claim 1, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a cache miss rate. 8. The method of claim 1, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a misprediction rate of a processor predictor. 9. The method of claim 1, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in an amount of time with which the processor executes handlers for processing system service requests. 10. A computing system, comprising:
one or more processing accelerators; and a processor coupled to the one or more processing accelerators, wherein the processor is configured to:
detect at least one change in an operating parameter of the computing system;
responsive to detecting the at least one change, modify a setting for at least one technique for reducing the processing overhead associated with processing system service requests received from at least one of the one or more accelerators; and
perform the at least one technique to reduce processing overhead associated with processing system service requests received from at least one of the one or more accelerators. 11. The computing system of claim 10, wherein:
performing the at least one technique comprises disabling at least a portion of a microarchitectural structure of the processor. 12. The computing system of claim 10, wherein:
performing the at least one technique comprises throttling the system service requests by adding artificial delay between when the processor is notified of system service requests and when the processor processes the system service requests, the artificial delay being in addition to delay that normally occurs between being notified of and processing system service requests. 13. The computing system of claim 10, wherein:
performing the at least one technique comprises coalescing the system service requests by grouping multiple system service requests together before notifying the processor that system service requests are available for processing. 14. The computing system of claim 10, wherein:
performing the at least one technique comprises prefetching at least one item for an accelerator to prevent the accelerator from generating at least one system service request. 15. The computing system of claim 10, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a rate of generation of system service requests. 16. The computing system of claim 10, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a cache miss rate. 17. The computing system of claim 10, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a misprediction rate of a processor predictor. 18. The computing system of claim 10, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in an amount of time with which the processor executes handlers for processing system service requests. 19. A method for reducing processing overhead in a processor of a computer system, the processor executing an operating system, the processing overhead associated with processing requests to handle page faults by the operating system and received from one of an accelerator or an input/output memory management unit (“IOMMU”), the method comprising:
detecting at least one change in an operating parameter of the computer system, the operating parameter including one or more of a rate of receiving requests to handle page faults from either the IOMMU or an accelerator, an instruction cache miss rate, a data cache miss rate, a branch misprediction rate, and a percentage of time during which the processor handles requests to handle page faults;
responsive to detecting the at least one change, modifying at least one setting for at least one technique for reducing the processing overhead; and
performing the at least one technique to reduce processing overhead in accordance with the at least one modified setting. 20. The method of claim 19, wherein performing the at least one technique comprises:
one or more of disabling updates to one or more microarchitectural structures of the processor, and disabling operation of the one or more microarchitectural structures. | Techniques described herein improve processor performance in situations where a large number of system service requests are being received from other devices. More specifically, upon detecting that certain operating conditions that indicate a processor slowdown are present, the processor performs one or more system service adjustment techniques. These techniques include throttling (reducing the rate of handling) of such requests, coalescing (grouping multiple requests into a single group) the requests, disabling microarchitectural structures (such as caches or branch prediction units) or updates to those structures, and prefetching data for or pre-performing these requests. Each of these adjustment techniques helps to reduce the number of and/or workload associated with servicing requests for system services.1. A method for reducing processing overhead in a processor of a computer system, the processor executing an operating system, the processing overhead associated with processing system service requests by the operating system and received from one or more accelerators external to the processor, the method comprising:
detecting at least one change in an operating parameter of the computer system, the operating parameter being related to the processing overhead associated with processing system service requests; responsive to detecting the at least one change, modifying at least one setting for at least one technique for reducing the processing overhead; and performing the at least one technique to reduce processing overhead in accordance with the at least one modified setting. 2. The method of claim 1, wherein:
performing the at least one technique comprises disabling at least a portion of a microarchitectural structure of the processor. 3. The method of claim 1, wherein:
performing the at least one technique comprises throttling the system service requests by adding artificial delay between when the processor is notified of system service requests and when the processor processes the system service requests, the artificial delay being in addition to delay that normally occurs between being notified of and processing system service requests. 4. The method of claim 1, wherein:
performing the at least one technique comprises coalescing the system service requests by grouping multiple system service requests together before notifying the processor that system service requests are available for processing. 5. The method of claim 1, wherein:
performing the at least one technique comprises prefetching at least one item for an accelerator to prevent the accelerator from generating at least one system service request. 6. The method of claim 1, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a rate of generation of system service requests. 7. The method of claim 1, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a cache miss rate. 8. The method of claim 1, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a misprediction rate of a processor predictor. 9. The method of claim 1, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in an amount of time with which the processor executes handlers for processing system service requests. 10. A computing system, comprising:
one or more processing accelerators; and a processor coupled to the one or more processing accelerators, wherein the processor is configured to:
detect at least one change in an operating parameter of the computing system;
responsive to detecting the at least one change, modify a setting for at least one technique for reducing the processing overhead associated with processing system service requests received from at least one of the one or more accelerators; and
perform the at least one technique to reduce processing overhead associated with processing system service requests received from at least one of the one or more accelerators. 11. The computing system of claim 10, wherein:
performing the at least one technique comprises disabling at least a portion of a microarchitectural structure of the processor. 12. The computing system of claim 10, wherein:
performing the at least one technique comprises throttling the system service requests by adding artificial delay between when the processor is notified of system service requests and when the processor processes the system service requests, the artificial delay being in addition to delay that normally occurs between being notified of and processing system service requests. 13. The computing system of claim 10, wherein:
performing the at least one technique comprises coalescing the system service requests by grouping multiple system service requests together before notifying the processor that system service requests are available for processing. 14. The computing system of claim 10, wherein:
performing the at least one technique comprises prefetching at least one item for an accelerator to prevent the accelerator from generating at least one system service request. 15. The computing system of claim 10, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a rate of generation of system service requests. 16. The computing system of claim 10, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a cache miss rate. 17. The computing system of claim 10, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in a misprediction rate of a processor predictor. 18. The computing system of claim 10, wherein the at least one change in the operating parameter comprises one of an increase or a decrease in an amount of time with which the processor executes handlers for processing system service requests. 19. A method for reducing processing overhead in a processor of a computer system, the processor executing an operating system, the processing overhead associated with processing requests to handle page faults by the operating system and received from one of an accelerator or an input/output memory management unit (“IOMMU”), the method comprising:
detecting at least one change in an operating parameter of the computer system, the operating parameter including one or more of a rate of receiving requests to handle page faults from either the IOMMU or an accelerator, an instruction cache miss rate, a data cache miss rate, a branch misprediction rate, and a percentage of time during which the processor handles requests to handle page faults;
responsive to detecting the at least one change, modifying at least one setting for at least one technique for reducing the processing overhead; and
performing the at least one technique to reduce processing overhead in accordance with the at least one modified setting. 20. The method of claim 19, wherein performing the at least one technique comprises:
one or more of disabling updates to one or more microarchitectural structures of the processor, and disabling operation of the one or more microarchitectural structures. | 2,100 |
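The overhead-reduction techniques recited in the claims above (throttling by adding artificial delay between notification and processing, and coalescing by grouping multiple service requests before notifying the processor) can be illustrated with a small sketch. This is an assumption-laden illustration, not the patent's implementation: the class name `RequestCoalescer`, the `batch_size` parameter, and the time units are all hypothetical.

```python
import time

class RequestCoalescer:
    """Illustrative sketch: coalesce system service requests into batches
    before notifying the 'processor', and optionally insert an artificial
    delay (throttling) between notification and processing."""

    def __init__(self, batch_size=4, artificial_delay_s=0.0):
        self.batch_size = batch_size                   # requests grouped per notification
        self.artificial_delay_s = artificial_delay_s   # extra delay when throttling
        self._pending = []
        self.notifications = 0                         # times the processor was notified

    def submit(self, request):
        """Queue a request; notify only once a full batch has accumulated."""
        self._pending.append(request)
        if len(self._pending) >= self.batch_size:
            return self._notify()
        return None

    def _notify(self):
        batch, self._pending = self._pending, []
        self.notifications += 1
        if self.artificial_delay_s:
            time.sleep(self.artificial_delay_s)        # throttle: delay before processing
        return batch                                   # one handler invocation per batch

coalescer = RequestCoalescer(batch_size=4)
batches = [b for r in range(8) if (b := coalescer.submit(r)) is not None]
print(coalescer.notifications)  # 8 requests produced only 2 notifications
```

The point of the sketch is the ratio: eight individual requests trigger only two handler invocations, which is the overhead reduction the claims describe.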
5,916 | 5,916 | 14,553,685 | 2,145 | An image forming apparatus having a plurality of functions and executing a function designated from the plurality of functions includes a display device for displaying a function selection image allowing a user to designate any of the plurality of functions. The display device displays a first group of functions of which frequency of use is higher than a prescribed threshold value, and a second group of functions of which frequency of use is not higher than the threshold value, on mutually different function selection images, with a display item indicating that functions are displayed distinguished from each other. The image forming apparatus further includes: a designating device receiving a user input designating any of the plurality of functions displayed by the display device; and an image forming unit executing the function designated by the input received by the designating device. | 1. An image forming apparatus having a plurality of functions and executing a function designated from said plurality of functions, comprising:
a display device displaying a function selection image allowing a user to designate any of said plurality of functions, said display device displaying a first group of functions of which frequency of use is higher than a prescribed threshold value, and a second group of functions of which frequency of use is not higher than said threshold value, on mutually different function selection images, with a display item indicating that functions are displayed distinguished from each other; a designating device, connected to said display device, for receiving a user input designating any of the plurality of functions displayed by said display device; an extracting unit for extracting, from the first group of functions, a function in the first group of functions to be moved to the second group of functions, while maintaining a function saved in the second group of functions prior to the extraction, in the second group of functions, when a predetermined condition is met; and an image forming unit, connected to said designating device, for executing the function designated by the input received by said designating device. 2. The image forming apparatus according to claim 1, wherein said display item includes a UI component allowing transition from the function selection image displaying functions of said first group to the function selection image displaying functions of said second group. 3. The image forming apparatus according to claim 2, further comprising:
a changing device changing, when a function belonging to said second group is designated on said function selection image by said designating device, a manner of display of said display device to have the function displayed together with the functions belonging to said first group. 4. The image forming apparatus according to claim 1, wherein said display device displays said plurality of functions, with higher priority to the functions of said first group than the functions of said second group. 5. The image forming apparatus according to claim 4, wherein said display device includes a device controlling menu transition such that the function selection image of the functions belonging to said second group can be reached only after the function selection image of the functions belonging to said first group is reached. 6. The image forming apparatus according to claim 4, further comprising:
a changing device changing, when any of the functions belonging to said second group is designated by said designating device, a manner of display of said display device to have the function displayed together with the functions belonging to said first group. 7. The image forming apparatus according to claim 1, further comprising:
a changing device changing, when any of the functions belonging to said second group is designated by said designating device, a manner of display of said display device to have the function displayed together with the functions belonging to said first group. 8. The image forming apparatus according to claim 7, wherein
said changing device includes a restoring device for determining, when a function belonging to said second group is designated by said designating device, whether or not predetermined restore conditions are satisfied, and restoring said function to said first group or maintaining said function in said second group depending on the result of determination; and said restore conditions relate to the number of designations of said function by said designating device. 9. The image forming apparatus according to claim 7, further comprising:
an authentication device for authentication of a user who uses said image forming apparatus; wherein said changing device includes a restoring device for determining, when a function belonging to said second group is designated by said designating device, whether or not predetermined restore conditions are satisfied, and restoring the function to said first group or maintaining said function in said second group depending on the result of determination; and said restore conditions relate to the user who designated said function using said designating device. 10. The image forming apparatus according to claim 7, further comprising:
an authentication device for authentication of a user who uses said image forming apparatus; wherein said changing device includes a restoring device, when a function belonging to said second group is designated by said designating device, for restoring the function to said first group or maintaining the function in said second group based on predetermined restore conditions; and said restore conditions require that either a condition related to the number of designations of said function by a specific user using said designating device or a condition related to the number of designations of said function by unspecified users using said designating device is satisfied. 11. The image forming apparatus according to claim 7, further comprising:
a confirming device receiving, when a function belonging to said second group is designated by said designating device, a confirmation input of a user approval on movement of said function to said first group; wherein said changing device includes a function moving device moving, when the confirmation input is received by the confirming device, the function designated by said designating device to said first group. 12. The image forming apparatus according to claim 1, further comprising:
a history storage device storing a history of designation of said plurality of functions by said designating device; a frequency calculating device calculating, in accordance with the history stored in said history storage device, frequency of designation of each of the functions belonging to said first group in a prescribed time period; and an auto saving device saving any function belonging to said first group of which frequency calculated by said frequency calculating device is not higher than said threshold value to said second group. 13. The image forming apparatus according to claim 12, wherein
a hierarchical structure is defined among said plurality of functions; said display device makes transition of functions displayed on said function selection image in accordance with said hierarchical structure; and when a function belonging to said first group is saved to said second group, said auto saving device saves a function belonging to a lower layer of said function in said hierarchical structure to said second group, maintaining the hierarchical structure between said function and the function belonging to the lower layer of said function. 14. The image forming apparatus according to claim 1, wherein the predetermined condition is met when the function in the first group of functions is not used for a predetermined period of time. 15. In an image forming apparatus having a plurality of functions and executing a function designated from said plurality of functions, a method of displaying a function selection image, comprising the steps of:
displaying a function selection image allowing a user to designate any of said plurality of functions, said display step displaying a first group of functions of which frequency of use is higher than a prescribed threshold value, and a second group of functions of which frequency of use is not higher than said threshold value, on mutually different function selection images, with a display item indicating that functions are displayed distinguished from each other; receiving a user input designating any of the plurality of functions displayed at said display step; extracting, from the first group of functions, a function to be moved to the second group of functions, while maintaining a function saved in the second group of functions prior to the extraction, in the second group of functions, when a predetermined condition is met; and executing the function designated by the input received at said receiving step and forming an image on a recording medium. 16. The method according to claim 15, wherein
said display item includes a UI component allowing transition from the function selection image displaying functions of said first group to the function selection image displaying functions of said second group. 17. The method according to claim 16, further comprising the step of changing, when a function belonging to said second group is designated on said function selection image at said receiving step, a manner of display of said display device to have the function displayed together with the functions belonging to said first group. 18. The method according to claim 15, wherein
said changing step includes the steps of: determining, when a function belonging to said second group is designated at said receiving step, whether or not predetermined restore conditions are satisfied; and restoring said function to said first group or maintaining said function in said second group depending on the result of determination; and said restore conditions relate to the number of designations of said function at said receiving step. 19. The method according to claim 15, further comprising the steps of:
storing a history of designation of said plurality of functions at said receiving step, in a storage device; calculating, in accordance with the history stored in said history storage device, frequency of designation of each of the functions belonging to said first group in a prescribed time period; and saving any function belonging to said first group of which frequency calculated at said calculating step is not higher than said threshold value to said second group. 20. The method according to claim 19, wherein
a hierarchical structure is defined among said plurality of functions; at said display step, functions displayed on said function selection image are subjected to transition in accordance with said hierarchical structure; and at said saving step, a function belonging to a lower layer of said function in said hierarchical structure is saved to said second group, maintaining the hierarchical structure between said function and the function belonging to the lower layer of said function. 21. The image forming apparatus according to claim 15, wherein the predetermined condition is met when the function in the first group of functions is not used for a predetermined period of time. | An image forming apparatus having a plurality of functions and executing a function designated from the plurality of functions includes a display device for displaying a function selection image allowing a user to designate any of the plurality of functions. The display device displays a first group of functions of which frequency of use is higher than a prescribed threshold value, and a second group of functions of which frequency of use is not higher than the threshold value, on mutually different function selection images, with a display item indicating that functions are displayed distinguished from each other. The image forming apparatus further includes: a designating device receiving a user input designating any of the plurality of functions displayed by the display device; and an image forming unit executing the function designated by the input received by the designating device.1. An image forming apparatus having a plurality of functions and executing a function designated from said plurality of functions, comprising:
a display device displaying a function selection image allowing a user to designate any of said plurality of functions, said display device displaying a first group of functions of which frequency of use is higher than a prescribed threshold value, and a second group of functions of which frequency of use is not higher than said threshold value, on mutually different function selection images, with a display item indicating that functions are displayed distinguished from each other; a designating device, connected to said display device, for receiving a user input designating any of the plurality of functions displayed by said display device; an extracting unit for extracting, from the first group of functions, a function in the first group of functions to be moved to the second group of functions, while maintaining a function saved in the second group of functions prior to the extraction, in the second group of functions, when a predetermined condition is met; and an image forming unit, connected to said designating device, for executing the function designated by the input received by said designating device. 2. The image forming apparatus according to claim 1, wherein said display item includes a UI component allowing transition from the function selection image displaying functions of said first group to the function selection image displaying functions of said second group. 3. The image forming apparatus according to claim 2, further comprising:
a changing device changing, when a function belonging to said second group is designated on said function selection image by said designating device, a manner of display of said display device to have the function displayed together with the functions belonging to said first group. 4. The image forming apparatus according to claim 1, wherein said display device displays said plurality of functions, with higher priority to the functions of said first group than the functions of said second group. 5. The image forming apparatus according to claim 4, wherein said display device includes a device controlling menu transition such that the function selection image of the functions belonging to said second group can be reached only after the function selection image of the functions belonging to said first group is reached. 6. The image forming apparatus according to claim 4, further comprising:
a changing device changing, when any of the functions belonging to said second group is designated by said designating device, a manner of display of said display device to have the function displayed together with the functions belonging to said first group. 7. The image forming apparatus according to claim 1, further comprising:
a changing device changing, when any of the functions belonging to said second group is designated by said designating device, a manner of display of said display device to have the function displayed together with the functions belonging to said first group. 8. The image forming apparatus according to claim 7, wherein
said changing device includes a restoring device for determining, when a function belonging to said second group is designated by said designating device, whether or not predetermined restore conditions are satisfied, and restoring said function to said first group or maintaining said function in said second group depending on the result of determination; and said restore conditions relate to the number of designations of said function by said designating device. 9. The image forming apparatus according to claim 7, further comprising:
an authentication device for authentication of a user who uses said image forming apparatus; wherein said changing device includes a restoring device for determining, when a function belonging to said second group is designated by said designating device, whether or not predetermined restore conditions are satisfied, and restoring the function to said first group or maintaining said function in said second group depending on the result of determination; and said restore conditions relate to the user who designated said function using said designating device. 10. The image forming apparatus according to claim 7, further comprising:
an authentication device for authentication of a user who uses said image forming apparatus; wherein said changing device includes a restoring device, when a function belonging to said second group is designated by said designating device, for restoring the function to said first group or maintaining the function in said second group based on predetermined restore conditions; and said restore conditions require that either a condition related to the number of designations of said function by a specific user using said designating device or a condition related to the number of designations of said function by unspecified users using said designating device is satisfied. 11. The image forming apparatus according to claim 7, further comprising:
a confirming device receiving, when a function belonging to said second group is designated by said designating device, a confirmation input of a user approval on movement of said function to said first group; wherein said changing device includes a function moving device moving, when the confirmation input is received by the confirming device, the function designated by said designating device to said first group. 12. The image forming apparatus according to claim 1, further comprising:
a history storage device storing a history of designation of said plurality of functions by said designating device; a frequency calculating device calculating, in accordance with the history stored in said history storage device, frequency of designation of each of the functions belonging to said first group in a prescribed time period; and an auto saving device saving any function belonging to said first group of which frequency calculated by said frequency calculating device is not higher than said threshold value to said second group. 13. The image forming apparatus according to claim 12, wherein
a hierarchical structure is defined among said plurality of functions; said display device makes transition of functions displayed on said function selection image in accordance with said hierarchical structure; and when a function belonging to said first group is saved to said second group, said auto saving device saves a function belonging to a lower layer of said function in said hierarchical structure to said second group, maintaining the hierarchical structure between said function and the function belonging to the lower layer of said function. 14. The image forming apparatus according to claim 1, wherein the predetermined condition is met when the function in the first group of functions is not used for a predetermined period of time. 15. In an image forming apparatus having a plurality of functions and executing a function designated from said plurality of functions, a method of displaying a function selection image, comprising the steps of:
displaying a function selection image allowing a user to designate any of said plurality of functions, said display step displaying a first group of functions of which frequency of use is higher than a prescribed threshold value, and a second group of functions of which frequency of use is not higher than said threshold value, on mutually different function selection images, with a display item indicating that functions are displayed distinguished from each other; receiving a user input designating any of the plurality of functions displayed at said display step; extracting, from the first group of functions, a function to be moved to the second group of functions, while maintaining a function saved in the second group of functions prior to the extraction, in the second group of functions, when a predetermined condition is met; and executing the function designated by the input received at said receiving step and forming an image on a recording medium. 16. The method according to claim 15, wherein
said display item includes a UI component allowing transition from the function selection image displaying functions of said first group to the function selection image displaying functions of said second group. 17. The method according to claim 16, further comprising the step of changing, when a function belonging to said second group is designated on said function selection image at said receiving step, a manner of display of said display device to have the function displayed together with the functions belonging to said first group. 18. The method according to claim 15, wherein
said changing step includes the steps of: determining, when a function belonging to said second group is designated at said receiving step, whether or not predetermined restore conditions are satisfied; and restoring said function to said first group or maintaining said function in said second group depending on the result of determination; and said restore conditions relate to the number of designations of said function at said receiving step. 19. The method according to claim 15, further comprising the steps of:
storing a history of designation of said plurality of functions at said receiving step, in a storage device; calculating, in accordance with the history stored in said history storage device, frequency of designation of each of the functions belonging to said first group in a prescribed time period; and saving any function belonging to said first group of which frequency calculated at said calculating step is not higher than said threshold value to said second group. 20. The method according to claim 19, wherein
a hierarchical structure is defined among said plurality of functions; at said display step, functions displayed on said function selection image are subjected to transition in accordance with said hierarchical structure; and at said saving step, a function belonging to a lower layer of said function in said hierarchical structure is saved to said second group, maintaining the hierarchical structure between said function and the function belonging to the lower layer of said function. 21. The image forming apparatus according to claim 15, wherein the predetermined condition is met when the function in the first group of functions is not used for a predetermined period of time. | 2,100 |
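Claims 19 and 20 above describe a history-driven saving step: designations are logged, per-function frequency over a time window is computed, and functions at or below a threshold are moved from the first (frequent) group to the second group while previously saved functions stay put. A minimal Python sketch of that logic follows; all names (e.g. `FunctionGrouper`) are illustrative assumptions, not from the patent:

```python
from collections import Counter

# Hypothetical sketch of the saving step in claims 19-20: functions whose
# designation frequency within a window is "not higher than" a threshold
# are moved to the second group; functions already saved there remain.
class FunctionGrouper:
    def __init__(self, functions, threshold):
        self.first_group = set(functions)   # frequently used functions
        self.second_group = set()           # saved (infrequent) functions
        self.threshold = threshold
        self.history = []                   # (function, timestamp) pairs

    def record_designation(self, function, timestamp):
        self.history.append((function, timestamp))

    def save_infrequent(self, window_start, window_end):
        counts = Counter(
            f for f, t in self.history
            if window_start <= t <= window_end
        )
        for f in list(self.first_group):
            if counts[f] <= self.threshold:   # "not higher than" the threshold
                self.first_group.discard(f)
                self.second_group.add(f)      # earlier saves are maintained

g = FunctionGrouper(["copy", "fax", "staple"], threshold=1)
g.record_designation("copy", 1)
g.record_designation("copy", 2)
g.save_infrequent(0, 10)
# "fax" and "staple" fall at or below the threshold and move to the second group
```

The hierarchical variant in claim 20 would additionally move every descendant of a saved function, preserving the parent-child links within the second group.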
5,917 | 5,917 | 14,184,801 | 2,136 | A storage system and an access control method thereof are provided. The storage system receives a first I/O request from at least one hypervisor. The first I/O request is used for accessing a first disk file of disk files. The storage system then operates a first I/O operation of a first virtual disk of virtual disks according to the first I/O request since the disk files correspond to the virtual disks. The storage system reads a QoS data of the first disk file and determines a first delay period according to the QoS data. The storage system transmits a first I/O response to the at least one hypervisor after the first delay period. | 1. An access control method for use in a storage system, the storage system connecting to at least one hypervisor server and storing a plurality of disk files which correspond to a plurality of virtual disks respectively, the access control method comprising:
(a) enabling the storage system to receive a first I/O request from the at least one hypervisor, wherein the first I/O request is used for accessing a first disk file of the disk files; (b) enabling the storage system to operate a first I/O operation of a first virtual disk of the virtual disks according to the first I/O request; (c) enabling the storage system to read a quality of service (QoS) data of the first disk file; (d) enabling the storage system to determine a first delay period according to the QoS data; (e) enabling the storage system to transmit a first I/O response to the at least one hypervisor after the first delay period. 2. The access control method as claimed in claim 1, wherein the QoS data comprises an input output per second (IOPS) information and an I/O bandwidth information, and step (d) further comprises:
(d1) enabling the storage system to determine the first delay period according to the IOPS information or the I/O bandwidth information. 3. The access control method as claimed in claim 1, wherein the QoS data is recorded with the first disk file. 4. The access control method as claimed in claim 1, wherein the QoS data is recorded with a dictionary of the first disk file. 5. The access control method as claimed in claim 4, further comprising the following steps after step (e):
(f) enabling the storage system to receive a second I/O request from the at least one hypervisor, wherein the second I/O request is used for accessing a second disk file of the disk files; (g) enabling the storage system to operate a second I/O operation of a second virtual disk of the virtual disks according to the second I/O request; (h) enabling the storage system to read the QoS data of the second disk file from the dictionary, wherein the first disk file and the second disk file are stored in the dictionary; (i) enabling the storage system to determine a second delay period according to the QoS data; (j) enabling the storage system to transmit a second I/O response to the at least one hypervisor after the second delay period. 6. A storage system, comprising:
an I/O interface, being configured to connect to at least one hypervisor server and to receive a first I/O request from the at least one hypervisor, wherein the first I/O request is used for accessing a first disk file of the disk files; a storage unit, being configured to store a plurality of disk files which correspond to a plurality of virtual disks respectively; and a processing unit, being configured to operate a first I/O operation of a first virtual disk of the virtual disks according to the first I/O request, to read a quality of service (QoS) data of the first disk file, and to determine a first delay period according to the QoS data; wherein the I/O interface is further configured to transmit a first I/O response to the at least one hypervisor after the first delay period. 7. The storage system as claimed in claim 6, wherein the QoS data comprises an input output per second (IOPS) information and an I/O bandwidth information, and the processing unit is further configured to determine the first delay period according to the IOPS information or the I/O bandwidth information. 8. The storage system as claimed in claim 6, wherein the QoS data is recorded with the first disk file. 9. The storage system as claimed in claim 6, wherein the QoS data is recorded with a dictionary of the first disk file. 10. 
The storage system as claimed in claim 9, wherein the I/O interface is further configured to receive a second I/O request from the at least one hypervisor, the second I/O request is used for accessing a second disk file of the disk files, the processing unit is further configured to operate a second I/O operation of a second virtual disk of the virtual disks according to the second I/O request, to read the QoS data of the second disk file from the dictionary and to determine a second delay period according to the QoS data, the first disk file and the second disk file are stored in the dictionary, and the I/O interface is further configured to transmit a second I/O response to the at least one hypervisor after the second delay period. | 2,100 |
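Steps (c)-(e) of claim 1 above delay the I/O response by a period derived from the disk file's QoS data. The patent only says the delay is determined "according to" the IOPS or bandwidth information, so the formulas below are illustrative assumptions: spacing requests one per 1/IOPS seconds enforces an IOPS cap, and serving the request's bytes at the bandwidth cap bounds throughput.

```python
# Hypothetical sketch of step (d): derive a delay period from QoS data.
# Field names ("iops_limit", "bandwidth_limit") are assumptions.
def delay_period(qos, request_bytes):
    # One request every 1/IOPS seconds keeps the request rate at the cap.
    iops_delay = 1.0 / qos["iops_limit"]
    # Transferring request_bytes at the bandwidth cap takes bytes/bandwidth seconds.
    bandwidth_delay = request_bytes / qos["bandwidth_limit"]
    # Honouring both limits means waiting out the stricter (longer) delay.
    return max(iops_delay, bandwidth_delay)

qos = {"iops_limit": 100, "bandwidth_limit": 10 * 1024 * 1024}  # 100 IOPS, 10 MiB/s
delay_period(qos, 4096)   # the IOPS term (1/100 s) dominates for a 4 KiB request
```

For small requests the IOPS term dominates; for a 10 MiB request the bandwidth term would stretch the delay to a full second.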
5,918 | 5,918 | 15,428,550 | 2,184 | A data processing apparatus is provided comprising first processing circuitry. Interrupt generating circuitry generates an outgoing interrupt in response to the first processing circuitry becoming unresponsive. Interrupt receiving circuitry receives an incoming interrupt, which indicates that second processing circuitry has become unresponsive, and in response to receiving the incoming interrupt, causes the data processing apparatus to access data managed by the second processing circuitry. | 1. A data processing apparatus comprising:
first processing circuitry; interrupt generating circuitry configured to generate an outgoing interrupt in response to said first processing circuitry becoming unresponsive; and interrupt receiving circuitry configured to receive an incoming interrupt, which indicates that second processing circuitry has become unresponsive, and in response to receiving said incoming interrupt, to cause said data processing apparatus to access data managed by said second processing circuitry. 2. A data processing apparatus according to claim 1, wherein
said data managed by said second processing circuitry comprises a state of said second processing circuitry. 3. A data processing apparatus according to claim wherein
said interrupt receiving circuitry is configured to cause said data processing apparatus to access said data managed by said second processing circuitry by copying said data managed by said second processing circuitry to produce copied data. 4. A data processing apparatus according to claim 3, wherein
said interrupt receiving circuitry is configured to make said copied data available to third processing circuitry. 5. A data processing apparatus according to claim 1, wherein
said interrupt receiving circuitry is configured to cause said data processing apparatus to access said data by overwriting said data managed by said second processing circuitry. 6. A data processing apparatus according to claim 1, wherein
in response to receiving said incoming interrupt, said interrupt receiving circuitry is further configured, after said data processing apparatus has accessed said data managed by said second processing circuitry, to cause said second processing circuitry to be reset. 7. A data processing apparatus according to claim 1, wherein
said interrupt generating circuitry is configured to determine that said first processing circuitry has become unresponsive by expiration of a watchdog timer. 8. A data processing apparatus according to claim 1, wherein
in response to said interrupt receiving circuitry receiving said incoming interrupt and said first processing circuitry becoming unresponsive, said data processing apparatus is configured to cause said first processing circuitry and said second processing circuitry to be reset. 9. A data processing apparatus according to claim wherein
said interrupt generating circuitry is configured to route said outgoing interrupt to handler processing circuitry; and said handler processing circuitry comprises or is comprised by said second processing circuitry. 10. A data processing apparatus according to claim 1, wherein
said interrupt generating circuitry is configured to route said outgoing interrupt to handler processing circuitry; said data processing apparatus is a system control processor; and said handler processing circuitry is a manageability control processor. 11. A data processing apparatus according to claim 1, wherein
said interrupt generating circuitry is configured to route said outgoing interrupt to handler processing circuitry; said data processing apparatus is a manageability control processor; and said handler processing circuitry is a system control processor. 12. A data processing apparatus according to claim wherein
said interrupt generating circuitry is configured to route said outgoing interrupt to handler processing circuitry; and said handler processing circuitry is different to said first processing circuitry. 13. A data processing method comprising:
processing one or more instructions; generating an outgoing interrupt in response to first processing circuitry becoming unresponsive; receiving an incoming interrupt, which indicates that second processing circuitry has become unresponsive; and causing said data processing apparatus, in response to receiving said incoming interrupt, to access data managed by said second processing circuitry. 14. A data processing system comprising:
a plurality of data processing apparatuses, each comprising:
first processing circuitry;
interrupt generating circuitry configured to generate an outgoing interrupt in response to said first processing circuitry becoming unresponsive; and
interrupt receiving circuitry configured to receive an incoming interrupt, which indicates that second processing circuitry on an other one of said plurality of data processing apparatuses has become unresponsive, and in response to receiving said incoming interrupt, to cause said data processing apparatus to access data managed by said other one of said plurality of data processing apparatuses. | 2,100 |
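Claim 14 above describes a symmetric recovery scheme: each apparatus raises an outgoing interrupt when its own processing circuitry hangs (e.g. on watchdog expiry, claim 7), and on receiving a peer's interrupt it copies the peer's managed data (claim 3) and then resets the peer (claim 6). The following Python sketch models that flow; all class and attribute names are illustrative assumptions:

```python
# Hypothetical sketch of the cross-apparatus flow in claims 3, 6, 7 and 14.
class Apparatus:
    def __init__(self, name):
        self.name = name
        self.managed_data = {"state": "running"}
        self.peer = None        # handler apparatus for outgoing interrupts
        self.captured = None    # copy of an unresponsive peer's data

    def watchdog_expired(self):
        # Interrupt generating circuitry: route an outgoing interrupt
        # to the handler processing circuitry (the peer apparatus).
        self.peer.receive_interrupt(self)

    def receive_interrupt(self, unresponsive_peer):
        # Interrupt receiving circuitry: copy the data managed by the
        # unresponsive peer, then cause the peer to be reset.
        self.captured = dict(unresponsive_peer.managed_data)
        unresponsive_peer.reset()

    def reset(self):
        self.managed_data = {"state": "reset"}

a, b = Apparatus("SCP"), Apparatus("MCP")
a.peer, b.peer = b, a
a.watchdog_expired()   # a hangs; b captures a's state, then resets a
```

The captured copy can then be made available to third processing circuitry for diagnosis (claim 4) before the hung apparatus rejoins the system.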
5,919 | 5,919 | 15,620,716 | 2,179 | An electronic device: displays an interactive UI object that conditionally exhibits respective interactive behavior responsive to changes in detected contact intensity, and content that does not exhibit the respective interactive behavior. The device detects a first input over the interactive UI object. In accordance with a determination that the first input meets first appearance-manipulation criteria (e.g., an intensity of the contact exceeds a first intensity threshold), the device changes an appearance of the interactive UI object based on the intensity of the contact and independent of lateral movement of the contact. In accordance with a determination that the first input meets second appearance-manipulation criteria (e.g., the intensity of the contact exceeds a second intensity threshold, greater than the first intensity threshold), the device changes the appearance of the interactive UI object based on lateral movement of the contact detected after the intensity of the contact exceeds the second intensity threshold. | 1. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensities of contacts with the touch-sensitive surface, cause the electronic device to:
display, on the display:
first content that includes an interactive user interface object that conditionally exhibits respective interactive behavior responsive to changes in detected contact intensity, and
second content, distinct from the first content, that does not exhibit the respective interactive behavior responsive to changes in detected contact intensity;
detect a first input by a contact while a focus selector is over the interactive user interface object on the display; in accordance with a determination that the first input meets first appearance-manipulation criteria, wherein the first appearance-manipulation criteria include a criterion that is met when a characteristic intensity of the contact exceeds a first intensity threshold during the first input, change an appearance of the interactive user interface object based on the characteristic intensity of the contact, wherein changing the appearance of the interactive user interface object is independent of lateral movement of the contact across the touch-sensitive surface; and, in accordance with a determination that the first input meets second appearance-manipulation criteria, wherein the second appearance-manipulation criteria include a criterion that is met when the characteristic intensity of the contact exceeds a second intensity threshold, greater than the first intensity threshold, during the first input, change the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface that is detected after the characteristic intensity of the contact exceeds the second intensity threshold. 2. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after the determination that the first input meets the second appearance-manipulation criteria, continue to change the appearance of the interactive user interface object as the characteristic intensity of the contact increases above the second intensity threshold. 3. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
in accordance with the determination that the first input meets the second appearance-manipulation criteria, cease to display the second content. 4. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
in accordance with the determination that the first input meets the first appearance-manipulation criteria, continue to display the second content while changing the appearance of the interactive user interface object. 5. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
in accordance with a determination that the first input meets scrolling criteria, wherein the scrolling criteria do not require that a characteristic intensity of the contact increase above the first intensity threshold during the first input in order for the scrolling criteria to be met, scroll the first content and the second content in a first direction on the display. 6. The storage medium of claim 5, wherein scrolling the first content and the second content includes presenting on the display a preview of the respective interactive behavior of the interactive user interface object while scrolling the first content and the second content. 7. The storage medium of claim 6, wherein presenting the preview includes tilting at least one 3D feature within the interactive user interface object out of a frame surrounding the interactive user interface object on the display. 8. The storage medium of claim 6, wherein scrolling the second content includes maintaining an appearance of the second content while presenting the preview of the respective interactive behavior of the interactive user interface object. 9. The storage medium of claim 1, wherein changing the appearance of the interactive user interface object based on the characteristic intensity of the contact includes tilting at least one 3D feature within the interactive user interface object out of a frame surrounding the interactive user interface object on the display. 10. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
provide first tactile output, via the electronic device, in accordance with the determination that the first input meets the first appearance-manipulation criteria. 11. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after the determination that the first input meets first appearance-manipulation criteria, detect a decrease in the characteristic intensity of the contact such that the characteristic intensity of the contact falls below the first intensity threshold; while the characteristic intensity of the contact remains below the first intensity threshold, detect vertical movement of the contact on the touch-sensitive surface; and, in response to detecting the vertical movement of the contact, scroll the first content and the second content on the display. 12. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after changing the appearance of the interactive user interface object in accordance with the determination that the first input meets the first appearance-manipulation criteria, detect an end of the first input; and, in response to detecting the end of the first input, revert back to the appearance of the interactive user interface object before the increase in intensity of the contact was detected. 13. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
provide second tactile output, via the electronic device, in accordance with the determination that the first input meets the second appearance-manipulation criteria. 14. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after the determination that the first input meets the second appearance-manipulation criteria, detect a decrease in the characteristic intensity of the contact such that the characteristic intensity of the contact falls below the second intensity threshold; and, while the characteristic intensity of the contact remains below the second threshold, continue to change the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface. 15. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after the determination that the first input meets the second appearance-manipulation criteria, detect vertical movement of the contact on the touch-sensitive surface; and, in response to detecting the vertical movement of the contact, change the appearance of the interactive user interface object based on the vertical movement without scrolling the second content. 16. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after changing the appearance of the interactive user interface object in accordance with the determination that the first input meets the second appearance-manipulation criteria, detect an end of the first input; and, in response to detecting the end of the first input, revert back to the appearance of the interactive user interface object before the increase in intensity of the contact was detected. 17. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
in accordance with a determination that the first input meets third appearance-manipulation criteria, wherein the third appearance-manipulation criteria include a criterion that is met when the characteristic intensity of the contact exceeds a third intensity threshold, greater than the first intensity threshold and greater than the second intensity threshold, during the first input, cease to display the second content and display the interactive user interface object in an increased-interaction display mode. 18. The storage medium of claim 17, including instructions which, when executed by the electronic device, cause the electronic device to:
while displaying the interactive user interface object in the increased-interaction display mode, detect an end of the first input; and, in response to detecting the end of the first input, maintain display of the interactive user interface object in the increased-interaction display mode. 19. The storage medium of claim 17, including instructions which, when executed by the electronic device, cause the electronic device to:
provide third tactile output, via the electronic device, in accordance with the determination that the first input meets the third appearance-manipulation criteria. 20. The storage medium of claim 17, including instructions which, when executed by the electronic device, cause the electronic device to:
receive a second input while the interactive user interface object is displayed in the increased-interaction display mode; and, in response to receiving the second input, exit the increased-interaction display mode and display the interactive user interface object with the second content. 21. The storage medium of claim 1, wherein:
the interactive user interface object includes a 3D object that is associated with a first axis of rotation and a second axis of rotation; the respective interactive behavior includes rotating the 3D object about the first axis of rotation in accordance with the change in intensity of the contact without rotating the 3D object about the second axis of rotation; and changing the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface includes rotating the 3D object about the second axis of rotation in accordance with the lateral movement of the contact across the touch-sensitive surface. 22. The storage medium of claim 1, wherein the interactive user interface object includes a 3D feature having separate component parts, and further wherein changing the appearance of the interactive user interface object based on the characteristic intensity of the contact includes dynamically expanding the 3D feature to reveal the separate component parts. 23. The storage medium of claim 1, wherein the interactive user interface object includes two or more location-based identifiers, and further wherein changing the appearance of the interactive user interface object based on the characteristic intensity of the contact includes updating the interactive user interface object to move between displaying each of the two or more location-based identifiers. 24. An electronic device, comprising:
a display; a touch-sensitive surface; one or more sensors to detect intensities of contacts with the touch-sensitive surface; one or more processors; memory storing one or more programs that are configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, on the display:
first content that includes an interactive user interface object that conditionally exhibits respective interactive behavior responsive to changes in detected contact intensity, and
second content, distinct from the first content, that does not exhibit the respective interactive behavior responsive to changes in detected contact intensity;
detecting a first input by a contact while a focus selector is over the interactive user interface object on the display;
in accordance with a determination that the first input meets first appearance-manipulation criteria, wherein the first appearance-manipulation criteria include a criterion that is met when a characteristic intensity of the contact exceeds a first intensity threshold during the first input, changing an appearance of the interactive user interface object based on the characteristic intensity of the contact, wherein changing the appearance of the interactive user interface object is independent of lateral movement of the contact across the touch-sensitive surface; and,
in accordance with a determination that the first input meets second appearance-manipulation criteria, wherein the second appearance-manipulation criteria include a criterion that is met when the characteristic intensity of the contact exceeds a second intensity threshold, greater than the first intensity threshold, during the first input, changing the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface that is detected after the characteristic intensity of the contact exceeds the second intensity threshold. 25. A method comprising:
at an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensities of contacts with the touch-sensitive surface:
displaying, on the display:
first content that includes an interactive user interface object that conditionally exhibits respective interactive behavior responsive to changes in detected contact intensity, and
second content, distinct from the first content, that does not exhibit the respective interactive behavior responsive to changes in detected contact intensity;
detecting a first input by a contact while a focus selector is over the interactive user interface object on the display;
in accordance with a determination that the first input meets first appearance-manipulation criteria, wherein the first appearance-manipulation criteria include a criterion that is met when a characteristic intensity of the contact exceeds a first intensity threshold during the first input, changing an appearance of the interactive user interface object based on the characteristic intensity of the contact, wherein changing the appearance of the interactive user interface object is independent of lateral movement of the contact across the touch-sensitive surface; and,
in accordance with a determination that the first input meets second appearance-manipulation criteria, wherein the second appearance-manipulation criteria include a criterion that is met when the characteristic intensity of the contact exceeds a second intensity threshold, greater than the first intensity threshold, during the first input, changing the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface that is detected after the characteristic intensity of the contact exceeds the second intensity threshold. | An electronic device: displays an interactive UI object that conditionally exhibits respective interactive behavior responsive to changes in detected contact intensity, and content that does not exhibit the respective interactive behavior. The device detects a first input over the interactive UI object. In accordance with a determination that the first input meets first appearance-manipulation criteria (e.g., an intensity of the contact exceeds a first intensity threshold), the device changes an appearance of the interactive UI object based on the intensity of the contact and independent of lateral movement of the contact. In accordance with a determination that the first input meets second appearance-manipulation criteria (e.g., the intensity of the contact exceeds a second intensity threshold, greater than the first intensity threshold), the device changes the appearance of the interactive UI object based on lateral movement of the contact detected after the intensity of the contact exceeds the second intensity threshold.1. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensities of contacts with the touch-sensitive surface, cause the electronic device to:
display, on the display:
first content that includes an interactive user interface object that conditionally exhibits respective interactive behavior responsive to changes in detected contact intensity, and
second content, distinct from the first content, that does not exhibit the respective interactive behavior responsive to changes in detected contact intensity;
detect a first input by a contact while a focus selector is over the interactive user interface object on the display; in accordance with a determination that the first input meets first appearance-manipulation criteria, wherein the first appearance-manipulation criteria include a criterion that is met when a characteristic intensity of the contact exceeds a first intensity threshold during the first input, change an appearance of the interactive user interface object based on the characteristic intensity of the contact, wherein changing the appearance of the interactive user interface object is independent of lateral movement of the contact across the touch-sensitive surface; and, in accordance with a determination that the first input meets second appearance-manipulation criteria, wherein the second appearance-manipulation criteria include a criterion that is met when the characteristic intensity of the contact exceeds a second intensity threshold, greater than the first intensity threshold, during the first input, change the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface that is detected after the characteristic intensity of the contact exceeds the second intensity threshold. 2. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after the determination that the first input meets the second appearance-manipulation criteria, continue to change the appearance of the interactive user interface object as the characteristic intensity of the contact increases above the second intensity threshold. 3. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
in accordance with the determination that the first input meets the second appearance-manipulation criteria, cease to display the second content. 4. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
in accordance with the determination that the first input meets the first appearance-manipulation criteria, continue to display the second content while changing the appearance of the interactive user interface object. 5. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
in accordance with a determination that the first input meets scrolling criteria, wherein the scrolling criteria do not require that a characteristic intensity of the contact increase above the first intensity threshold during the first input in order for the scrolling criteria to be met, scroll the first content and the second content in a first direction on the display. 6. The storage medium of claim 5, wherein scrolling the first content and the second content includes presenting on the display a preview of the respective interactive behavior of the interactive user interface object while scrolling the first content and the second content. 7. The storage medium of claim 6, wherein presenting the preview includes tilting at least one 3D feature within the interactive user interface object out of a frame surrounding the interactive user interface object on the display. 8. The storage medium of claim 6, wherein scrolling the second content includes maintaining an appearance of the second content while presenting the preview of the respective interactive behavior of the interactive user interface object. 9. The storage medium of claim 1, wherein changing the appearance of the interactive user interface object based on the characteristic intensity of the contact includes tilting at least one 3D feature within the interactive user interface object out of a frame surrounding the interactive user interface object on the display. 10. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
provide first tactile output, via the electronic device, in accordance with the determination that the first input meets the first appearance-manipulation criteria. 11. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after the determination that the first input meets first appearance-manipulation criteria, detect a decrease in the characteristic intensity of the contact such that the characteristic intensity of the contact falls below the first intensity threshold; while the characteristic intensity of the contact remains below the first intensity threshold, detect vertical movement of the contact on the touch-sensitive surface; and, in response to detecting the vertical movement of the contact, scroll the first content and the second content on the display. 12. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after changing the appearance of the interactive user interface object in accordance with the determination that the first input meets the first appearance-manipulation criteria, detect an end of the first input; and, in response to detecting the end of the first input, revert back to the appearance of the interactive user interface object before the increase in intensity of the contact was detected. 13. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
provide second tactile output, via the electronic device, in accordance with the determination that the first input meets the second appearance-manipulation criteria. 14. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after the determination that the first input meets the second appearance-manipulation criteria, detect a decrease in the characteristic intensity of the contact such that the characteristic intensity of the contact falls below the second intensity threshold; and, while the characteristic intensity of the contact remains below the second intensity threshold, continue to change the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface. 15. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after the determination that the first input meets the second appearance-manipulation criteria, detect vertical movement of the contact on the touch-sensitive surface; and, in response to detecting the vertical movement of the contact, change the appearance of the interactive user interface object based on the vertical movement without scrolling the second content. 16. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
after changing the appearance of the interactive user interface object in accordance with the determination that the first input meets the second appearance-manipulation criteria, detect an end of the first input; and, in response to detecting the end of the first input, revert back to the appearance of the interactive user interface object before the increase in intensity of the contact was detected. 17. The storage medium of claim 1, including instructions which, when executed by the electronic device, cause the electronic device to:
in accordance with a determination that the first input meets third appearance-manipulation criteria, wherein the third appearance-manipulation criteria include a criterion that is met when the characteristic intensity of the contact exceeds a third intensity threshold, greater than the first intensity threshold and greater than the second intensity threshold, during the first input, cease to display the second content and display the interactive user interface object in an increased-interaction display mode. 18. The storage medium of claim 17, including instructions which, when executed by the electronic device, cause the electronic device to:
while displaying the interactive user interface object in the increased-interaction display mode, detect an end of the first input; and, in response to detecting the end of the first input, maintain display of the interactive user interface object in the increased-interaction display mode. 19. The storage medium of claim 17, including instructions which, when executed by the electronic device, cause the electronic device to:
provide third tactile output, via the electronic device, in accordance with the determination that the first input meets the third appearance-manipulation criteria. 20. The storage medium of claim 17, including instructions which, when executed by the electronic device, cause the electronic device to:
receive a second input while the interactive user interface object is displayed in the increased-interaction display mode; and, in response to receiving the second input, exit the increased-interaction display mode and display the interactive user interface object with the second content. 21. The storage medium of claim 1, wherein:
the interactive user interface object includes a 3D object that is associated with a first axis of rotation and a second axis of rotation; the respective interactive behavior includes rotating the 3D object about the first axis of rotation in accordance with the change in intensity of the contact without rotating the 3D object about the second axis of rotation; and changing the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface includes rotating the 3D object about the second axis of rotation in accordance with the lateral movement of the contact across the touch-sensitive surface. 22. The storage medium of claim 1, wherein the interactive user interface object includes a 3D feature having separate component parts, and further wherein changing the appearance of the interactive user interface object based on the characteristic intensity of the contact includes dynamically expanding the 3D feature to reveal the separate component parts. 23. The storage medium of claim 1, wherein the interactive user interface object includes two or more location-based identifiers, and further wherein changing the appearance of the interactive user interface object based on the characteristic intensity of the contact includes updating the interactive user interface object to move between displaying each of the two or more location-based identifiers. 24. An electronic device, comprising:
a display; a touch-sensitive surface; one or more sensors to detect intensities of contacts with the touch-sensitive surface; one or more processors; memory storing one or more programs that are configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, on the display:
first content that includes an interactive user interface object that conditionally exhibits respective interactive behavior responsive to changes in detected contact intensity, and
second content, distinct from the first content, that does not exhibit the respective interactive behavior responsive to changes in detected contact intensity;
detecting a first input by a contact while a focus selector is over the interactive user interface object on the display;
in accordance with a determination that the first input meets first appearance-manipulation criteria, wherein the first appearance-manipulation criteria include a criterion that is met when a characteristic intensity of the contact exceeds a first intensity threshold during the first input, changing an appearance of the interactive user interface object based on the characteristic intensity of the contact, wherein changing the appearance of the interactive user interface object is independent of lateral movement of the contact across the touch-sensitive surface; and,
in accordance with a determination that the first input meets second appearance-manipulation criteria, wherein the second appearance-manipulation criteria include a criterion that is met when the characteristic intensity of the contact exceeds a second intensity threshold, greater than the first intensity threshold, during the first input, changing the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface that is detected after the characteristic intensity of the contact exceeds the second intensity threshold. 25. A method comprising:
at an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensities of contacts with the touch-sensitive surface:
displaying, on the display:
first content that includes an interactive user interface object that conditionally exhibits respective interactive behavior responsive to changes in detected contact intensity, and
second content, distinct from the first content, that does not exhibit the respective interactive behavior responsive to changes in detected contact intensity;
detecting a first input by a contact while a focus selector is over the interactive user interface object on the display;
in accordance with a determination that the first input meets first appearance-manipulation criteria, wherein the first appearance-manipulation criteria include a criterion that is met when a characteristic intensity of the contact exceeds a first intensity threshold during the first input, changing an appearance of the interactive user interface object based on the characteristic intensity of the contact, wherein changing the appearance of the interactive user interface object is independent of lateral movement of the contact across the touch-sensitive surface; and,
in accordance with a determination that the first input meets second appearance-manipulation criteria, wherein the second appearance-manipulation criteria include a criterion that is met when the characteristic intensity of the contact exceeds a second intensity threshold, greater than the first intensity threshold, during the first input, changing the appearance of the interactive user interface object based on lateral movement of the contact across the touch-sensitive surface that is detected after the characteristic intensity of the contact exceeds the second intensity threshold. | 2,100 |
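The escalating-threshold behavior recited in the claims above amounts to a small state machine: below a first intensity threshold the object's appearance is left alone (input may scroll instead), between the first and second thresholds the appearance tracks contact intensity independent of lateral movement, and once the second threshold is crossed the appearance tracks lateral movement, continuing to do so even if intensity later falls back below that threshold (claim 14). A minimal Python sketch of that logic follows; all names (PressInteraction, the threshold constants, update) are hypothetical illustrations, not terms from the patent, and the normalized intensity values are assumed:

```python
# Hedged sketch of the two-threshold appearance-manipulation logic
# described in the claims; names and threshold values are illustrative.

FIRST_THRESHOLD = 0.3   # first appearance-manipulation criterion (assumed value)
SECOND_THRESHOLD = 0.6  # second appearance-manipulation criterion (assumed value)

class PressInteraction:
    """Tracks one contact and reports how the object's appearance should change."""

    def __init__(self):
        # Latched once the characteristic intensity exceeds SECOND_THRESHOLD,
        # so lateral manipulation continues after intensity later decreases.
        self.lateral_mode = False

    def update(self, intensity, dx=0.0):
        """Process one input sample: a characteristic intensity and lateral movement dx."""
        if intensity > SECOND_THRESHOLD:
            self.lateral_mode = True
        if self.lateral_mode:
            # Second criteria met: appearance follows lateral movement detected
            # after the second threshold was crossed.
            return ("lateral", dx)
        if intensity > FIRST_THRESHOLD:
            # First criteria met: appearance follows intensity alone,
            # independent of lateral movement.
            return ("intensity", intensity)
        # Below the first threshold: no appearance change for this sample.
        return ("none", 0.0)
```

A third, higher threshold (claims 17-20) could be latched the same way to enter the increased-interaction display mode and cease displaying the second content.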
5,920 | 5,920 | 14,953,838 | 2,162 | A system for managing items. The system includes a sensor for identifying a first physical object and a second physical object in a building. A computer interface in communication with the sensor is configured to receive a first identification of the first physical object and a second identification of the second physical object. A computer with at least one central processing unit is configured to automatically search for trend information about the first physical object, determine a first interaction between the first physical object and the second physical object based on the first identification of the first physical object and the second identification of the second physical object, and recommend disposition of the first physical object based on the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. | 1. A system for managing items, the system comprising:
a sensor for identifying a first physical object and a second physical object in a building; a computer interface in communication with the sensor to receive a first identification of the first physical object and a second identification of the second physical object; and a computer including at least one central processing unit, the computer to:
automatically search for trend information about the first physical object;
determine a first interaction between the first physical object and the second physical object based on the first identification of the first physical object and the second identification of the second physical object; and
recommend disposition of the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 2. The system of claim 1, wherein the computer is configured to recommend a replacement object for the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 3. The system of claim 1, further comprising a database to store identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, time of ownership information about the first physical object and the second physical object, and descriptive information about the first interaction between the first physical object and the second physical object. 4. The system of claim 1, wherein the trend information about the first physical object includes a frequency that the first physical object is discussed on one or more websites during a unit of time. 5. The system of claim 1, wherein the computer is configured to:
determine a second interaction between the first physical object and a person based on the first identification of the first physical object and a third identification of the person; and recommend disposition of the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object. 6. The system of claim 5, wherein the computer is configured to recommend a replacement object for the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object. 7. The system of claim 5, wherein the trend information about the first physical object includes a frequency that the first physical object is discussed in a social network of the person during a unit of time. 8. The system of claim 5, further comprising a database to store identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, descriptive information about the person, time of ownership information about the first physical object and the second physical object, and descriptive information about the second interaction between the first physical object and the person. 9. The system of claim 1, further comprising a transmitter coupled to the first physical object to transmit the first identification of the first physical object to the sensor. 10. A method for managing items, the method comprising:
sensing by a sensor a first identification of a first physical object and a second identification of a second physical object in a building; receiving at a computer interface in communication with the sensor the first identification of the first physical object and the second identification of the second physical object; automatically searching for trend information about the first physical object; determining a first interaction between the first physical object and the second physical object based on the first identification of the first physical object and the second identification of the second physical object; and recommending disposition of the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 11. The method of claim 10, wherein recommending disposition of the first physical object includes recommending a replacement object for the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 12. The method of claim 10, further comprising storing, at a database, identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, time of ownership information about the first physical object and the second physical object, and descriptive information about the first interaction between the first physical object and the second physical object. 13. The method of claim 10, wherein the trend information about the first physical object includes a frequency that the first physical object is discussed on one or more websites during a unit of time. 14. The method of claim 10, further comprising:
determining a second interaction between the first physical object and a person based on the first identification of the first physical object and a third identification of the person; and recommending disposition of the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object. 15. The method of claim 14, further comprising recommending a replacement for the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object. 16. The method of claim 14, wherein the trend information about the first physical object includes a frequency that the first physical object is discussed in a social network of the person during a unit of time. 17. The method of claim 14, further comprising storing, at a database, identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, descriptive information about the person, time of ownership information about the first physical object and the second physical object, and descriptive information about the second interaction between the first physical object and the person. 18. The method of claim 10, further comprising transmitting by a transmitter coupled to the first physical object the first identification of the first physical object to the sensor. 19. A computer program product for managing items, the computer program product comprising:
a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code configured to: receive at a computer interface in communication with a sensor a first identification of a first physical object and a second identification of a second physical object; automatically search for trend information about the first physical object; determine a first interaction between the first physical object and the second physical object based on the first identification of the first physical object and the second identification of the second physical object; recommend disposition of the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object; and store at a database identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, time of ownership information about the first physical object and the second physical object, and descriptive information about the first interaction between the first physical object and the second physical object. 20. The computer program product of claim 19, wherein the computer readable program code is further configured to:
determine a second interaction between the first physical object and a person based on the first identification of the first physical object and a third identification of the person; and recommend disposition of the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object; and store at the database descriptive information about the person, and descriptive information about the second interaction between the first physical object and the person. | A system for managing items. The system includes a sensor for identifying a first physical object and a second physical object in a building. A computer interface in communication with the sensor is configured to receive a first identification of the first physical object and a second identification of the second physical object. A computer with at least one central processing unit is configured to automatically search for trend information about the first physical object, determine a first interaction between the first physical object and the second physical object based on the first identification of the first physical object and the second identification of the second physical object, and recommend disposition of the first physical object based on the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 1. A system for managing items, the system comprising:
a sensor for identifying a first physical object and a second physical object in a building; a computer interface in communication with the sensor to receive a first identification of the first physical object and a second identification of the second physical object; and a computer including at least one central processing unit, the computer to:
automatically search for trend information about the first physical object;
determine a first interaction between the first physical object and the second physical object based on the first identification of the first physical object and the second identification of the second physical object; and
recommend disposition of the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 2. The system of claim 1, wherein the computer is configured to recommend a replacement object for the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 3. The system of claim 1, further comprising a database to store identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, time of ownership information about the first physical object and the second physical object, and descriptive information about the first interaction between the first physical object and the second physical object. 4. The system of claim 1, wherein the trend information about the first physical object includes a frequency that the first physical object is discussed on one or more websites during a unit of time. 5. The system of claim 1, wherein the computer is configured to:
determine a second interaction between the first physical object and a person based on the first identification of the first physical object and a third identification of the person; and recommend disposition of the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object. 6. The system of claim 5, wherein the computer is configured to recommend a replacement object for the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object. 7. The system of claim 5, wherein the trend information about the first physical object includes a frequency that the first physical object is discussed in a social network of the person during a unit of time. 8. The system of claim 5, further comprising a database to store identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, descriptive information about the person, time of ownership information about the first physical object and the second physical object, and descriptive information about the second interaction between the first physical object and the person. 9. The system of claim 1, further comprising a transmitter coupled to the first physical object to transmit the first identification of the first physical object to the sensor. 10. A method for managing items, the method comprising:
sensing by a sensor a first identification of a first physical object and a second identification of a second physical object in a building; receiving at a computer interface in communication with the sensor the first identification of the first physical object and the second identification of the second physical object; automatically searching for trend information about the first physical object; determining a first interaction between the first physical object and the second physical object based on the first identification of the first physical object and the second identification of the second physical object; and recommending disposition of the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 11. The method of claim 10, wherein recommending disposition of the first physical object includes recommending a replacement object for the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object. 12. The method of claim 10, further comprising storing, at a database, identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, time of ownership information about the first physical object and the second physical object, and descriptive information about the first interaction between the first physical object and the second physical object. 13. The method of claim 10, wherein the trend information about the first physical object includes a frequency that the first physical object is discussed on one or more websites during a unit of time. 14. The method of claim 10, further comprising:
determining a second interaction between the first physical object and a person based on the first identification of the first physical object and a third identification of the person; and recommending disposition of the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object. 15. The method of claim 14, further comprising recommending a replacement for the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object. 16. The method of claim 14, wherein the trend information about the first physical object includes a frequency that the first physical object is discussed in a social network of the person during a unit of time. 17. The method of claim 14, further comprising storing, at a database, identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, descriptive information about the person, time of ownership information about the first physical object and the second physical object, and descriptive information about the second interaction between the first physical object and the person. 18. The method of claim 10, further comprising transmitting by a transmitter coupled to the first physical object the first identification of the first physical object to the sensor. 19. A computer program product for managing items, the computer program product comprising:
a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code configured to: receive at a computer interface in communication with a sensor a first identification of a first physical object and a second identification of a second physical object; automatically search for trend information about the first physical object; determine a first interaction between the first physical object and the second physical object based on the first identification of the first physical object and the second identification of the second physical object; recommend disposition of the first physical object based on at least the first interaction between the first physical object and the second physical object, and the trend information about the first physical object; and store at a database identification information for the first physical object and the second physical object, descriptive information about the first physical object and the second physical object, time of ownership information about the first physical object and the second physical object, and descriptive information about the first interaction between the first physical object and the second physical object. 20. The computer program product of claim 19, wherein the computer readable program code is further configured to:
determine a second interaction between the first physical object and a person based on the first identification of the first physical object and a third identification of the person; and recommend disposition of the first physical object based on at least the second interaction between the first physical object and the person, and the trend information about the first physical object; and store at the database descriptive information about the person, and descriptive information about the second interaction between the first physical object and the person. | 2,100 |
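The claimed method above combines a sensed interaction with trend information to recommend a disposition for an object. A minimal Python sketch of that decision flow follows; the thresholds, field names, and the keep/replace/discard policy are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    object_id: str   # first identification, e.g. from an RFID or sensor read
    other_id: str    # second identification (another object or a person)
    days_since: int  # days since the interaction was sensed

def recommend_disposition(interaction: Interaction,
                          trend_mentions_per_week: int,
                          idle_days: int = 90,
                          trend_threshold: int = 10) -> str:
    """Recommend a disposition for the first physical object based on
    its most recent interaction and trend information (how often it is
    discussed per unit of time). Thresholds are illustrative."""
    if interaction.days_since <= idle_days:
        return "keep"      # recently used: no action needed
    if trend_mentions_per_week >= trend_threshold:
        return "replace"   # idle but still popular: suggest a replacement object
    return "discard"       # idle and rarely discussed

# An object untouched for 200 days but trending heavily -> "replace"
print(recommend_disposition(Interaction("tennis-racket", "garage-rack", 200), 25))
```

The `trend_mentions_per_week` input corresponds to the claims' "frequency that the first physical object is discussed on one or more websites during a unit of time"; a real system would populate it from an automated search rather than pass it in directly.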
5,921 | 5,921 | 15,451,105 | 2,117 | A method for customizing an interactive control boundary based on a patient-specific anatomy includes obtaining a standard control boundary and determining an intersection between a reference feature associated with the standard control boundary and a virtual representation of an anatomy of the patient. The method further includes identifying an anatomic perimeter at the intersection between the identified reference feature and the virtual representation of the anatomy. An anatomic feature on the virtual representation of the anatomy is determined from the intersection of the reference feature and the virtual representation of the anatomy. The standard control boundary is modified based on at least one anatomic feature to generate a customized control boundary. | 1. A method for customizing an interactive control boundary based on a patient-specific anatomy, comprising:
obtaining a standard control boundary; determining an intersection between a reference feature associated with the standard control boundary and a virtual representation of an anatomy of the patient; identifying an anatomic perimeter at the intersection between the identified reference feature and the virtual representation of the anatomy; determining an anatomic feature on the virtual representation of the anatomy from the intersection of the reference feature and the virtual representation of the anatomy; and modifying the standard control boundary based on at least one anatomic feature to generate a customized control boundary. 2. The method of claim 1, wherein the anatomic feature is identified on the anatomic perimeter. 3. The method of claim 1, wherein the anatomic feature is identified on the virtual representation of the anatomy within or on the outside of the anatomic perimeter, or both. 4. The method of claim 1, wherein the anatomic feature is automatically determined. 5. The method of claim 1, wherein the anatomic feature is determined based on a received signal including information indicative of an anatomic feature manually identified by a user. 6. The method of claim 1, wherein the standard control boundary is based on at least one of a size and a shape of an implant model. 7. The method of claim 1, wherein the standard control boundary is based on at least one of a size of a cutting tool, a shape of a cutting tool, a planned approach angle of the cutting tool for performing a procedure, and a planned path of the cutting tool for performing the procedure. 8. The method of claim 7, wherein the standard control boundary is configured to accommodate the size and the shape of the cutting tool. 9. The method of claim 1, further including displaying at least one of the generated customized control boundary and the virtual representation associated with the anatomy of the patient on a display. 10. 
The method of claim 1, wherein the reference feature is a plane associated with the standard control boundary. 11. The method of claim 1, further comprising:
receiving a signal indicative of a user-defined modification to the standard control boundary or the customized control boundary; and modifying the standard control boundary or the customized control boundary based on the received signal. 12. The method of claim 1, further comprising:
receiving a signal indicative of a user-defined offset of the standard control boundary; and modifying the standard control boundary based, at least in part, on the received signal indicative of a user-defined offset. 13. The method of claim 1, further comprising:
receiving computed tomography (CT) data associated with the patient's anatomy; and generating a virtual model associated with the patient's anatomy based on the CT data. 14. The method of claim 1, further comprising:
generating the virtual representation associated with the patient's anatomy utilizing an imageless system. 15. The method of claim 1, wherein customizing the control boundary is performed pre-operatively. 16. The method of claim 1, wherein customizing the control boundary is performed intra-operatively. 17. The method of claim 1, further comprising modifying the customized control boundary to generate a second customized control boundary. 18. The method of claim 17, wherein the customized control boundary and the second customized control boundary are both generated pre-operatively or intra-operatively. 19. The method of claim 17, wherein the customized control boundary is generated pre-operatively and the second customized control boundary is generated intra-operatively. 20. A computer-assisted surgery system comprising:
a display; an input device configured to receive data input by a user; and a processor operatively coupled to the input device and the display and configured to:
identify a standard control boundary;
determine an intersection between a reference feature associated with the standard control boundary and a virtual representation of an anatomy of the patient;
identify an anatomic perimeter at the intersection between the identified reference feature and the virtual representation of the anatomy;
determine an anatomic feature on the virtual representation of the anatomy from the intersection of the reference feature and the virtual representation of the anatomy; and
modify the standard control boundary based on at least one anatomic feature to generate a customized control boundary; and
display the customized control boundary on the display. 21. The computer-assisted surgery system of claim 20, wherein the anatomic feature is identified on the anatomic perimeter. 22. The computer-assisted surgery system of claim 20, wherein the anatomic feature is identified on the virtual representation of the anatomy within or on the outside of the anatomic perimeter, or both. 23. The computer-assisted surgery system of claim 20, wherein the anatomic feature is automatically determined. 24. The computer-assisted surgery system of claim 20, wherein the anatomic feature is determined based on a received signal including information indicative of an anatomic feature manually identified by a user. 25. The computer-assisted surgery system of claim 20, wherein the standard control boundary is based on at least one of a size and a shape of an implant model. 26. The computer-assisted surgery system of claim 20, wherein the standard control boundary is based on at least one of a size of a cutting tool, a shape of a cutting tool, a planned approach angle of the cutting tool for performing a procedure, and a planned path of the cutting tool for performing the procedure. 27. The computer-assisted surgery system of claim 26, wherein the standard control boundary is configured to accommodate the size and the shape of the cutting tool. 28. The computer-assisted surgery system of claim 20, wherein the reference feature is a plane associated with the standard control boundary. 29. The computer-assisted surgery system of claim 20, wherein the processor is further configured to:
receive a signal including information indicative of anatomic features defined by a user; and identify the anatomic features based on the received signal. 30. The computer assisted surgery system of claim 20, wherein the processor is further configured to:
receive a signal indicative of a user-defined offset of the standard control boundary; and adjust the position of the standard control boundary based on the received signal. | A method for customizing an interactive control boundary based on a patient-specific anatomy includes obtaining a standard control boundary and determining an intersection between a reference feature associated with the standard control boundary and a virtual representation of an anatomy of the patient. The method further includes identifying an anatomic perimeter at the intersection between the identified reference feature and the virtual representation of the anatomy. An anatomic feature on the virtual representation of the anatomy is determined from the intersection of the reference feature and the virtual representation of the anatomy. The standard control boundary is modified based on at least one anatomic feature to generate a customized control boundary. 1. A method for customizing an interactive control boundary based on a patient-specific anatomy, comprising:
obtaining a standard control boundary; determining an intersection between a reference feature associated with the standard control boundary and a virtual representation of an anatomy of the patient; identifying an anatomic perimeter at the intersection between the identified reference feature and the virtual representation of the anatomy; determining an anatomic feature on the virtual representation of the anatomy from the intersection of the reference feature and the virtual representation of the anatomy; and modifying the standard control boundary based on at least one anatomic feature to generate a customized control boundary. 2. The method of claim 1, wherein the anatomic feature is identified on the anatomic perimeter. 3. The method of claim 1, wherein the anatomic feature is identified on the virtual representation of the anatomy within or on the outside of the anatomic perimeter, or both. 4. The method of claim 1, wherein the anatomic feature is automatically determined. 5. The method of claim 1, wherein the anatomic feature is determined based on a received signal including information indicative of an anatomic feature manually identified by a user. 6. The method of claim 1, wherein the standard control boundary is based on at least one of a size and a shape of an implant model. 7. The method of claim 1, wherein the standard control boundary is based on at least one of a size of a cutting tool, a shape of a cutting tool, a planned approach angle of the cutting tool for performing a procedure, and a planned path of the cutting tool for performing the procedure. 8. The method of claim 7, wherein the standard control boundary is configured to accommodate the size and the shape of the cutting tool. 9. The method of claim 1, further including displaying at least one of the generated customized control boundary and the virtual representation associated with the anatomy of the patient on a display. 10. 
The method of claim 1, wherein the reference feature is a plane associated with the standard control boundary. 11. The method of claim 1, further comprising:
receiving a signal indicative of a user-defined modification to the standard control boundary or the customized control boundary; and modifying the standard control boundary or the customized control boundary based on the received signal. 12. The method of claim 1, further comprising:
receiving a signal indicative of a user-defined offset of the standard control boundary; and modifying the standard control boundary based, at least in part, on the received signal indicative of a user-defined offset. 13. The method of claim 1, further comprising:
receiving computed tomography (CT) data associated with the patient's anatomy; and generating a virtual model associated with the patient's anatomy based on the CT data. 14. The method of claim 1, further comprising:
generating the virtual representation associated with the patient's anatomy utilizing an imageless system. 15. The method of claim 1, wherein customizing the control boundary is performed pre-operatively. 16. The method of claim 1, wherein customizing the control boundary is performed intra-operatively. 17. The method of claim 1, further comprising modifying the customized control boundary to generate a second customized control boundary. 18. The method of claim 17, wherein the customized control boundary and the second customized control boundary are both generated pre-operatively or intra-operatively. 19. The method of claim 17, wherein the customized control boundary is generated pre-operatively and the second customized control boundary is generated intra-operatively. 20. A computer-assisted surgery system comprising:
a display; an input device configured to receive data input by a user; and a processor operatively coupled to the input device and the display and configured to:
identify a standard control boundary;
determine an intersection between a reference feature associated with the standard control boundary and a virtual representation of an anatomy of the patient;
identify an anatomic perimeter at the intersection between the identified reference feature and the virtual representation of the anatomy;
determine an anatomic feature on the virtual representation of the anatomy from the intersection of the reference feature and the virtual representation of the anatomy; and
modify the standard control boundary based on at least one anatomic feature to generate a customized control boundary; and
display the customized control boundary on the display. 21. The computer-assisted surgery system of claim 20, wherein the anatomic feature is identified on the anatomic perimeter. 22. The computer-assisted surgery system of claim 20, wherein the anatomic feature is identified on the virtual representation of the anatomy within or on the outside of the anatomic perimeter, or both. 23. The computer-assisted surgery system of claim 20, wherein the anatomic feature is automatically determined. 24. The computer-assisted surgery system of claim 20, wherein the anatomic feature is determined based on a received signal including information indicative of an anatomic feature manually identified by a user. 25. The computer-assisted surgery system of claim 20, wherein the standard control boundary is based on at least one of a size and a shape of an implant model. 26. The computer-assisted surgery system of claim 20, wherein the standard control boundary is based on at least one of a size of a cutting tool, a shape of a cutting tool, a planned approach angle of the cutting tool for performing a procedure, and a planned path of the cutting tool for performing the procedure. 27. The computer-assisted surgery system of claim 26, wherein the standard control boundary is configured to accommodate the size and the shape of the cutting tool. 28. The computer-assisted surgery system of claim 20, wherein the reference feature is a plane associated with the standard control boundary. 29. The computer-assisted surgery system of claim 20, wherein the processor is further configured to:
receive a signal including information indicative of anatomic features defined by a user; and identify the anatomic features based on the received signal. 30. The computer assisted surgery system of claim 20, wherein the processor is further configured to:
receive a signal indicative of a user-defined offset of the standard control boundary; and adjust the position of the standard control boundary based on the received signal. | 2,100 |
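The central geometric step in the claims above is determining the intersection between a reference feature (per claim 10, a plane) and a virtual anatomy model, which yields the anatomic perimeter. A minimal sketch of that step, assuming the anatomy is a triangle mesh, is shown below; the function names and the triangle-soup representation are illustrative, and a clinical system would use a far more robust geometry kernel.

```python
def plane_triangle_intersection(tri, p0, n):
    """Return the points (0 to 2) where the plane through p0 with
    normal n cuts the edges of a 3-D triangle tri. Degenerate cases
    where a vertex lies exactly on the plane are ignored for brevity."""
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def sub(u, v): return tuple(a - b for a, b in zip(u, v))
    d = [dot(n, sub(v, p0)) for v in tri]  # signed distances to the plane
    pts = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = d[i], d[(i + 1) % 3]
        if da * db < 0:                    # edge endpoints straddle the plane
            t = da / (da - db)             # parametric crossing point
            pts.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return pts

def anatomic_perimeter(mesh, p0, n):
    """Collect all plane/edge crossings over a mesh (list of triangles);
    chained into segments, these points trace the anatomic perimeter."""
    return [p for tri in mesh for p in plane_triangle_intersection(tri, p0, n)]
```

Once the perimeter is known, anatomic features on or near it can be identified and used to trim or offset the standard control boundary into the customized one, as the claims describe.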
5,922 | 5,922 | 15,209,323 | 2,117 | Manufacturing of a shoe or a portion of a shoe is enhanced by automated placement of shoe parts. For example, a part-recognition system analyzes an image of a shoe part to identify the part and determine a location of the part. Once the part is identified and located, the part can be manipulated in an automated manner. | 1. A method for positioning a shoe part in an automated manner during a shoe-manufacturing process, the method comprising:
recording a first image of an attachment shoe part using at least one camera, the first image depicting a two-dimensional representation of the attachment shoe part; recording a second image of a base shoe part using the at least one camera, the second image depicting a two-dimensional representation of the base shoe part; identifying at least one reference feature of the attachment shoe part in the first image depicting the two-dimensional representation of the attachment shoe part; determining an identity of the attachment shoe part by comparing the at least one reference feature identified in the first image to at least one pre-determined reference feature of a shoe-part reference image; determining, from the first image, a first geometric coordinate of the attachment shoe part in a geometric coordinate system; determining, from the second image, a second geometric coordinate of the base shoe part in the geometric coordinate system; communicating the first and the second geometric coordinates to a part-transfer apparatus which operates in the geometric coordinate system; and transferring, by the part-transfer apparatus, the attachment shoe part from the first geometric coordinate to the second geometric coordinate. 2. The method of claim 1, further comprising attaching the attachment shoe part to the base shoe part at the second geometric coordinate. 3. The method of claim 1, further comprising determining an orientation of the attachment shoe part from the at least one reference feature of the first image. 4. The method of claim 3, further comprising determining, from the first image and the second image, a degree of rotation of the attachment shoe part required for alignment with the base shoe part. 5. 
The method of claim 4, wherein transferring, by the part-transfer apparatus, the attachment shoe part from the first geometric coordinate to the second geometric coordinate comprises transferring the attachment shoe part through the degree of rotation so that the attachment shoe part is aligned with the base shoe part for attachment. 6. The method of claim 1, wherein when the first image is recorded, the attachment shoe part is either:
held by the part-transfer apparatus; or maintained at a supply station from which the attachment shoe part is acquired by the part-transfer apparatus. 7. The method of claim 1, wherein the two-dimensional representation of the attachment shoe part is comprised of a two-dimensional shape having a perimeter, and wherein the at least one reference feature of the two-dimensional representation of the attachment shoe part is associated with the perimeter. 8. The method of claim 1, further comprising:
determining pixel coordinates of the first image that correspond to the at least one pre-determined reference feature; and converting, by a computer processor, the pixel coordinates of the first image to the first geometric coordinate, wherein the shoe-part reference image is stored in a datastore, and wherein the datastore stores a plurality of shoe-part reference images. 9. The method of claim 1, wherein the part-transfer apparatus utilizes at least one of a gripping structure, suction, electromagnetic forces, and surface tack. 10. A system that positions a shoe part in an automated manner during a shoe-manufacturing process, the system comprising:
at least one image recorder that records a first image of an attachment shoe part and a second image of a base shoe part, the first image depicting a two-dimensional representation of the attachment shoe part and the second image depicting a two-dimensional representation of the base shoe part; computer storage media having stored thereon computer-executable instructions that, when executed, cause a computing device to:
identify at least one reference feature of the two-dimensional representation of the attachment shoe part,
determine an identity of the attachment shoe part by comparing the at least one reference feature of the attachment shoe part to at least one predetermined reference feature of a shoe-part reference image,
determine, by analyzing the first image, a first geometric coordinate of the attachment shoe part in a geometric coordinate system, and
determine, by analyzing the second image, a second geometric coordinate of the base shoe part in the geometric coordinate system; and
a part-transfer apparatus, which operates in the geometric coordinate system, that is configured to:
receive the first geometric coordinate and the second geometric coordinate, and
transfer the attachment shoe part from the first geometric coordinate to the second geometric coordinate for attachment to the base shoe part. 11. The system of claim 10, wherein the at least one image recorder further comprises at least one first image recorder that records the first image and at least one second image recorder that records the second image. 12. The system of claim 10, wherein, when the first image is recorded, the attachment shoe part is either:
provided at a part-supply apparatus; or held by the part-transfer apparatus. 13. The system of claim 10, further comprising a light-emitting device that provides a backlight to the attachment shoe part when the first image is recorded. 14. A method for positioning a shoe part in an automated manner during a shoe-manufacturing process, the method comprising:
providing an attachment shoe part and a base shoe part; recording a first image of the attachment shoe part using at least one camera, the first image depicting a two-dimensional representation of the attachment shoe part; recording a second image of the base shoe part using the at least one camera, the second image depicting a two-dimensional representation of the base shoe part; identifying at least one reference feature of the attachment shoe part in the two-dimensional representation of the attachment shoe part; determining an identity of the attachment shoe part by comparing the at least one reference feature to at least one pre-determined reference feature of a shoe-part reference image; determining, from the first image, a first geometric coordinate of the attachment shoe part in a geometric coordinate system; determining, from the second image, a second geometric coordinate of the base shoe part in the geometric coordinate system; determining, from the first image and the second image, a degree of rotation of the attachment shoe part required for alignment with the base shoe part; and transferring, by a part-transfer apparatus, the attachment shoe part from the first geometric coordinate to the second geometric coordinate through the degree of rotation for alignment with the base shoe part. 15. The method of claim 14, further comprising attaching the attachment shoe part to the base shoe part at the second geometric coordinate. 16. The method of claim 14, wherein the two-dimensional representation of the attachment shoe part is comprised of a two-dimensional shape having a perimeter, and wherein the at least one reference feature is associated with the perimeter. 17. The method of claim 14, wherein, when the first image is recorded, the attachment shoe part is either:
held by the part-transfer apparatus; or maintained at a supply station from which the attachment shoe part is acquired by the part-transfer apparatus. 18. The method of claim 14, wherein the part transfer apparatus utilizes at least one of a gripping structure, suction, electromagnetic forces, and surface tack. 19. The method of claim 14, wherein the first image comprises a plurality of representations of attachment shoe parts, wherein each representation of the plurality of representations depicts a respective attachment shoe part that is to be transferred to the second geometric coordinate for attachment to the base shoe part. 20. The method of claim 19, further comprising analyzing the plurality of representations of attachment shoe parts to determine a respective geometric coordinate of each of the attachment shoe parts that are depicted in the first image. | Manufacturing of a shoe or a portion of a shoe is enhanced by automated placement of shoe parts. For example, a part-recognition system analyzes an image of a shoe part to identify the part and determine a location of the part. Once the part is identified and located, the part can be manipulated in an automated manner. | 2,100 |
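The shoe-part method above (claims 14, 4, and 5: a geometric coordinate for each part in a shared coordinate system, plus a degree of rotation needed for alignment) can be sketched numerically. This is a hedged illustration, not the patented implementation: the centroid-plus-principal-axis approach and every function name below are assumptions about how such coordinates and rotations might be computed from the two-dimensional representations.

```python
import math

def centroid(points):
    # Geometric coordinate of a part: mean of its perimeter points.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def orientation(points):
    # Principal-axis angle of the 2D point set, from second-order central moments.
    cx, cy = centroid(points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    mu20 = sum((x - cx) ** 2 for x, y in points)
    mu02 = sum((y - cy) ** 2 for x, y in points)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

def plan_transfer(attachment_pts, base_pts):
    # First/second geometric coordinates and the degree of rotation needed
    # to bring the attachment part into alignment with the base part.
    src = centroid(attachment_pts)
    dst = centroid(base_pts)
    rotation = math.degrees(orientation(base_pts) - orientation(attachment_pts))
    return src, dst, rotation
```

For an axis-aligned rectangle and the same rectangle rotated 90 degrees, `plan_transfer` reports the two centroids and a 90-degree rotation. Note that a principal axis is only defined modulo 180 degrees, so a real system would disambiguate using the reference features the claims identify on the part's perimeter.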
5,923 | 5,923 | 15,008,266 | 2,176 | Systems and methods for rendering dynamic content when converting a website to its static representation. A set of commands may be created with a syntax using the data attribute in HTML 5. Web designers may inject these attributes into the code of the webpages without affecting how the webpages will render in any browser that supports HTML 5. A specific and documented set of data attributes may indicate that the given element is a type of dynamic content. These data attributes will also indicate how to handle the dynamic elements such that a static representation of each visual state rendered in the browser may be generated accordingly. | 1. A computer-implemented method for converting a website to static representations, the method comprising:
receiving a request for static representations of a website over a network, wherein the website comprises a first webpage; determining that code of the first webpage comprises a dynamic content rendering attribute, wherein the dynamic content rendering attribute defines a dynamic interaction with the first webpage and comprises a type of the dynamic interaction and an area on the first webpage for receiving the interaction; enabling the dynamic interaction with the first webpage based on the dynamic content rendering attribute, comprising enabling the type of the dynamic interaction on the area on the first webpage for receiving the interaction; generating a static representation of the first webpage updated with the dynamic interaction; and sending the static representation in response to the request. 2. The method of claim 1, further comprising: determining that the website allows crawling and then crawling the website. 3. The method of claim 1, wherein the static representation comprises a PDF document. 4. The method of claim 1, wherein the dynamic interaction is to click on a first area on the first webpage. 5. The method of claim 1, wherein the dynamic interaction is to repeat an action on a second webpage for a predetermined number of times, and the method further comprises determining if the predetermined number of times has been reached. 6. The method of claim 1, wherein the dynamic interaction is to hover over a third webpage. 7. The method of claim 1, wherein the dynamic interaction is to scroll a fourth webpage. 8. The method of claim 1, wherein the dynamic interaction is to remove a floating ISI section on a fifth webpage. 9. The method of claim 1, wherein the dynamic interaction is to wait for a predetermined period of time before generating an image of a sixth webpage. 10. The method of claim 1, wherein the dynamic interaction is to fill a field on a seventh webpage with a specific value. 11. 
A system for converting a website to static representations, comprising:
a storage device; and a content converting server for:
receiving a request for static representations of a website over a network, wherein the website comprises a first webpage;
determining that code of the first webpage comprises a dynamic content rendering attribute, wherein the dynamic content rendering attribute defines a dynamic interaction with the first webpage and comprises a type of the dynamic interaction and an area on the first webpage for receiving the interaction;
enabling the dynamic interaction with the first webpage based on the dynamic content rendering attribute, comprising enabling the type of the dynamic interaction on the area on the first webpage for receiving the interaction;
generating a static representation of the first webpage updated with the dynamic interaction;
storing the static representation to the storage device; and
sending the static representation in response to the request. 12. The system of claim 11, wherein the static representation comprises a PDF document. 13. The system of claim 11, wherein the dynamic interaction is to click on a first area on the first webpage. 14. The system of claim 11, wherein the dynamic interaction is to repeat an action on a second webpage for a predetermined number of times, and the method further comprises determining if the predetermined number of times has been reached. 15. The system of claim 11, wherein the dynamic interaction is to hover over a third webpage. 16. The system of claim 11, wherein the dynamic interaction is to scroll a fourth webpage. 17. The system of claim 11, wherein the dynamic interaction is to remove a floating ISI section on a fifth webpage. 18. The system of claim 11, wherein the dynamic interaction is to wait for a predetermined period of time before generating an image of a sixth webpage. 19. The system of claim 11, wherein the dynamic interaction is to fill a field on a seventh webpage with a specific value. 20. A non-transitory computer-readable medium for rendering dynamic content when converting a website to its static representation, the computer-readable medium comprising instructions that, when executed by a computer, cause the computer to:
receive a request for static representations of a website over a network, wherein the website comprises a first webpage; determine that code of the first webpage comprises a dynamic content rendering attribute, wherein the dynamic content rendering attribute defines a dynamic interaction with the first webpage and comprises a type of the dynamic interaction and an area on the first webpage for receiving the interaction; enable the dynamic interaction with the first webpage based on the dynamic content rendering attribute, comprising enabling the type of the dynamic interaction on the area on the first webpage for receiving the interaction; generate a static representation of the first webpage updated with the dynamic interaction; and send the static representation in response to the request. | 2,100 |
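Claim 1 of the website-conversion method turns on detecting a dynamic content rendering attribute in the page code and reading out the interaction type and the target area. A minimal sketch using Python's standard `html.parser` follows; the `data-render-action` and `data-render-value` attribute names are invented for illustration — the application refers to its own documented set of HTML 5 data attributes, which are not reproduced here.

```python
from html.parser import HTMLParser

class DynamicAttributeScanner(HTMLParser):
    """Collects elements whose HTML5 data attributes declare a dynamic
    interaction (a type of interaction plus the area that receives it)."""

    def __init__(self):
        super().__init__()
        self.interactions = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Hypothetical attribute naming; a real system would use its
        # documented attribute set.
        if "data-render-action" in attrs:
            self.interactions.append({
                "area": attrs.get("id", tag),             # where the interaction applies
                "type": attrs["data-render-action"],      # e.g. click, hover, scroll, wait
                "value": attrs.get("data-render-value"),  # optional argument (count, delay, text)
            })

def find_dynamic_interactions(page_source):
    # Returns the dynamic interactions declared in a page's code, in document order.
    scanner = DynamicAttributeScanner()
    scanner.feed(page_source)
    return scanner.interactions
```

Because `data-*` attributes are ignored by browsers that do not act on them, injecting such markers leaves normal rendering untouched, which is the property the abstract relies on.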
5,924 | 5,924 | 15,413,180 | 2,177 | Aspects provided herein are relevant to input systems, such as virtual input elements that allow for entry of text and other input by a user. Aspects can provide the user with, for example, context-aware communication options presented at a sentence or phrase level that are customized to the user's personal communication style. | 1. A computer-implemented method for a virtual input system, the method comprising:
obtaining user data from one or more data sources, the user data indicative of a personal communication style of a user; generating a user communication model based, in part, on the user data; obtaining data regarding a current communication context, the data comprising data regarding a communication medium; generating a plurality of sentences for use in the current communication context based, in part, on the user communication model and the data regarding the current communication context; and causing the plurality of sentences to be provided to the user for use over the communication medium. 2. The method of claim 1, wherein the plurality of sentences is a first plurality of sentences, and wherein the method further comprises:
receiving a reword command; responsive to receiving the reword command, generating a second plurality of sentences for use in the current communication context based, in part, on the user communication model and the data regarding the current communication context, wherein at least one of the second plurality of sentences is different from the sentences of the first plurality of sentences. 3. The method of claim 2, further comprising:
updating the user communication model based in part on the received reword command. 4. The method of claim 1, further comprising:
receiving a selection of a word input mode; and responsive to receiving the selection of the word input mode, generating a first plurality of words, the first plurality of words matching a communication style of the user in the current communication context based on the user communication model; and causing the plurality of words to be provided to the user for individual selection and use over the communication medium. 5. The method of claim 1, further comprising:
receiving a selection of an alternate communication model; and wherein generating the plurality of sentences for use in the current communication context is further based, in part, on the alternate communication model. 6. The method of claim 1, wherein generating the user communication model comprises:
generating a diction model for the user; and generating a syntax model for the user. 7. The method of claim 6, wherein generating the plurality of sentences comprises:
for each sentence of the plurality of sentences, selecting a word of the respective sentence based on the diction model of the user, and selecting the word based on the syntax model of the user. 8. The method of claim 1, wherein the one or more data sources comprise a data source selected from the group consisting of: language corpus data, social media data, communication history data, and user preferences. 9. The method of claim 1, wherein the data regarding the current communication context comprises data indicating one or more of: a user's location, calendar events of the user, a time of day, a communication target of the communication medium, a current activity of the user, and recent activity regarding the communication medium. 10. The method of claim 1, wherein the communication medium comprises software that enables a person to initiate or respond to data transfer. 11. A non-transitory computer-readable medium having computer-executable instructions stored thereon that, when executed by a processor, cause the processor to:
receive a request for input to a communication medium; obtain a communication context, the communication context comprising data regarding the communication medium; provide the communication context to a communication engine, the communication engine configured to emulate a communication style of a user; receive, from the communication engine, a plurality of sentences generated based on the communication context and the communication style of the user; and make the plurality of sentences available for selection by the user at a user interface as the input to the communication medium. 12. The non-transitory computer-readable medium of claim 11, wherein the instructions further comprise instructions that when executed by the processor cause the processor to:
receive a selected sentence of the plurality of sentences; receive a send command; and provide the selected sentence as the input to the communication medium. 13. The non-transitory computer-readable medium of claim 11, wherein the plurality of sentences is a first plurality of sentences, and wherein the instructions further comprise instructions that when executed by the processor cause the processor to:
receive a reword command; and responsive to receiving the reword command, obtain a second plurality of sentences from the communication engine, the second plurality of sentences generated based on the communication style of the user and the communication context, wherein the second plurality of sentences is different from the first plurality of sentences. 14. The non-transitory computer-readable medium of claim 11, wherein the instructions further comprise instructions that when executed by the processor cause the processor to:
receive, from the communication engine, a plurality of information packages generated based on the communication context and the communication style of the user; and make the plurality of information packages available for selection by the user at the user interface as the input to the communication medium. 15. The non-transitory computer-readable medium of claim 14, wherein the plurality of information packages comprise an information package selected from the group consisting of: an event from a calendar application, a location from a mapping application, and a contact from a contacts application. 16. A computer-implemented method comprising:
obtaining a first plurality of sentences from a communication engine, the first plurality of sentences matching a communication style in a current communication context based on a communication model, the current communication context comprising a communication medium; making the first plurality of sentences available for selection by a user over a user interface; receiving a selection of a sentence of the first plurality of sentences over the user interface; receiving a reword command from the user over the user interface; responsive to receiving the reword command, obtaining a second plurality of sentences based on the selected sentence from the communication engine, the second plurality of sentences matching the communication style in the current communication context based on the communication model and at least one of the second plurality of sentences being different from the sentences of the first plurality of sentences; and making the second plurality of sentences available for selection by the user over the user interface. 17. The method of claim 16, wherein the communication style is a communication style of the user; and wherein the communication model is a communication model of the user. 18. The method of claim 16, further comprising:
receiving a selection of an alternate communication model; and setting the communication style to an alternate communication style modeled by the alternate communication model; and setting the communication model to the alternate communication model. 19. The method of claim 16, further comprising:
receiving a selection of a word input mode; and responsive to receiving the selection of the word input mode, making a first plurality of words available for selection by the user at the user interface, the first plurality of words matching the communication style in the current communication context based on the communication model. 20. The method of claim 16, further comprising:
receiving a selection of a second sentence of the second plurality of sentences; receiving a send command from the user over the user interface; and providing the second sentence to the communication medium.
obtaining a first plurality of sentences from a communication engine, the first plurality of sentences matching a communication style in a current communication context based on a communication model, the current communication context comprising a communication medium; making the first plurality of sentences available for selection by a user over a user interface; receiving a selection of a sentence of the first plurality of sentences over the user interface; receiving a reword command from the user over the user interface; responsive to receiving the reword command, obtaining a second plurality of sentences based on the selected sentence from the communication engine, the second plurality of sentences matching the communication style in the current communication context based on the communication model and at least one of the second plurality of sentences being different from the sentences of the first plurality of sentences; and making the second plurality of sentences available for selection by the user over the user interface. 17. The method of claim 16, wherein the communication style is a communication style of the user; and wherein the communication model is a communication model of the user. 18. The method of claim 16, further comprising:
receiving a selection of an alternate communication model; setting the communication style to an alternate communication style modeled by the alternate communication model; and setting the communication model to the alternate communication model. 19. The method of claim 16, further comprising:
receiving a selection of a word input mode; and responsive to receiving the selection of the word input mode, making a first plurality of words available for selection by the user at the user interface, the first plurality of words matching the communication style in the current communication context based on the communication model. 20. The method of claim 16, further comprising:
receiving a selection of a second sentence of the second plurality of sentences; receiving a send command from the user over the user interface; and providing the second sentence to the communication medium. | 2,100 |
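As a rough illustration of claims 6 and 7 above (a diction model and a syntax model jointly driving sentence generation), the following toy sketch picks each word from a per-user diction ranking inside per-user sentence templates. Every class and field name here is hypothetical, and the models are trivially simplified; this is not the patented implementation.

```python
class UserCommunicationModel:
    """Toy stand-in for the claimed user communication model."""

    def __init__(self, diction, syntax):
        # diction: context -> slot name -> words ranked by how often the user uses them
        # syntax: sentence templates ranked by how often the user uses them
        self.diction = diction
        self.syntax = syntax

    def generate_sentences(self, context, count=3):
        """Return up to `count` candidate sentences for the given context."""
        sentences = []
        for template in self.syntax[:count]:
            # Claim 7: each word is chosen per the diction model, within a
            # sentence structure supplied by the syntax model.
            words = {slot: choices[0] for slot, choices in self.diction[context].items()}
            sentences.append(template.format(**words))
        return sentences


model = UserCommunicationModel(
    diction={"reply": {"greeting": ["Hey", "Hi"], "closing": ["thanks!", "cheers"]}},
    syntax=["{greeting}, got your message, {closing}", "{greeting}. Will reply soon, {closing}"],
)
candidates = model.generate_sentences("reply")
```

A real engine would rank candidates statistically rather than always taking the top-ranked word, but the split between diction (word choice) and syntax (sentence structure) mirrors the claim language.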
5,925 | 5,925 | 13,769,453 | 2,158 | A method for facilitating discovery of electronic data stored in a data storage system. The method includes generating a snapshot of the electronic data, wherein the snapshot permits read access to the data, and a copy-on-write technique is used to perform modifications to the data, such that the snapshot is immutable but ongoing user operations with respect to the data can be performed substantially without interruption. The method also includes transmitting data of the snapshot over a network to a data cache server to which an analysis computer system is communicatively coupled. In some embodiments, the data cache server may store a local copy of the transmitted data of the snapshot. In this regard, the data cache server may determine whether data requested by the analysis computer system is stored locally, and if so, the data cache server may transmit the data requested directly to the analysis computer system. | 1. A method for facilitating discovery of electronic data stored in a data storage system, the method comprising:
generating a snapshot of the electronic data, wherein the snapshot permits read access to the data, and a copy-on-write technique is used to perform modifications to the data, such that the snapshot is immutable but ongoing user operations with respect to the data can be performed substantially without interruption; and providing access to the snapshot of the electronic data to an analysis computer system. 2. The method of claim 1, wherein providing the access to the snapshot is provided over a network. 3. The method of claim 2, wherein the analysis computer system is remote from the data storage system. 4. The method of claim 2, wherein providing the access to the snapshot comprises transmitting data of the snapshot over the network to a data cache server to which the analysis computer system is communicatively coupled. 5. The method of claim 4, wherein the data of the snapshot is transmitted upon request from the data cache server. 6. The method of claim 5, further comprising storing a local copy of the transmitted data of the snapshot at the data cache server. 7. The method of claim 4, further comprising generating a security key required for accessing the snapshot data and transmitting the security key to the data cache server. 8. The method of claim 4, wherein transmitting data of the snapshot over the network comprises transmitting the data utilizing a block-based protocol. 9. The method of claim 6, further comprising receiving a request for particular data of the snapshot from the analysis computer system at the data cache server. 10. The method of claim 9, further comprising determining whether the data requested by the analysis computer system is stored locally at the data cache server and if the data requested by the analysis computer system is stored locally at the data cache server, transmitting the data requested by the analysis computer system from the data cache server to the analysis computer system. 11. 
The method of claim 10, wherein if the data requested by the analysis computer system is not stored locally at the data cache server, transmitting data of the snapshot, corresponding to the data requested by the analysis computer system, over the network to the data cache server. 12. The method of claim 11, further comprising encrypting the data of the snapshot for transmission over the network. 13. An information handling system comprising:
a data storage site comprising a data storage system and a system controller, the system controller communicatively coupled with the data storage system and managing access to the data storage system by one or more user computer systems, the system controller further managing a snapshot of data of the data storage system, wherein the snapshot permits read access to the corresponding data, and the system controller utilizes a copy-on-write technique to perform modifications to the corresponding data, such that the snapshot is immutable but ongoing input/output (I/O) requests from the one or more user computer systems with respect to the data are performed substantially without interruption; and an e-discovery site comprising a data cache server and one or more analysis computer systems communicatively coupled with the data cache server, wherein the system controller and data cache server are communicatively coupled via a computer network; wherein, upon request from the data cache server, the system controller provides access to the snapshot of data. 14. The information handling system of claim 13, wherein the system controller and data cache server are remotely connected via the computer network. 15. The information handling system of claim 14, wherein, upon request from the data cache server, the system controller transmits data of the snapshot over the network. 16. The information handling system of claim 15, wherein the data cache server stores a local copy of the transmitted data of the snapshot. 17. The information handling system of claim 14, wherein the data cache server is configured to receive requests for particular data of the snapshot from the one or more analysis computer systems. 18. 
The information handling system of claim 17, wherein, for each request received from the one or more analysis computer systems, the data cache server is configured to determine whether the data requested is stored locally at the data cache server and based on the determination, fulfill the request locally or request data from the system controller. 19. The information handling system of claim 17, wherein data transmitted by the system controller is encrypted. 20. The information handling system of claim 17, wherein data transmitted by the system controller follows a block-based protocol. | A method for facilitating discovery of electronic data stored in a data storage system. The method includes generating a snapshot of the electronic data, wherein the snapshot permits read access to the data, and a copy-on-write technique is used to perform modifications to the data, such that the snapshot is immutable but ongoing user operations with respect to the data can be performed substantially without interruption. The method also includes transmitting data of the snapshot over a network to a data cache server to which an analysis computer system is communicatively coupled. In some embodiments, the data cache server may store a local copy of the transmitted data of the snapshot. In this regard, the data cache server may determine whether data requested by the analysis computer system is stored locally, and if so, the data cache server may transmit the data requested directly to the analysis computer system.1. A method for facilitating discovery of electronic data stored in a data storage system, the method comprising:
generating a snapshot of the electronic data, wherein the snapshot permits read access to the data, and a copy-on-write technique is used to perform modifications to the data, such that the snapshot is immutable but ongoing user operations with respect to the data can be performed substantially without interruption; and providing access to the snapshot of the electronic data to an analysis computer system. 2. The method of claim 1, wherein providing the access to the snapshot is provided over a network. 3. The method of claim 2, wherein the analysis computer system is remote from the data storage system. 4. The method of claim 2, wherein providing the access to the snapshot comprises transmitting data of the snapshot over the network to a data cache server to which the analysis computer system is communicatively coupled. 5. The method of claim 4, wherein the data of the snapshot is transmitted upon request from the data cache server. 6. The method of claim 5, further comprising storing a local copy of the transmitted data of the snapshot at the data cache server. 7. The method of claim 4, further comprising generating a security key required for accessing the snapshot data and transmitting the security key to the data cache server. 8. The method of claim 4, wherein transmitting data of the snapshot over the network comprises transmitting the data utilizing a block-based protocol. 9. The method of claim 6, further comprising receiving a request for particular data of the snapshot from the analysis computer system at the data cache server. 10. The method of claim 9, further comprising determining whether the data requested by the analysis computer system is stored locally at the data cache server and if the data requested by the analysis computer system is stored locally at the data cache server, transmitting the data requested by the analysis computer system from the data cache server to the analysis computer system. 11. 
The method of claim 10, wherein if the data requested by the analysis computer system is not stored locally at the data cache server, transmitting data of the snapshot, corresponding to the data requested by the analysis computer system, over the network to the data cache server. 12. The method of claim 11, further comprising encrypting the data of the snapshot for transmission over the network. 13. An information handling system comprising:
a data storage site comprising a data storage system and a system controller, the system controller communicatively coupled with the data storage system and managing access to the data storage system by one or more user computer systems, the system controller further managing a snapshot of data of the data storage system, wherein the snapshot permits read access to the corresponding data, and the system controller utilizes a copy-on-write technique to perform modifications to the corresponding data, such that the snapshot is immutable but ongoing input/output (I/O) requests from the one or more user computer systems with respect to the data are performed substantially without interruption; and an e-discovery site comprising a data cache server and one or more analysis computer systems communicatively coupled with the data cache server, wherein the system controller and data cache server are communicatively coupled via a computer network; wherein, upon request from the data cache server, the system controller provides access to the snapshot of data. 14. The information handling system of claim 13, wherein the system controller and data cache server are remotely connected via the computer network. 15. The information handling system of claim 14, wherein, upon request from the data cache server, the system controller transmits data of the snapshot over the network. 16. The information handling system of claim 15, wherein the data cache server stores a local copy of the transmitted data of the snapshot. 17. The information handling system of claim 14, wherein the data cache server is configured to receive requests for particular data of the snapshot from the one or more analysis computer systems. 18. 
The information handling system of claim 17, wherein, for each request received from the one or more analysis computer systems, the data cache server is configured to determine whether the data requested is stored locally at the data cache server and based on the determination, fulfill the request locally or request data from the system controller. 19. The information handling system of claim 17, wherein data transmitted by the system controller is encrypted. 20. The information handling system of claim 17, wherein data transmitted by the system controller follows a block-based protocol. | 2,100 |
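The two mechanisms at the core of the snapshot claims above, a copy-on-write snapshot that stays immutable under ongoing writes (claim 1) and a cache server that fulfills requests locally when it can (claims 6, 10, and 11), can be sketched as follows. The dict-based block store and all names are illustrative assumptions, not the patented implementation.

```python
class CowSnapshotStore:
    """Block store whose snapshot stays immutable under ongoing user writes."""

    def __init__(self, blocks):
        self.live = dict(blocks)      # blocks visible to ongoing user I/O
        self.snapshot = dict(blocks)  # frozen view taken at snapshot time

    def write(self, block_id, data):
        # Copy-on-write: user writes land in the live view only, so the
        # snapshot stays immutable while I/O continues uninterrupted.
        self.live[block_id] = data

    def read_snapshot(self, block_id):
        return self.snapshot[block_id]


class DataCacheServer:
    """E-discovery cache that serves locally when possible (claims 10-11)."""

    def __init__(self, store):
        self.store = store
        self.local = {}

    def request(self, block_id):
        if block_id in self.local:                  # claim 10: fulfill locally
            return self.local[block_id]
        data = self.store.read_snapshot(block_id)   # claim 11: fetch from the storage site
        self.local[block_id] = data                 # claim 6: keep a local copy
        return data


store = CowSnapshotStore({0: b"contract-v1"})
cache = DataCacheServer(store)
store.write(0, b"contract-v2")  # ongoing user modification after the snapshot
```

After the write, `cache.request(0)` still returns `b"contract-v1"`: the analysis side sees the frozen snapshot while user operations proceed against the live copy.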
5,926 | 5,926 | 15,387,332 | 2,186 | Examples construct a bootloader address space using a page fault exception. A bootloader executing in machine address (MA) space determines the MA at which the bootloader has been loaded into memory. The bootloader calculates a difference between an expected virtual address (VA) and the loaded MA. The bootloader defines a page table mapping the bootloader MA to an expected VA, and sets an exception handling vector to point to the expected VA. When a memory management unit (MMU) utilizing the defined page table for address translation is enabled, a page fault exception occurs. The page fault exception handling resumes execution of the bootloader at the expected VA via an exception handling vector pointing thereto. | 1. A computer-implemented method for constructing bootloader address space, the method comprising:
determining a machine address (MA) at which a bootloader has been loaded into memory; determining a difference between an expected virtual address (VA) and the loaded MA; defining, based on the determined difference, a page table that maps the bootloader loaded at the determined MA to the expected VA; setting an exception handling vector to point to the expected VA associated with the bootloader; enabling a memory management unit (MMU) which uses the defined page table for address translation; and in response to a page fault exception resulting from enabling the MMU, executing the bootloader at the expected VA via the exception handling vector. 2. The method of claim 1, further comprising determining that an alignment of the loaded MA of the bootloader matches an alignment of the expected VA prior to defining the page table. 3. The method of claim 1, wherein determining the difference further comprises checking a size criteria associated with the bootloader, wherein execution is continued if the size is less than or equal to an alignment value, and wherein the attempted boot fails if the size of the bootloader exceeds the alignment value. 4. The method of claim 1, further comprising converting a VA of the bootloader to a MA based on the determined difference. 5. The method of claim 1, wherein a first portion of the bootloader executes in a machine address space and further comprising resuming execution of a second portion of the bootloader in a virtual address space following resolution of the page fault exception. 6. The method of claim 1, further comprising flushing a translation lookaside buffer (TLB) upon enabling the MMU to generate the page fault exception. 7. The method of claim 1, further comprising failing attempted execution of the bootloader on determining an alignment of the loaded MA does not match an alignment of the expected VA associated with the bootloader. 8. A system for constructing bootloader address space, said system comprising:
at least one memory storing a bootloader and an exception handler, the bootloader comprising a first portion of the bootloader and a position-dependent second portion of the bootloader; a memory management unit (MMU); and at least one processor programmed to execute the bootloader to:
determine a machine address (MA) at which the bootloader has been loaded into memory;
determine a difference between an expected virtual address (VA) and the loaded MA;
define, based on the determined difference, a page table that maps the bootloader loaded at the determined MA to the expected VA;
set an exception handling vector to point to the expected VA associated with the bootloader;
enable the MMU which uses the defined page table for address translation; and
in response to a page fault exception resulting from enabling the MMU, execute the bootloader at the expected VA via the exception handling vector. 9. The system of claim 8, wherein the at least one processor is further programmed to execute the bootloader to determine that an alignment of the loaded MA of the bootloader matches an alignment of the expected VA prior to defining the page table. 10. The system of claim 8, wherein the at least one processor is further programmed to check a size criteria associated with the bootloader, wherein execution is continued if the size is less than or equal to an alignment value, and wherein the attempted boot fails if the size of the bootloader exceeds the alignment value. 11. The system of claim 8, wherein the at least one processor is further programmed to convert a VA of the bootloader to a MA based on the determined difference. 12. The system of claim 8, wherein a first portion of the bootloader executes in a machine address space and further comprising:
a virtual address space, wherein execution of a second portion of the bootloader resumes in the virtual address space following resolution of the page fault exception. 13. The system of claim 8, wherein the at least one processor is further programmed to fail attempted execution of the bootloader on determining that an alignment of the loaded MA does not match an alignment of the expected VA associated with the bootloader. 14. The system of claim 8, wherein the at least one processor is further programmed to execute the bootloader to flush a translation lookaside buffer (TLB) upon enabling the MMU to generate the page fault exception. 15. One or more computer storage media embodying computer-executable components, said components comprising:
a bootloader component having a first portion that is executed to cause at least one processor to determine a machine address (MA) at which the bootloader component has been loaded into memory; determine a difference between an expected virtual address (VA) and the loaded MA; define a page table that maps the bootloader component loaded at the determined MA to the expected VA based on the determined difference; set an exception handling vector to point to the expected VA associated with a second portion of the bootloader component; and enable a memory management unit (MMU) which uses the defined page table for address translation; and execute the second portion of the bootloader component at the expected VA via the exception handling vector in response to a page fault exception resulting from enabling the MMU. 16. The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to determine that an alignment of the loaded MA of the bootloader component matches an alignment of the expected VA prior to defining the page table. 17. The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to check a size criteria associated with the bootloader component, wherein execution is continued if the size is less than or equal to an alignment value, and wherein the attempted boot fails if the size of the bootloader component exceeds the alignment value. 18. The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to convert a VA of the bootloader component to a MA based on the determined difference. 19. 
The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to execute the first portion of the bootloader in a machine address space and resume execution of the second portion of the bootloader in a virtual address space following resolution of the page fault exception. 20. The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to fail attempted execution of the bootloader component on determining an alignment of the loaded MA does not match an alignment of the expected VA associated with the bootloader component. | Examples construct a bootloader address space using a page fault exception. A bootloader executing in machine address (MA) space determines the MA at which the bootloader has been loaded into memory. The bootloader calculates a difference between an expected virtual address (VA) and the loaded MA. The bootloader defines a page table mapping the bootloader MA to an expected VA, and sets an exception handling vector to point to the expected VA. When a memory management unit (MMU) utilizing the defined page table for address translation is enabled, a page fault exception occurs. The page fault exception handling resumes execution of the bootloader at the expected VA via an exception handling vector pointing thereto.1. A computer-implemented method for constructing bootloader address space, the method comprising:
determining a machine address (MA) at which a bootloader has been loaded into memory; determining a difference between an expected virtual address (VA) and the loaded MA; defining, based on the determined difference, a page table that maps the bootloader loaded at the determined MA to the expected VA; setting an exception handling vector to point to the expected VA associated with the bootloader; enabling a memory management unit (MMU) which uses the defined page table for address translation; and in response to a page fault exception resulting from enabling the MMU, executing the bootloader at the expected VA via the exception handling vector. 2. The method of claim 1, further comprising determining that an alignment of the loaded MA of the bootloader matches an alignment of the expected VA prior to defining the page table. 3. The method of claim 1, wherein determining the difference further comprises checking a size criteria associated with the bootloader, wherein execution is continued if the size is less than or equal to an alignment value, and wherein the attempted boot fails if the size of the bootloader exceeds the alignment value. 4. The method of claim 1, further comprising converting a VA of the bootloader to a MA based on the determined difference. 5. The method of claim 1, wherein a first portion of the bootloader executes in a machine address space and further comprising resuming execution of a second portion of the bootloader in a virtual address space following resolution of the page fault exception. 6. The method of claim 1, further comprising flushing a translation lookaside buffer (TLB) upon enabling the MMU to generate the page fault exception. 7. The method of claim 1, further comprising failing attempted execution of the bootloader on determining an alignment of the loaded MA does not match an alignment of the expected VA associated with the bootloader. 8. A system for constructing bootloader address space, said system comprising:
at least one memory storing a bootloader and an exception handler, the bootloader comprising a first portion of the bootloader and a position-dependent second portion of the bootloader; a memory management unit (MMU); and at least one processor programmed to execute the bootloader to:
determine a machine address (MA) at which the bootloader has been loaded into memory;
determine a difference between an expected virtual address (VA) and the loaded MA;
define, based on the determined difference, a page table that maps the bootloader loaded at the determined MA to the expected VA;
set an exception handling vector to point to the expected VA associated with the bootloader;
enable the MMU which uses the defined page table for address translation; and
in response to a page fault exception resulting from enabling the MMU, execute the bootloader at the expected VA via the exception handling vector. 9. The system of claim 8, wherein the at least one processor is further programmed to execute the bootloader to determine that an alignment of the loaded MA of the bootloader matches an alignment of the expected VA prior to defining the page table. 10. The system of claim 8, wherein the at least one processor is further programmed to check a size criteria associated with the bootloader, wherein execution is continued if the size is less than or equal to an alignment value, and wherein the attempted boot fails if the size of the bootloader exceeds the alignment value. 11. The system of claim 8, wherein the at least one processor is further programmed to convert a VA of the bootloader to a MA based on the determined difference. 12. The system of claim 8, wherein a first portion of the bootloader executes in a machine address space and further comprising:
a virtual address space, wherein execution of a second portion of the bootloader resumes in the virtual address space following resolution of the page fault exception. 13. The system of claim 8, wherein the at least one processor is further programmed to fail attempted execution of the bootloader on determining that an alignment of the loaded MA does not match an alignment of the expected VA associated with the bootloader. 14. The system of claim 8, wherein the at least one processor is further programmed to execute the bootloader to flush a translation lookaside buffer (TLB) upon enabling the MMU to generate the page fault exception. 15. One or more computer storage media embodying computer-executable components, said components comprising:
a bootloader component having a first portion that is executed to cause at least one processor to determine a machine address (MA) at which the bootloader component has been loaded into memory; determine a difference between an expected virtual address (VA) and the loaded MA; define a page table that maps the bootloader component loaded at the determined MA to the expected VA based on the determined difference; set an exception handling vector to point to the expected VA associated with a second portion of the bootloader component; and enable a memory management unit (MMU) which uses the defined page table for address translation; and execute the second portion of the bootloader component at the expected VA via the exception handling vector in response to a page fault exception resulting from enabling the MMU. 16. The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to determine that an alignment of the loaded MA of the bootloader component matches an alignment of the expected VA prior to defining the page table. 17. The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to check a size criteria associated with the bootloader component, wherein execution is continued if the size is less than or equal to an alignment value, and wherein the attempted boot fails if the size of the bootloader component exceeds the alignment value. 18. The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to convert a VA of the bootloader component to a MA based on the determined difference. 19. 
The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to execute the first portion of the bootloader in a machine address space and resume execution of the second portion of the bootloader in a virtual address space following resolution of the page fault exception. 20. The computer storage media of claim 15, wherein the bootloader component, upon further execution, causes the at least one processor to fail attempted execution of the bootloader component on determining an alignment of the loaded MA does not match an alignment of the expected VA associated with the bootloader component. | 2,100 |
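The relocation arithmetic recited in the bootloader claims above (the loaded MA, the expected VA, the alignment and size checks of claims 2 and 3, the VA-to-MA difference of claim 4, and the page-table entry of claim 1) can be sketched in a few lines. This is an illustrative simulation under stated assumptions, not the patented implementation: the 4 KiB page size, the function names, and the dict-based page table are all assumptions, and the exception-vector/MMU-enable step is only noted in comments.

```python
PAGE = 0x1000  # assumed 4 KiB pages / alignment value

def build_boot_page_table(loaded_ma, expected_va, size, align=PAGE):
    # Claims 2/7: the alignment of the loaded MA must match the alignment
    # of the expected VA, otherwise the attempted boot fails.
    if loaded_ma % align != expected_va % align:
        raise RuntimeError("boot failed: MA/VA alignment mismatch")
    # Claim 3: size criteria - continue only if the bootloader fits within
    # the alignment value.
    if size > align:
        raise RuntimeError("boot failed: bootloader size exceeds alignment value")
    delta = expected_va - loaded_ma  # claim 1: difference between expected VA and loaded MA
    # One page-table entry mapping the expected VA page onto the loaded MA page.
    # Enabling the MMU with this table (after flushing the TLB, claim 6) faults,
    # and the exception vector resumes execution at the expected VA.
    table = {expected_va & ~(align - 1): loaded_ma & ~(align - 1)}
    return delta, table

def va_to_ma(va, delta):
    # Claim 4: convert a bootloader VA to an MA using the determined difference.
    return va - delta

delta, table = build_boot_page_table(loaded_ma=0x0008_0000,
                                     expected_va=0xC000_0000,
                                     size=0x800)
```

With these example addresses, `va_to_ma(0xC000_0010, delta)` yields `0x0008_0010`: any position-dependent reference in the second portion of the bootloader resolves correctly once the MMU translates through the table.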
5,927 | 5,927 | 14,540,656 | 2,136 | An input/output bridge controls access to a memory by a number of devices and maintains an order of access requests under virtualization. In particular, the bridge manages and enforces order among multiple independent threads of requests to a memory. The bridge populates a number of ordered lists with received access requests based on a corresponding identifier of each access request. A top list is also maintained, where the top list is populated with access requests and a corresponding translated physical address. The bridge forwards access requests from the top list, maintaining the order of each of the independent threads. | 1. A memory control circuit comprising:
a device interface configured to:
receive a plurality of access requests to access a memory from a plurality of devices;
parse each of the plurality of access requests to retrieve a respective transaction identifier; and
update a plurality of ordered lists having entries corresponding to the plurality of access requests, each of the plurality of ordered lists corresponding to a distinct transaction identifier; and
a memory interface configured to:
update a top list, the top list being an ordered list including entries from each of the plurality of ordered lists; and
forward the plurality of access requests to the memory in an order corresponding to the top list. 2. The memory control circuit of claim 1, further comprising a translation circuit configured to translate a virtual address component of each of the plurality of access requests to a corresponding physical address of the memory. 3. The memory control circuit of claim 2, wherein the translation circuit updates the plurality of access requests to include the corresponding physical address of the memory. 4. The memory control circuit of claim 3, wherein the memory interface is further configured to populate the top list based on an indication of which of the plurality of access requests have been updated with the corresponding physical address of the memory. 5. The memory control circuit of claim 1, wherein the memory interface is further configured to populate the top list with the entries from each of the plurality of ordered lists using a round-robin selection. 6. The memory control circuit of claim 1, wherein the memory interface is further configured to remove an entry from the top list upon forwarding a corresponding one of the plurality of access requests to the memory. 7. A method of controlling access to a memory, comprising:
receiving a plurality of access requests to access a memory from a plurality of devices; parsing each of the plurality of access requests to retrieve a respective transaction identifier; updating a plurality of ordered lists having entries corresponding to the plurality of access requests, each of the plurality of ordered lists corresponding to a distinct transaction identifier; updating a top list, the top list being an ordered list including entries from each of the plurality of ordered lists; and forwarding the plurality of access requests to the memory in an order corresponding to the top list. 8. The method of claim 7, further comprising translating a virtual address component of each of the plurality of access requests to a corresponding physical address of the memory. 9. The method of claim 8, further comprising updating the plurality of access requests to include the corresponding physical address of the memory. 10. The method of claim 9, further comprising populating the top list based on an indication of which of the plurality of access requests have been updated with the corresponding physical address of the memory. 11. The method of claim 7, further comprising populating the top list with the entries from each of the plurality of ordered lists using a round-robin selection. 12. The method of claim 7, further comprising removing an entry from the top list upon forwarding a corresponding one of the plurality of access requests to the memory. | An input/output bridge controls access to a memory by a number of devices and maintains an order of access requests under virtualization. In particular, the bridge manages and enforces order among multiple independent threads of requests to a memory. The bridge populates a number of ordered lists with received access requests based on a corresponding identifier of each access request. A top list is also maintained, where the top list is populated with access requests and a corresponding translated physical address. 
The bridge forwards access requests from the top list, maintaining the order of each of the independent threads.1. A memory control circuit comprising:
a device interface configured to:
receive a plurality of access requests to access a memory from a plurality of devices;
parse each of the plurality of access requests to retrieve a respective transaction identifier; and
update a plurality of ordered lists having entries corresponding to the plurality of access requests, each of the plurality of ordered lists corresponding to a distinct transaction identifier; and
a memory interface configured to:
update a top list, the top list being an ordered list including entries from each of the plurality of ordered lists; and
forward the plurality of access requests to the memory in an order corresponding to the top list. 2. The memory control circuit of claim 1, further comprising a translation circuit configured to translate a virtual address component of each of the plurality of access requests to a corresponding physical address of the memory. 3. The memory control circuit of claim 2, wherein the translation circuit updates the plurality of access requests to include the corresponding physical address of the memory. 4. The memory control circuit of claim 3, wherein the memory interface is further configured to populate the top list based on an indication of which of the plurality of access requests have been updated with the corresponding physical address of the memory. 5. The memory control circuit of claim 1, wherein the memory interface is further configured to populate the top list with the entries from each of the plurality of ordered lists using a round-robin selection. 6. The memory control circuit of claim 1, wherein the memory interface is further configured to remove an entry from the top list upon forwarding a corresponding one of the plurality of access requests to the memory. 7. A method of controlling access to a memory, comprising:
receiving a plurality of access requests to access a memory from a plurality of devices; parsing each of the plurality of access requests to retrieve a respective transaction identifier; updating a plurality of ordered lists having entries corresponding to the plurality of access requests, each of the plurality of ordered lists corresponding to a distinct transaction identifier; updating a top list, the top list being an ordered list including entries from each of the plurality of ordered lists; and forwarding the plurality of access requests to the memory in an order corresponding to the top list. 8. The method of claim 7, further comprising translating a virtual address component of each of the plurality of access requests to a corresponding physical address of the memory. 9. The method of claim 8, further comprising updating the plurality of access requests to include the corresponding physical address of the memory. 10. The method of claim 9, further comprising populating the top list based on an indication of which of the plurality of access requests have been updated with the corresponding physical address of the memory. 11. The method of claim 7, further comprising populating the top list with the entries from each of the plurality of ordered lists using a round-robin selection. 12. The method of claim 7, further comprising removing an entry from the top list upon forwarding a corresponding one of the plurality of access requests to the memory. | 2,100 |
5,928 | 5,928 | 14,764,006 | 2,119 | A solar powered device (LU 10 ) and methods for (FIGS. 8 and 9 ) controlling a power override function for the solar powered device (LU 10 ) are disclosed. The solar powered device (LU 10 ) includes a photo voltaic unit ( 1 ), a solar charger ( 2 ) coupled to the photo voltaic unit, an energy storage unit ( 3 ), a control engine ( 4 ) arranged to control the energy supply to a load ( 7 ), a communication interface ( 6 ), and a controller ( 5 ). The controller ( 5 ) is arranged to receive an override function signal via the communication interface ( 6 ). The override function signal requests a change related to an energy consumption of the load ( 7 ). The controller ( 5 ) is further arranged to determine if a current available stored energy amount in the energy storage unit ( 3 ) can provide enough energy for the change in the energy consumption of the load ( 7 ), and estimate if an amount of energy depleted due to the change in the energy consumption can be recovered by solar generation in at least one or more subsequent days after the amount is depleted. The controller changes the energy consumption of the load ( 7 ) if the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. | 1. A method to control a power override function for a solar powered device, said method comprising the steps of:
receiving an override function signal, where the override function signal requests a change related to an energy consumption of a load of the solar powered device; determining if a current available stored energy amount in the solar powered device can provide enough energy for the change in the energy consumption of the load; estimating if an amount of energy depleted due to the change in the energy consumption can be recovered by solar generation in at least one or more subsequent days after the amount is depleted, before a next expected use of the load; and changing the energy consumption of the load if the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. 2. (canceled) 3. The method according to claim 1, wherein the change in the energy consumption of the load is in accordance with an energy use preservation profile. 4. The method according to claim 1, further comprising the step of notifying a user that the override function signal is possible which means that the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. 5. (canceled) 6. The method according to claim 1, further comprising the step of returning the energy consumption of the load to the normal or previous level after a predetermined amount of time has passed after receiving the override function signal. 7. The method according to claim 1, wherein the solar powered device is an off grid lighting unit and the load is a light producing means. 8. The method according to claim 1, wherein the estimating step estimates a solar dependent energy budget that can be expected in the one or more subsequent days and decides how much of the current available stored energy amount can be depleted related to the override function signal. 9. A method for energy consumption control for a solar powered device, the method comprising the steps of:
creating a solar power supply model for the solar powered device; creating a power demand model for the solar powered device; refining the solar power supply model and the power demand model using augmented data; computing an energy balance model for a first and second period of load of the solar powered device using the solar power supply model and the power demand model; and determining, based on the energy balance model, if an override capability is possible that would increase energy consumption of the load in the first period and provide energy for the load in the second period. 10. The method according to claim 9, wherein the step of refining the power demand model includes one or more of the following techniques and algorithms: statistic averaging of duration of increased energy consumption, statistic averaging of local weather phenomena, accounting for parasitic loads of other components related to the solar powered device, anti-freeze operations related to the solar powered device, and/or backup capacity limits. 11. The method according to claim 9, wherein the step of refining the power supply model includes one or more of the following techniques and algorithms that use additional data related to required period in day cycles to restore backup capacity, solar line of sight obstructions, recorded local bad weather phenomena, Linke Turbidity data, local average daily and daytime temperatures, and past and recorded solar energy collector performance. 12. (canceled) 13. The method according to claim 9, wherein the step of determining if an override capability is possible is implemented as a decision report including various override durations, energy intensity levels or risk factors. 14. A solar powered device, comprising:
a photo voltaic unit; a solar charger coupled to the photo voltaic unit; an energy storage unit coupled to the solar charger arranged to store energy from the photo voltaic unit; a control engine arranged to control the energy supply to a load; a communication interface; and a controller coupled to the control engine and the communication interface, wherein the controller is arranged to receive an override function signal via the communication interface, wherein the override function signal requests a change related to an energy consumption of the load, the controller is further arranged to determine if a current available stored energy amount in the energy storage unit can provide enough energy for the change in the energy consumption of the load, and estimate if an amount of energy depleted due to the change in the energy consumption can be recovered by solar generation in at least one or more subsequent days after the amount is depleted, and change the energy consumption of the load if the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. 15. (canceled) 16. The solar powered device according to claim 14, wherein the change in the energy consumption of the load is in accordance with an energy use preservation profile. 17. The solar powered device according to claim 14, further comprising a user indicator that the override function signal is possible which means that the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. 18. (canceled) 19. The solar powered device according to claim 14, wherein the controller is further arranged to return the energy consumption of the load to the normal or previous level after a predetermined amount of time has passed after receiving the override function signal. 20. 
The solar powered device according to claim 14, wherein the solar powered device is an off grid lighting unit and the load is a light producing means. | A solar powered device (LU 10 ) and methods for (FIGS. 8 and 9 ) controlling a power override function for the solar powered device (LU 10 ) are disclosed. The solar powered device (LU 10 ) includes a photo voltaic unit ( 1 ), a solar charger ( 2 ) coupled to the photo voltaic unit, an energy storage unit ( 3 ), a control engine ( 4 ) arranged to control the energy supply to a load ( 7 ), a communication interface ( 6 ), and a controller ( 5 ). The controller ( 5 ) is arranged to receive an override function signal via the communication interface ( 6 ). The override function signal requests a change related to an energy consumption of the load ( 7 ). The controller ( 5 ) is further arranged to determine if a current available stored energy amount in the energy storage unit ( 3 ) can provide enough energy for the change in the energy consumption of the load ( 7 ), and estimate if an amount of energy depleted due to the change in the energy consumption can be recovered by solar generation in at least one or more subsequent days after the amount is depleted. The controller changes the energy consumption of the load ( 7 ) if the current available stored energy can provide enough energy and the amount of energy depleted can be recovered.1. A method to control a power override function for a solar powered device, said method comprising the steps of:
receiving an override function signal, where the override function signal requests a change related to an energy consumption of a load of the solar powered device; determining if a current available stored energy amount in the solar powered device can provide enough energy for the change in the energy consumption of the load; estimating if an amount of energy depleted due to the change in the energy consumption can be recovered by solar generation in at least one or more subsequent days after the amount is depleted, before a next expected use of the load; and changing the energy consumption of the load if the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. 2. (canceled) 3. The method according to claim 1, wherein the change in the energy consumption of the load is in accordance with an energy use preservation profile. 4. The method according to claim 1, further comprising the step of notifying a user that the override function signal is possible which means that the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. 5. (canceled) 6. The method according to claim 1, further comprising the step of returning the energy consumption of the load to the normal or previous level after a predetermined amount of time has passed after receiving the override function signal. 7. The method according to claim 1, wherein the solar powered device is an off grid lighting unit and the load is a light producing means. 8. The method according to claim 1, wherein the estimating step estimates a solar dependent energy budget that can be expected in the one or more subsequent days and decides how much of the current available stored energy amount can be depleted related to the override function signal. 9. A method for energy consumption control for a solar powered device, the method comprising the steps of:
creating a solar power supply model for the solar powered device; creating a power demand model for the solar powered device; refining the solar power supply model and the power demand model using augmented data; computing an energy balance model for a first and second period of load of the solar powered device using the solar power supply model and the power demand model; and determining, based on the energy balance model, if an override capability is possible that would increase energy consumption of the load in the first period and provide energy for the load in the second period. 10. The method according to claim 9, wherein the step of refining the power demand model includes one or more of the following techniques and algorithms: statistic averaging of duration of increased energy consumption, statistic averaging of local weather phenomena, accounting for parasitic loads of other components related to the solar powered device, anti-freeze operations related to the solar powered device, and/or backup capacity limits. 11. The method according to claim 9, wherein the step of refining the power supply model includes one or more of the following techniques and algorithms that use additional data related to required period in day cycles to restore backup capacity, solar line of sight obstructions, recorded local bad weather phenomena, Linke Turbidity data, local average daily and daytime temperatures, and past and recorded solar energy collector performance. 12. (canceled) 13. The method according to claim 9, wherein the step of determining if an override capability is possible is implemented as a decision report including various override durations, energy intensity levels or risk factors. 14. A solar powered device, comprising:
a photo voltaic unit; a solar charger coupled to the photo voltaic unit; an energy storage unit coupled to the solar charger arranged to store energy from the photo voltaic unit; a control engine arranged to control the energy supply to a load; a communication interface; and a controller coupled to the control engine and the communication interface, wherein the controller is arranged to receive an override function signal via the communication interface, wherein the override function signal requests a change related to an energy consumption of the load, the controller is further arranged to determine if a current available stored energy amount in the energy storage unit can provide enough energy for the change in the energy consumption of the load, and estimate if an amount of energy depleted due to the change in the energy consumption can be recovered by solar generation in at least one or more subsequent days after the amount is depleted, and change the energy consumption of the load if the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. 15. (canceled) 16. The solar powered device according to claim 14, wherein the change in the energy consumption of the load is in accordance with an energy use preservation profile. 17. The solar powered device according to claim 14, further comprising a user indicator that the override function signal is possible which means that the current available stored energy can provide enough energy and the amount of energy depleted can be recovered. 18. (canceled) 19. The solar powered device according to claim 14, wherein the controller is further arranged to return the energy consumption of the load to the normal or previous level after a predetermined amount of time has passed after receiving the override function signal. 20. 
The solar powered device according to claim 14, wherein the solar powered device is an off grid lighting unit and the load is a light producing means. | 2,100 |
5,929 | 5,929 | 15,222,996 | 2,124 | In an approach for providing a self-learning framework for performance analysis using content-oriented analysis, a processor initiates a performance analysis of a dump on a thread. A processor presents time information and an associated location of the time information. A processor analyzes the time information by registering the time information into a knowledge base to debug errors in a computer program. Subsequent to a query for dump information, a processor displays the analyzed time information, based on the performance analysis. | 1. A method for providing a self-learning framework for performance analysis using content-oriented analysis, the method comprising:
initiating, by one or more processors, a performance analysis of a dump on a thread; presenting, by one or more processors, time information and an associated location of the time information; analyzing, by one or more processors, the time information by registering the time information into a knowledge base to debug errors in a computer program; and subsequent to a query for dump information, displaying, by one or more processors, the analyzed time information, based on the performance analysis. 2. The method of claim 1, wherein displaying the analyzed time information comprises:
displaying, by one or more processors, an overview of running threads and length of time for each thread, a thread start time previously registered, and long run activities. 3. The method of claim 1, wherein displaying the analyzed time information comprises:
displaying, by one or more processors, elapsed run times for the thread; providing, by one or more processors, a time breakdown in a call stack; and identifying, by one or more processors, a performance bottleneck in the thread. 4. The method of claim 1, wherein presenting time information and associated location comprises:
presenting, by one or more processors, a call stack, a time attribute, and an actual elapsed run time, wherein the time attribute includes location, name, and object type. 5. The method of claim 1, wherein presenting time information and associated location comprises:
presenting, by one or more processors, the time information and location of origin of a variable, based on the variable being used in multiple locations of multiple threads, wherein the variable is a representation of a time attribute. 6. The method of claim 1, wherein displaying the analyzed time information comprises:
displaying, by one or more processors, the analyzed time information after a predetermined time period following the analysis of the time information. 7. The method of claim 1, wherein the knowledge base is an extensible markup language (XML) library. 8. A computer program product for providing a self-learning framework for performance analysis using content-oriented analysis, the computer program product comprising:
one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising: program instructions to initiate a performance analysis of a dump on a thread; program instructions to present time information and an associated location of the time information; program instructions to analyze the time information by registering the time information into a knowledge base to debug errors in a computer program; and subsequent to a query for dump information, program instructions to display the analyzed time information, based on the performance analysis. 9. The computer program product of claim 8, wherein program instructions to display the analyzed time information comprise:
program instructions to display an overview of running threads and length of time for each thread, a thread start time previously registered, and long run activities. 10. The computer program product of claim 8, wherein program instructions to display the analyzed time information comprise:
program instructions to display elapsed run times for the thread; program instructions to provide a time breakdown in a call stack; and program instructions to identify a performance bottleneck in the thread. 11. The computer program product of claim 8, wherein program instructions to present time information and associated location comprise:
program instructions to present a call stack, a time attribute, and an actual elapsed run time, wherein the time attribute includes location, name, and object type. 12. The computer program product of claim 8, wherein program instructions to present time information and associated location comprise:
program instructions to present the time information and location of origin of a variable, based on the variable being used in multiple locations of multiple threads, wherein the variable is a representation of a time attribute. 13. The computer program product of claim 8, wherein program instructions to display the analyzed time information comprise:
program instructions to display the analyzed time information after a predetermined time period following the analysis of the time information. 14. The computer program product of claim 8, wherein the knowledge base is an extensible markup language (XML) library. 15. A computer system for providing a self-learning framework for performance analysis using content-oriented analysis, the computer system comprising:
one or more computer processors, one or more computer readable storage media, and program instructions stored on the computer readable storage media for execution by at least one of the one or more processors, the program instructions comprising: program instructions to initiate a performance analysis of a dump on a thread; program instructions to present time information and an associated location of the time information; program instructions to analyze the time information by registering the time information into a knowledge base to debug errors in a computer program; and subsequent to a query for dump information, program instructions to display the analyzed time information, based on the performance analysis. 16. The computer system of claim 15, wherein program instructions to display the analyzed time information comprise:
program instructions to display an overview of running threads and length of time for each thread, a thread start time previously registered, and long run activities. 17. The computer system of claim 15, wherein program instructions to display the analyzed time information comprise:
program instructions to display elapsed run times for the thread; program instructions to provide a time breakdown in a call stack; and program instructions to identify a performance bottleneck in the thread. 18. The computer system of claim 15, wherein program instructions to present time information and associated location comprise:
program instructions to present a call stack, a time attribute, and an actual elapsed run time, wherein the time attribute includes location, name, and object type. 19. The computer system of claim 15, wherein program instructions to present time information and associated location comprise:
program instructions to present the time information and location of origin of a variable, based on the variable being used in multiple locations of multiple threads, wherein the variable is a representation of a time attribute. 20. The computer system of claim 15, wherein program instructions to display the analyzed time information comprise:
program instructions to display the analyzed time information after a predetermined time period following the analysis of the time information. | In an approach for providing a self-learning framework for performance analysis using content-oriented analysis, a processor initiates a performance analysis of a dump on a thread. A processor presents time information and an associated location of the time information. A processor analyzes the time information by registering the time information into a knowledge base to debug errors in a computer program. Subsequent to a query for dump information, a processor displays the analyzed time information, based on the performance analysis.1. A method for providing a self-learning framework for performance analysis using content-oriented analysis, the method comprising:
initiating, by one or more processors, a performance analysis of a dump on a thread; presenting, by one or more processors, time information and an associated location of the time information; analyzing, by one or more processors, the time information by registering the time information into a knowledge base to debug errors in a computer program; and subsequent to a query for dump information, displaying, by one or more processors, the analyzed time information, based on the performance analysis. 2. The method of claim 1, wherein displaying the analyzed time information comprises:
displaying, by one or more processors, an overview of running threads and length of time for each thread, a thread start time previously registered, and long run activities. 3. The method of claim 1, wherein displaying the analyzed time information comprises:
displaying, by one or more processors, elapsed run times for the thread; providing, by one or more processors, a time breakdown in a call stack; and identifying, by one or more processors, a performance bottleneck in the thread. 4. The method of claim 1, wherein presenting time information and associated location comprises:
presenting, by one or more processors, a call stack, a time attribute, and an actual elapsed run time, wherein the time attribute includes location, name, and object type. 5. The method of claim 1, wherein presenting time information and associated location comprises:
presenting, by one or more processors, the time information and location of origin of a variable, based on the variable being used in multiple locations of multiple threads, wherein the variable is a representation of a time attribute. 6. The method of claim 1, wherein displaying the analyzed time information comprises:
displaying, by one or more processors, the analyzed time information after a predetermined time period following the analysis of the time information. 7. The method of claim 1, wherein the knowledge base is an extensible markup language (XML) library. 8. A computer program product for providing a self-learning framework for performance analysis using content-oriented analysis, the computer program product comprising:
one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising: program instructions to initiate a performance analysis of a dump on a thread; program instructions to present time information and an associated location of the time information; program instructions to analyze the time information by registering the time information into a knowledge base to debug errors in a computer program; and subsequent to a query for dump information, program instructions to display the analyzed time information, based on the performance analysis. 9. The computer program product of claim 8, wherein program instructions to display the analyzed time information comprise:
program instructions to display an overview of running threads and length of time for each thread, a thread start time previously registered, and long run activities. 10. The computer program product of claim 8, wherein program instructions to display the analyzed time information comprise:
program instructions to display elapsed run times for the thread; program instructions to provide a time breakdown in a call stack; and program instructions to identify a performance bottleneck in the thread. 11. The computer program product of claim 8, wherein program instructions to present time information and associated location comprise:
program instructions to present a call stack, a time attribute, and an actual elapsed run time, wherein the time attribute includes location, name, and object type. 12. The computer program product of claim 8, wherein program instructions to present time information and associated location comprise:
program instructions to present the time information and location of origin of a variable, based on the variable being used in multiple locations of multiple threads, wherein the variable is a representation of a time attribute. 13. The computer program product of claim 8, wherein program instructions to display the analyzed time information comprise:
program instructions to display the analyzed time information after a predetermined time period following the analysis of the time information. 14. The computer program product of claim 8, wherein the knowledge base is an extensible markup language (XML) library. 15. A computer system for providing a self-learning framework for performance analysis using content-oriented analysis, the computer system comprising:
one or more computer processors, one or more computer readable storage media, and program instructions stored on the computer readable storage media for execution by at least one of the one or more processors, the program instructions comprising: program instructions to initiate a performance analysis of a dump on a thread; program instructions to present time information and an associated location of the time information; program instructions to analyze the time information by registering the time information into a knowledge base to debug errors in a computer program; and subsequent to a query for dump information, program instructions to display the analyzed time information, based on the performance analysis. 16. The computer system of claim 15, wherein program instructions to display the analyzed time information comprise:
program instructions to display an overview of running threads and length of time for each thread, a thread start time previously registered, and long run activities. 17. The computer system of claim 15, wherein program instructions to display the analyzed time information comprise:
program instructions to display elapsed run times for the thread; program instructions to provide a time breakdown in a call stack; and program instructions to identify a performance bottleneck in the thread. 18. The computer system of claim 15, wherein program instructions to present time information and associated location comprise:
program instructions to present a call stack, a time attribute, and an actual elapsed run time, wherein the time attribute includes location, name, and object type. 19. The computer system of claim 15, wherein program instructions to present time information and associated location comprise:
program instructions to present the time information and location of origin of a variable, based on the variable being used in multiple locations of multiple threads, wherein the variable is a representation of a time attribute. 20. The computer system of claim 15, wherein program instructions to display the analyzed time information comprise:
program instructions to display the analyzed time information after a predetermined time period following the analysis of the time information. | 2,100 |
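The dump-analysis flow claimed in the row above — register time information (thread, call-stack location, elapsed run time) into an XML knowledge base, then answer a dump query by displaying elapsed times and identifying a bottleneck — can be sketched in Python. Everything here (`KnowledgeBase`, `register`, `find_bottleneck`, the attribute names) is a hypothetical illustration, not code from the patent.

```python
import xml.etree.ElementTree as ET


class KnowledgeBase:
    """Registers time information into an XML tree, mirroring the claimed
    'extensible markup language (XML) library' knowledge base."""

    def __init__(self):
        self.root = ET.Element("knowledge-base")

    def register(self, thread, location, elapsed_ms):
        """Store one time attribute: thread name, call-stack location, run time."""
        ET.SubElement(self.root, "entry",
                      thread=thread, location=location,
                      elapsed_ms=str(elapsed_ms))

    def query(self, thread):
        """Return (location, elapsed_ms) pairs registered for a thread."""
        return [(e.get("location"), float(e.get("elapsed_ms")))
                for e in self.root.iter("entry")
                if e.get("thread") == thread]


def find_bottleneck(kb, thread):
    """Identify the call-stack location with the largest elapsed run time."""
    return max(kb.query(thread), key=lambda pair: pair[1], default=None)
```

A dump query for a thread then reduces to `kb.query(name)` for the time breakdown, followed by `find_bottleneck(kb, name)` for the slowest location.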
5,930 | 5,930 | 12,955,486 | 2,163 | A computer implemented method is described. Data is collected about a number of member entities that have online interaction with a group entity. A predefined state is assigned to a selected one of the member entities automatically, in response to applying a predefined rule to analyze some of the collected data. The rule is defined in part by the group entity. The method automatically determines whether or not online content is to be delivered to the selected member entity, based on the assigned state. In another embodiment, online content that is to be delivered to the selected member entity is automatically personalized for the selected member entity, again based on the assigned state. Other embodiments are also described and claimed. | 1. (canceled) 2. An article of manufacture comprising:
a machine-accessible medium having stored therein a viewer application that is associated with a user, wherein the viewer application is a client program that programs a machine to enable the user to view a fax message that is contained in a file, wherein the viewer application upon being launched in the machine determines whether an Internet connection exists in the machine and if so sends a resource locator that includes an identification of the user, to a server over the Internet, the programmed machine is to then display a splash screen of the viewer application that contains a window, wherein content of the window is determined by the server in response to receiving the resource locator. 3. The article of manufacture of claim 2 wherein the viewer application is from a merchant of a telecommunication message service that receives fax messages at an inbound telephone number that has been assigned to the user by the merchant, and wherein the user identification sent to the server comprises the user's inbound telephone number. 4. The article of manufacture of claim 2 wherein the resource locator sent by the viewer application comprises a request for advertisement content. 5. The article of manufacture of claim 2 wherein content of the splash screen window includes an advertisement that has been personalized or selected for the user by the server. 6. The article of manufacture of claim 2 wherein content of the splash screen window has been personalized or selected by the server for the user, based on the identification of the user received in the resource locator and based on predetermined content delivery rules stored in the server. 7. 
The article of manufacture of claim 2 wherein content of the splash screen window includes an advertisement that has been personalized or selected for the user, by the server a) receiving the user id that is in the resource locator, b) using the received user id to access a stored, unique state variable that is assigned to the user, wherein the state variable indicates a current customer life cycle state value of the user as a customer of a merchant, and c) on the basis of the current state value of the user, determining how to personalize or select the advertisement by applying a stored, predetermined content delivery rule to the current state value of the user. 8. The article of manufacture of claim 6 wherein the resource locator further comprises a content presentation method identifier that is recognized by the server as referring to a type of online content to be personalized. 9. The article of manufacture of claim 8 wherein the resource locator further comprises an online content distribution campaign identifier. 10. The article of manufacture of claim 2 wherein the machine-accessible medium contains further instructions that program the machine to obtain the resource locator, from the merchant, when the user downloads the viewer application to be installed in the machine for the first time. 11. The article of manufacture of claim 2 wherein the machine-accessible medium contains further instructions that program the machine to obtain the resource locator, from the merchant, when the user downloads the file containing the fax message, prior to launching the viewer application. | A computer implemented method is described. Data is collected about a number of member entities that have online interaction with a group entity. A predefined state is assigned to a selected one of the member entities automatically, in response to applying a predefined rule to analyze some of the collected data. The rule is defined in part by the group entity. 
The method automatically determines whether or not online content is to be delivered to the selected member entity, based on the assigned state. In another embodiment, online content that is to be delivered to the selected member entity is automatically personalized for the selected member entity, again based on the assigned state. Other embodiments are also described and claimed. 1. (canceled) 2. An article of manufacture comprising:
a machine-accessible medium having stored therein a viewer application that is associated with a user, wherein the viewer application is a client program that programs a machine to enable the user to view a fax message that is contained in a file, wherein the viewer application upon being launched in the machine determines whether an Internet connection exists in the machine and if so sends a resource locator that includes an identification of the user, to a server over the Internet, the programmed machine is to then display a splash screen of the viewer application that contains a window, wherein content of the window is determined by the server in response to receiving the resource locator. 3. The article of manufacture of claim 2 wherein the viewer application is from a merchant of a telecommunication message service that receives fax messages at an inbound telephone number that has been assigned to the user by the merchant, and wherein the user identification sent to the server comprises the user's inbound telephone number. 4. The article of manufacture of claim 2 wherein the resource locator sent by the viewer application comprises a request for advertisement content. 5. The article of manufacture of claim 2 wherein content of the splash screen window includes an advertisement that has been personalized or selected for the user by the server. 6. The article of manufacture of claim 2 wherein content of the splash screen window has been personalized or selected by the server for the user, based on the identification of the user received in the resource locator and based on predetermined content delivery rules stored in the server. 7. 
The article of manufacture of claim 2 wherein content of the splash screen window includes an advertisement that has been personalized or selected for the user, by the server a) receiving the user id that is in the resource locator, b) using the received user id to access a stored, unique state variable that is assigned to the user, wherein the state variable indicates a current customer life cycle state value of the user as a customer of a merchant, and c) on the basis of the current state value of the user, determining how to personalize or select the advertisement by applying a stored, predetermined content delivery rule to the current state value of the user. 8. The article of manufacture of claim 6 wherein the resource locator further comprises a content presentation method identifier that is recognized by the server as referring to a type of online content to be personalized. 9. The article of manufacture of claim 8 wherein the resource locator further comprises an online content distribution campaign identifier. 10. The article of manufacture of claim 2 wherein the machine-accessible medium contains further instructions that program the machine to obtain the resource locator, from the merchant, when the user downloads the viewer application to be installed in the machine for the first time. 11. The article of manufacture of claim 2 wherein the machine-accessible medium contains further instructions that program the machine to obtain the resource locator, from the merchant, when the user downloads the file containing the fax message, prior to launching the viewer application. | 2,100 |
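The server-side selection described in claims 6 and 7 of the row above — receive the user id carried in the resource locator, look up the user's stored customer life-cycle state, and apply a stored, predetermined content delivery rule to that state — can be sketched as follows. The state names, user ids, and rule table are invented for illustration; the patent does not specify them.

```python
# Hypothetical per-user state store: user id (inbound fax number)
# -> current customer life-cycle state value.
state_store = {
    "555-0100": "trial",
    "555-0101": "active",
    "555-0102": "lapsed",
}

# Stored, predetermined content delivery rules keyed by state.
delivery_rules = {
    "trial": "ad-upgrade-to-paid",
    "active": "ad-refer-a-friend",
    "lapsed": "ad-win-back-offer",
}


def select_ad(user_id, default_ad="ad-generic"):
    """Select splash-screen window content for the user id received
    in the resource locator, based on the user's assigned state."""
    state = state_store.get(user_id)
    return delivery_rules.get(state, default_ad)
```

For example, `select_ad("555-0102")` resolves the "lapsed" state to the win-back advertisement, while an unknown user id falls back to the generic content.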
5,931 | 5,931 | 14,133,987 | 2,174 | A view is created that includes nodes in a serial sequence of nodes. Hierarchical tree data is received. It can be determined whether a node is a start node of a serial sequence of nodes. Responsive to a determination that the node is a start node of a serial sequence of nodes a collapse control of the start node in the serial sequence of nodes is changed to a collapsed state. The computer-implemented process counts intervening nodes between the start node and an end node of the serial sequence of nodes to form a count, hides the intervening nodes to form hidden intervening nodes, creates a segment using the start node with collapse control and the end node using the count in place of the hidden intervening nodes and creates the view using the segments. | 1. A method comprising:
receiving hierarchical tree data; determining whether a node is a start node of a serial sequence of nodes; responsive to a determination that the node is a start node of a serial sequence of nodes, changing a collapse control of the start node in the serial sequence of nodes to a collapsed state; counting intervening nodes between the start node and an end node of the serial sequence of nodes to form a count; hiding the intervening nodes to form hidden intervening nodes; creating a segment using the start node with collapse control and the end node using the count in place of the hidden intervening nodes; and creating a hierarchical tree view using the segment, wherein one or more actions of the method are performed by a computing device comprising a processor executing program instructions stored in a non-transitory storage medium. 2. The method of claim 1 wherein receiving hierarchical tree data further comprises:
determining whether the hierarchical tree data is to be presented in a summary view state;
responsive to a determination that the hierarchical tree data is to be presented in a summary view state, determining whether a node with multiple children exists;
responsive to a determination that a node with multiple children exists, setting the node to expanded state;
responsive to a determination that a node with multiple children does not exist, determining whether a node with a single child and no grandchild exists;
responsive to a determination that a node with a single child and no grandchild exists, setting the node to expanded state; and
determining whether more nodes exist. 3. The method of claim 2 further comprising:
responsive to a determination that the hierarchical tree data is not to be presented in a summary view, determining whether a node with multiple children exists;
responsive to a determination that a node with multiple children exists, performing a normal collapse operation;
responsive to a determination that a node with multiple children does not exist, determining whether a node with a single child and no grandchild exists;
responsive to a determination that a node with a single child and no grandchild exists, performing a normal collapse operation; and
determining whether more nodes exist. 4. The method of claim 1 wherein creating a view using the segment further comprises: determining whether a list mode exists. 5. The method of claim 4 wherein responsive to a determination that a list mode exists further comprises:
identifying a node of interest;
identifying a number of entries before and after the node of interest; and
identifying a path including the node of interest. 6. The method of claim 5 wherein identifying a path including the node of interest further comprises:
identifying additional node information; and
creating the view with the additional node information. 7. The method of claim 1 further comprising: displaying the view. 8. A non-transitory computer program product comprising:
a computer recordable-type media containing computer executable program code stored thereon, the computer executable program code comprising: computer executable program code for receiving hierarchical tree data; computer executable program code for determining whether a node is a start node of a serial sequence of nodes; computer executable program code responsive to a determination that the node is a start node of a serial sequence of nodes for changing a collapse control of the start node in the serial sequence of nodes to a collapsed state; computer executable program code for counting intervening nodes between the start node and an end node of the serial sequence of nodes to form a count; computer executable program code for hiding the intervening nodes to form hidden intervening nodes; computer executable program code for creating a segment using the start node with collapse control and the end node using the count in place of the hidden intervening nodes; and computer executable program code for creating a hierarchical tree view using the segment. 9. The non-transitory computer program product of claim 8 wherein computer executable program code for receiving hierarchical tree data further comprises:
computer executable program code for determining whether the hierarchical tree data is to be presented in a summary view state;
computer executable program code responsive to a determination that the hierarchical tree data is to be presented in the summary view state, for determining whether a node with multiple children exists;
computer executable program code responsive to a determination that a node with multiple children exists, for setting the node to expanded state;
computer executable program code responsive to a determination that a node with multiple children does not exist, for determining whether a node with a single child and no grandchild exists;
computer executable program code responsive to a determination that a node with a single child and no grandchild exists, for setting the node to expanded state; and
computer executable program code for determining whether more nodes exist. 10. The non-transitory computer program product of claim 9 wherein computer executable program code responsive to a determination that the hierarchical tree data is not to be presented in a summary view state, further comprises:
computer executable program code for determining whether a node with multiple children exists;
computer executable program code responsive to a determination that a node with multiple children exists, for performing a normal collapse operation;
computer executable program code responsive to a determination that a node with multiple children does not exist, for determining whether a node with a single child and no grandchild exists;
computer executable program code responsive to a determination that a node with a single child and no grandchild exists, for performing a normal collapse operation; and
computer executable program code for determining whether more nodes exist. 11. The non-transitory computer program product of claim 8 wherein computer executable program code for creating the view using the segment further comprises:
computer executable program code for determining whether a list mode exists. 12. The non-transitory computer program product of claim 11 wherein computer executable program code responsive to a determination that the list mode exists further comprises:
computer executable program code for identifying a node of interest;
computer executable program code for identifying a number of entries before and after the node of interest; and
computer executable program code for identifying a path including the node of interest. 13. The non-transitory computer program product of claim 12 wherein computer executable program code for identifying a path including the node of interest further comprises:
computer executable program code for identifying additional node information; and
computer executable program code for creating the view with the additional node information. 14. The non-transitory computer program product of claim 8 further comprising:
computer executable program code for displaying the view. 15. An apparatus comprising:
a communications fabric; a memory connected to the communications fabric, wherein the memory contains computer executable program code; a communications unit connected to the communications fabric; an input/output unit connected to the communications fabric; a display connected to the communications fabric; and a processor unit connected to the communications fabric, wherein the processor unit executes the computer executable program code to direct the apparatus to: receive hierarchical tree data; determine whether a node is a start node of a serial sequence of nodes; responsive to a determination that the node is a start node of a serial sequence of nodes, change a collapse control of the start node in the serial sequence of nodes to a collapsed state; count intervening nodes between the start node and an end node of the serial sequence of nodes to form a count; hide the intervening nodes to form hidden intervening nodes; create a segment using the start node with collapse control and the end node using the count in place of the hidden intervening nodes; and create a hierarchical tree view using the segment. 16. The apparatus of claim 15 wherein the processor unit executes the computer executable program code to direct the apparatus to receive hierarchical tree data further directs the apparatus to:
determine whether the hierarchical tree data is to be presented in a summary view state;
responsive to a determination that the hierarchical tree data is to be presented in a summary view state, determine whether a node with multiple children exists;
responsive to a determination that a node with multiple children exists, set the node to expanded state;
responsive to a determination that a node with multiple children does not exist, determine whether a node with a single child and no grandchild exists;
responsive to a determination that a node with a single child and no grandchild exists, set the node to expanded state; and
determine whether more nodes exist. 17. The apparatus of claim 16 wherein the processor unit executes the computer executable program code responsive to a determination that the hierarchical tree data is not to be presented in a summary view state, to direct the apparatus to:
determine whether a node with multiple children exists;
responsive to a determination that a node with multiple children exists, perform a normal collapse operation;
responsive to a determination that a node with multiple children does not exist, determine whether a node with a single child and no grandchild exists;
responsive to a determination that a node with a single child and no grandchild exists, perform a normal collapse operation; and
determine whether more nodes exist. 18. The apparatus of claim 15 wherein the processor unit executes the computer executable program code to create a view using the segment further directs the apparatus to:
determine whether a list mode exists. 19. The apparatus of claim 18 wherein the processor unit executes the computer executable program code responsive to a determination that the list mode exists further directs the apparatus to:
identify a node of interest;
identify a number of entries before and after the node of interest; and
identify a path including the node of interest. 20. The apparatus of claim 19 wherein the processor unit executes the computer executable program code to identify a path including the node of interest further directs the apparatus to:
identify additional node information; and
create the view with the additional node information. | A view is created that includes nodes in a serial sequence of nodes. Hierarchical tree data is received. It can be determined whether a node is a start node of a serial sequence of nodes. Responsive to a determination that the node is a start node of a serial sequence of nodes a collapse control of the start node in the serial sequence of nodes is changed to a collapsed state. The computer-implemented process counts intervening nodes between the start node and an end node of the serial sequence of nodes to form a count, hides the intervening nodes to form hidden intervening nodes, creates a segment using the start node with collapse control and the end node using the count in place of the hidden intervening nodes and creates the view using the segments. 1. A method comprising:
receiving hierarchical tree data; determining whether a node is a start node of a serial sequence of nodes; responsive to a determination that the node is a start node of a serial sequence of nodes, changing a collapse control of the start node in the serial sequence of nodes to a collapsed state; counting intervening nodes between the start node and an end node of the serial sequence of nodes to form a count; hiding the intervening nodes to form hidden intervening nodes; creating a segment using the start node with collapse control and the end node using the count in place of the hidden intervening nodes; and creating a hierarchical tree view using the segment, wherein one or more actions of the method are performed by a computing device comprising a processor executing program instructions stored in a non-transitory storage medium. 2. The method of claim 1 wherein receiving hierarchical tree data further comprises:
determining whether the hierarchical tree data is to be presented in a summary view state;
responsive to a determination that the hierarchical tree data is to be presented in a summary view state, determining whether a node with multiple children exists;
responsive to a determination that a node with multiple children exists, setting the node to expanded state;
responsive to a determination that a node with multiple children does not exist, determining whether a node with a single child and no grandchild exists;
responsive to a determination that a node with a single child and no grandchild exists, setting the node to expanded state; and
determining whether more nodes exist. 3. The method of claim 2 further comprising:
responsive to a determination that the hierarchical tree data is not to be presented in a summary view, determining whether a node with multiple children exists;
responsive to a determination that a node with multiple children exists, performing a normal collapse operation;
responsive to a determination that a node with multiple children does not exist, determining whether a node with a single child and no grandchild exists;
responsive to a determination that a node with a single child and no grandchild exists, performing a normal collapse operation; and
determining whether more nodes exist. 4. The method of claim 1 wherein creating a view using the segment further comprises: determining whether a list mode exists. 5. The method of claim 4 wherein responsive to a determination that a list mode exists further comprises:
identifying a node of interest;
identifying a number of entries before and after the node of interest; and
identifying a path including the node of interest. 6. The method of claim 5 wherein identifying a path including the node of interest further comprises:
identifying additional node information; and
creating the view with the additional node information. 7. The method of claim 1 further comprising: displaying the view. 8. A non-transitory computer program product comprising:
a computer recordable-type media containing computer executable program code stored thereon, the computer executable program code comprising: computer executable program code for receiving hierarchical tree data; computer executable program code for determining whether a node is a start node of a serial sequence of nodes; computer executable program code responsive to a determination that the node is a start node of a serial sequence of nodes for changing a collapse control of the start node in the serial sequence of nodes to a collapsed state; computer executable program code for counting intervening nodes between the start node and an end node of the serial sequence of nodes to form a count; computer executable program code for hiding the intervening nodes to form hidden intervening nodes; computer executable program code for creating a segment using the start node with collapse control and the end node using the count in place of the hidden intervening nodes; and computer executable program code for creating a hierarchical tree view using the segment. 9. The non-transitory computer program product of claim 8 wherein computer executable program code for receiving hierarchical tree data further comprises:
computer executable program code for determining whether the hierarchical tree data is to be presented in a summary view state;
computer executable program code responsive to a determination that the hierarchical tree data is to be presented in the summary view state, for determining whether a node with multiple children exists;
computer executable program code responsive to a determination that a node with multiple children exists, for setting the node to expanded state;
computer executable program code responsive to a determination that a node with multiple children does not exist, for determining whether a node with a single child and no grandchild exists;
computer executable program code responsive to a determination that a node with a single child and no grandchild exists, for setting the node to expanded state; and
computer executable program code for determining whether more nodes exist. 10. The non-transitory computer program product of claim 9 wherein computer executable program code responsive to a determination that the hierarchical tree data is not to be presented in a summary view state, further comprises:
computer executable program code for determining whether a node with multiple children exists;
computer executable program code responsive to a determination that a node with multiple children exists, for performing a normal collapse operation;
computer executable program code responsive to a determination that a node with multiple children does not exist, for determining whether a node with a single child and no grandchild exists;
computer executable program code responsive to a determination that a node with a single child and no grandchild exists, for performing a normal collapse operation; and
computer executable program code for determining whether more nodes exist. 11. The non-transitory computer program product of claim 8 wherein computer executable program code for creating the view using the segment further comprises:
computer executable program code for determining whether a list mode exists. 12. The non-transitory computer program product of claim 11 wherein computer executable program code responsive to a determination that the list mode exists further comprises:
computer executable program code for identifying a node of interest;
computer executable program code for identifying a number of entries before and after the node of interest; and
computer executable program code for identifying a path including the node of interest. 13. The non-transitory computer program product of claim 12 wherein computer executable program code for identifying a path including the node of interest further comprises:
computer executable program code for identifying additional node information; and
computer executable program code for creating the view with the additional node information. 14. The non-transitory computer program product of claim 8 further comprising:
computer executable program code for displaying the view. 15. An apparatus comprising:
a communications fabric; a memory connected to the communications fabric, wherein the memory contains computer executable program code; a communications unit connected to the communications fabric; an input/output unit connected to the communications fabric; a display connected to the communications fabric; and a processor unit connected to the communications fabric, wherein the processor unit executes the computer executable program code to direct the apparatus to: receive hierarchical tree data; determine whether a node is a start node of a serial sequence of nodes; responsive to a determination that the node is a start node of a serial sequence of nodes, change a collapse control of the start node in the serial sequence of nodes to a collapsed state; count intervening nodes between the start node and an end node of the serial sequence of nodes to form a count; hide the intervening nodes to form hidden intervening nodes; create a segment using the start node with collapse control and the end node using the count in place of the hidden intervening nodes; and create a hierarchical tree view using the segment. 16. The apparatus of claim 15 wherein the processor unit executes the computer executable program code to direct the apparatus to receive hierarchical tree data further directs the apparatus to:
determine whether the hierarchical tree data is to be presented in a summary view state;
responsive to a determination that the hierarchical tree data is to be presented in a summary view state, determine whether a node with multiple children exists;
responsive to a determination that a node with multiple children exists, set the node to expanded state;
responsive to a determination that a node with multiple children does not exist, determine whether a node with a single child and no grandchild exists;
responsive to a determination that a node with a single child and no grandchild exists, set the node to expanded state; and
determine whether more nodes exist. 17. The apparatus of claim 16 wherein the processor unit executes the computer executable program code responsive to a determination that the hierarchical tree data is not to be presented in a summary view state, to direct the apparatus to:
determine whether a node with multiple children exists;
responsive to a determination that a node with multiple children exists, perform a normal collapse operation;
responsive to a determination that a node with multiple children does not exist, determine whether a node with a single child and no grandchild exists;
responsive to a determination that a node with a single child and no grandchild exists, perform a normal collapse operation; and
determine whether more nodes exist. 18. The apparatus of claim 15 wherein the processor unit executes the computer executable program code to create a view using the segment further directs the apparatus to:
determine whether a list mode exists. 19. The apparatus of claim 18 wherein the processor unit executes the computer executable program code responsive to a determination that the list mode exists further directs the apparatus to:
identify a node of interest;
identify a number of entries before and after the node of interest; and
identify a path including the node of interest. 20. The apparatus of claim 19 wherein the processor unit executes the computer executable program code to identify a path including the node of interest further directs the apparatus to:
identify additional node information; and
create the view with the additional node information. | 2,100 |
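As an illustration only, the collapse operation recited in claim 15 above (change the start node's collapse control to a collapsed state, count and hide the intervening nodes, and create a segment from the start node, the count, and the end node) might be sketched as follows. The `Node` class and the assumption that a serial sequence is a chain of single-child nodes are illustrative, not part of the claims.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)
    collapsed: bool = False   # the node's collapse control
    hidden: bool = False      # marks a hidden intervening node

def is_serial_start(node):
    # Assumption: a start node of a serial sequence has exactly one
    # child, which itself has exactly one child.
    return len(node.children) == 1 and len(node.children[0].children) == 1

def collapse_serial_sequence(start):
    """Change the start node's collapse control to a collapsed state,
    count and hide the intervening nodes, and return a segment of the
    form (start node, count, end node)."""
    start.collapsed = True
    count = 0
    node = start.children[0]
    while len(node.children) == 1:  # walk the single-child chain
        node.hidden = True
        count += 1
        node = node.children[0]
    return (start, count, node)  # node is the end of the sequence

# Usage: a chain a -> b -> c -> d collapses to the segment (a, 2, d).
d = Node("d")
c = Node("c", [d])
b = Node("b", [c])
a = Node("a", [b])
if is_serial_start(a):
    segment = collapse_serial_sequence(a)
```

The returned segment can then stand in for the hidden intervening nodes when the hierarchical tree view is created.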
5,932 | 5,932 | 14,215,843 | 2,177 | Systems and methods of the present technology generally provide computer implemented assistance for data summary, including organizing and generating a summary of data selected from source documents. In accordance with the present technology, a user identifies a subset of information from one or more source documents, assigns an identifier to the user-identified information, and may add custom information. The user can repeat the identification and assignment steps using multiple source documents, as many times as desired. The system then analyzes and prioritizes the user-identified information and any custom information, and generates a formatted summary. | 1. A data summary system comprising:
at least one user device including at least one user device processor and at least one user device non-transitory computer readable medium; at least one system server including at least one system processor and at least one system non-transitory computer readable medium; and a communication link that operatively connects the at least one user device and the at least one system server; program instructions stored on at least one of the at least one system non-transitory computer readable medium and the at least one user device non-transitory computer readable medium, the program instructions being executable by at least one of the at least one system processor and the at least one user device processor that, when executed, cause the data summary system to perform steps of: receiving a first dataset, the first dataset including user-identified information, a unique identifier associated with the user-identified information, and a user identifier; storing the user-identified information and the unique identifier; and generating a summary including the user-identified information. 2. The system of claim 1, wherein the step of storing includes storing the user-identified information and the unique identifier in a user profile based on the user identifier. 3. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform a step of:
linking the source document and the user-identified information. 4. The system of claim 1, wherein the user selects the unique identifier associated with the user-identified information from a plurality of predefined unique identifiers. 5. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform steps of:
receiving custom information input by a user under a unique identifier; and storing the custom information in the user profile. 6. The system of claim 1, wherein the step of generating the summary includes prioritizing the custom information relative to the user-identified information. 7. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform a step of:
providing the generated summary to the at least one user device. 8. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform a step of:
aggregating multiple summaries. 9. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform a step of:
generating a split screen display including a source document and a summary generated by the system. 10. A method implemented by a data summary system comprising at least one user device, at least one system server, and a communication link that operatively connects the at least one user device and the at least one system server, the method comprising steps of:
the at least one system server receiving a first dataset from the at least one user device via the communication link, the first dataset including user-identified information, a unique identifier associated with the user-identified information, and a user identifier; the at least one system server storing the user-identified information and the unique identifier in a user profile based on the user identifier; and the at least one system server generating a summary including the user-identified information. 11. The method of claim 10, wherein the user-identified information is selected by a user from a source document. 12. The method of claim 10, wherein the method further comprises a step of:
the system server linking the source document and the user-identified information. 13. The method of claim 10, wherein the user selects the unique identifier associated with the user-identified information from a plurality of predefined unique identifiers. 14. The method of claim 10, wherein the method further comprises a step of:
the system server receiving custom information input by a user under a unique identifier; and the system server storing the custom information in the user profile. 15. The method of claim 10, wherein the step of generating the summary includes prioritizing the custom information above the user-identified information. 16. The method of claim 10, wherein the method further comprises a step of:
the system server providing the generated summary to the at least one user device. 17. The method of claim 10, wherein the method further comprises a step of:
the system server aggregating multiple summaries. 18. The method of claim 10, wherein the method further comprises a step of:
the system server generating a split screen display including a source document and a summary generated by the system. 19. A method implemented by a data summary system comprising at least one user device, at least one system server, and a communication link that operatively connects the at least one user device and the at least one system server, the method comprising steps of:
the at least one user device storing a first dataset, the first dataset including user-identified information, a unique identifier associated with the user-identified information, and a user identifier; the at least one user device generating a summary including the user-identified information and custom information; and the at least one user device synchronizing information on the at least one system server when the communications link is available.

Systems and methods of the present technology generally provide computer implemented assistance for data summary, including organizing and generating a summary of data selected from source documents. In accordance with the present technology, a user identifies a subset of information from one or more source documents, assigns an identifier to the user-identified information, and may add custom information. The user can repeat the identification and assignment steps using multiple source documents, as many times as desired. The system then analyzes and prioritizes the user-identified information and any custom information, and generates a formatted summary. 1. A data summary system comprising:
at least one user device including at least one user device processor and at least one user device non-transitory computer readable medium; at least one system server including at least one system processor and at least one system non-transitory computer readable medium; and a communication link that operatively connects the at least one user device and the at least one system server; program instructions stored on at least one of the at least one system non-transitory computer readable medium and the at least one user device non-transitory computer readable medium, the program instructions being executable by at least one of the at least one system processor and the at least one user device processor that, when executed, cause the data summary system to perform steps of: receiving a first dataset, the first dataset including user-identified information, a unique identifier associated with the user-identified information, and a user identifier; storing the user-identified information and the unique identifier; and generating a summary including the user-identified information. 2. The system of claim 1, wherein the step of storing includes storing the user-identified information and the unique identifier in a user profile based on the user identifier. 3. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform a step of:
linking the source document and the user-identified information. 4. The system of claim 1, wherein the user selects the unique identifier associated with the user-identified information from a plurality of predefined unique identifiers. 5. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform steps of:
receiving custom information input by a user under a unique identifier; and storing the custom information in the user profile. 6. The system of claim 1, wherein the step of generating the summary includes prioritizing the custom information relative to the user-identified information. 7. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform a step of:
providing the generated summary to the at least one user device. 8. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform a step of:
aggregating multiple summaries. 9. The system of claim 1, wherein the program instructions, when executed, further cause the data summary system to perform a step of:
generating a split screen display including a source document and a summary generated by the system. 10. A method implemented by a data summary system comprising at least one user device, at least one system server, and a communication link that operatively connects the at least one user device and the at least one system server, the method comprising steps of:
the at least one system server receiving a first dataset from the at least one user device via the communication link, the first dataset including user-identified information, a unique identifier associated with the user-identified information, and a user identifier; the at least one system server storing the user-identified information and the unique identifier in a user profile based on the user identifier; and the at least one system server generating a summary including the user-identified information. 11. The method of claim 10, wherein the user-identified information is selected by a user from a source document. 12. The method of claim 10, wherein the method further comprises a step of:
the system server linking the source document and the user-identified information. 13. The method of claim 10, wherein the user selects the unique identifier associated with the user-identified information from a plurality of predefined unique identifiers. 14. The method of claim 10, wherein the method further comprises a step of:
the system server receiving custom information input by a user under a unique identifier; and the system server storing the custom information in the user profile. 15. The method of claim 10, wherein the step of generating the summary includes prioritizing the custom information above the user-identified information. 16. The method of claim 10, wherein the method further comprises a step of:
the system server providing the generated summary to the at least one user device. 17. The method of claim 10, wherein the method further comprises a step of:
the system server aggregating multiple summaries. 18. The method of claim 10, wherein the method further comprises a step of:
the system server generating a split screen display including a source document and a summary generated by the system. 19. A method implemented by a data summary system comprising at least one user device, at least one system server, and a communication link that operatively connects the at least one user device and the at least one system server, the method comprising steps of:
the at least one user device storing a first dataset, the first dataset including user-identified information, a unique identifier associated with the user-identified information, and a user identifier; the at least one user device generating a summary including the user-identified information and custom information; and the at least one user device synchronizing information on the at least one system server when the communications link is available. | 2,100 |
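A minimal sketch of the receive, store, and generate steps of claims 1 and 10 above. The in-memory profile dictionary and the rule that custom information sorts ahead of user-identified information (claim 15) are illustrative assumptions, not the claimed implementation.

```python
# In-memory stand-in for the system server's profile store (assumption).
profiles = {}

def receive_dataset(dataset):
    """Store the user-identified information and its unique identifier
    in a user profile keyed by the user identifier (claims 1 and 2)."""
    profile = profiles.setdefault(dataset["user_id"], {"entries": []})
    profile["entries"].append({
        "uid": dataset["uid"],           # unique identifier
        "info": dataset["info"],         # user-identified information
        "custom": dataset.get("custom", False),
    })

def generate_summary(user_id):
    """Generate a summary, prioritizing custom information above the
    user-identified information (claim 15)."""
    entries = profiles[user_id]["entries"]
    ordered = sorted(entries, key=lambda e: not e["custom"])
    return [(e["uid"], e["info"]) for e in ordered]

receive_dataset({"user_id": "u1", "uid": "background", "info": "selected text"})
receive_dataset({"user_id": "u1", "uid": "notes", "info": "my note", "custom": True})
summary = generate_summary("u1")  # custom entry first
```

Because Python's `sorted` is stable, entries with the same priority keep their insertion order, so the summary stays deterministic.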
5,933 | 5,933 | 14,108,128 | 2,176 | An asset merging system generates a merge file containing assets from multiple party systems. The asset merging system receives the assets and determines whether any of the assets has changed versus previously received versions of the assets. If any of the received assets has changed, the asset merging system generates a merge file containing at least the most recent version of the changed assets. The asset merging system then communicates the merge file to a content delivery network (CDN) for serving to users requesting content that uses the assets in the merge file. | 1. A computer implemented method comprising:
receiving third party web page assets provided by one or more third parties located at one or more third party web addresses, the third party assets configured to provide functionality to a web page operated by a first party distinct from the one or more third parties; determining whether one or more of the received third party assets differs from a previous version of that third party asset; and responsive to one or more of the received assets being different from the previous version of that third party asset:
merging the received third party web page assets into a merge file, and
storing, at a web address other than the third parties web addresses, the merge file, the web address accessible to clients requesting the web page operated by the first party. 2. The computer implemented method of claim 1, wherein the third party web page assets include Javascript computer code. 3. The computer implemented method of claim 1, wherein the third party web page assets include cascading style sheet computer code. 4. The computer implemented method of claim 1, wherein determining whether one or more of the received third party assets differs from a previous version of that third party asset comprises:
determining whether a hash value for the received third party asset is equal to a hash value of the previous version of the third party asset. 5. The computer implemented method of claim 1 further comprising:
sending requests for third party assets to the plurality of third party sources periodically in time. 6. The computer implemented method of claim 5 further comprising:
responsive to a request for a third party asset timing out, resending the request one or more times. 7. The computer implemented method of claim 6 further comprising:
responsive to a request for a specific third party asset timing out a threshold number of times, using a previously received version of the third party asset as the current version of the third party asset. 8. The computer implemented method of claim 5 further comprising:
responsive to the request for a specific third party asset timing out a threshold number of times, increasing the amount of time until a next request for the third party asset is sent. 9. A computer implemented method comprising:
receiving a plurality of third party web page assets associated with a first geographic location, and a plurality of third party web page assets associated with a second geographic location, the third party web page assets provided by one or more third parties located at one or more third party web addresses, the third party assets configured to provide functionality to a web page operated by a first party distinct from the one or more third parties; merging the plurality of third party web page assets associated with the first geographic location into a first merge file; merging the plurality of third party web page assets associated with the second geographic location into a second merge file; storing the first merge file at a first web address, the first web address associated with the first geographic location and different than the third parties web addresses; and storing the second merge file at a second web address, the second web address associated with the second geographic location and different than the third parties web addresses. 10. The computer implemented method of claim 9 wherein the third party web page assets include at least one of Javascript and cascading style sheet computer code. 11. The computer implemented method of claim 9, wherein determining whether one or more of the received third party assets differs from a previous version of that third party asset comprises:
determining whether a hash value for the received third party asset is equal to a hash value of the previous version of the third party asset. 12. The computer implemented method of claim 9 further comprising:
sending requests for third party assets to the plurality of third party sources periodically in time. 13. The computer implemented method of claim 12 further comprising:
responsive to a request for a third party asset timing out, resending the request one or more times. 14. The computer implemented method of claim 13 further comprising:
responsive to a request for a specific third party asset timing out a threshold number of times, using a previously received version of the third party asset as the current version of the third party asset. 15. The computer implemented method of claim 12 further comprising:
responsive to the request for a specific third party asset timing out a threshold number of times, increasing the amount of time until the next request for the third party asset is sent. 16. A system comprising:
an assets collection module configured to receive third party web page assets provided by one or more third parties located at one or more third party web addresses, the third party assets configured to provide functionality to a web page operated by a first party distinct from the one or more third parties; an assets merging module configured to:
determine whether one or more of the third party assets received by the assets collection module differs from a previous version of that third party asset, and
merge the received third party web page assets into a merge file; and
an upload module configured to store, at a web address other than the one of the third parties web addresses, the merge file, the web address accessible to clients requesting the web page operated by the first party. 17. The system of claim 16, wherein the third party web page assets include at least one of a Javascript and a cascading style sheet computer code. 18. The system of claim 16 wherein the assets collection module is further configured to:
send requests for third party assets to the plurality of third party sources periodically in time;
responsive to a request for a third party asset timing out, resend the request one or more times; and
responsive to a request for a specific third party asset timing out a threshold number of times, use a previously received version of the third party asset as the current version of the third party asset. 19. The system of claim 16, wherein the received third party assets include a plurality of assets associated with a first geographic location and a plurality of assets associated with a second geographic location, and wherein merging the assets into a consolidated file of third party assets comprises:
merging the third party assets associated with the first geographic location into a first merge file; and merging the third party assets associated with the second geographic location into a second merge file. 20. The system of claim 19, wherein storing the merge file comprises:
storing the first merge file at a web address associated with the first geographic location; and storing the second merge file at a web address associated with the second geographic location.

An asset merging system generates a merge file containing assets from multiple party systems. The asset merging system receives the assets and determines whether any of the assets has changed versus previously received versions of the assets. If any of the received assets has changed, the asset merging system generates a merge file containing at least the most recent version of the changed assets. The asset merging system then communicates the merge file to a content delivery network (CDN) for serving to users requesting content that uses the assets in the merge file. 1. A computer implemented method comprising:
receiving third party web page assets provided by one or more third parties located at one or more third party web addresses, the third party assets configured to provide functionality to a web page operated by a first party distinct from the one or more third parties; determining whether one or more of the received third party assets differs from a previous version of that third party asset; and responsive to one or more of the received assets being different from the previous version of that third party asset:
merging the received third party web page assets into a merge file, and
storing, at a web address other than the third parties web addresses, the merge file, the web address accessible to clients requesting the web page operated by the first party. 2. The computer implemented method of claim 1, wherein the third party web page assets include Javascript computer code. 3. The computer implemented method of claim 1, wherein the third party web page assets include cascading style sheet computer code. 4. The computer implemented method of claim 1, wherein determining whether one or more of the received third party assets differs from a previous version of that third party asset comprises:
determining whether a hash value for the received third party asset is equal to a hash value of the previous version of the third party asset. 5. The computer implemented method of claim 1 further comprising:
sending requests for third party assets to the plurality of third party sources periodically in time. 6. The computer implemented method of claim 5 further comprising:
responsive to a request for a third party asset timing out, resending the request one or more times. 7. The computer implemented method of claim 6 further comprising:
responsive to a request for a specific third party asset timing out a threshold number of times, using a previously received version of the third party asset as the current version of the third party asset. 8. The computer implemented method of claim 5 further comprising:
responsive to the request for a specific third party asset timing out a threshold number of times, increasing the amount of time until a next request for the third party asset is sent. 9. A computer implemented method comprising:
receiving a plurality of third party web page assets associated with a first geographic location, and a plurality of third party web page assets associated with a second geographic location, the third party web page assets provided by one or more third parties located at one or more third party web addresses, the third party assets configured to provide functionality to a web page operated by a first party distinct from the one or more third parties; merging the plurality of third party web page assets associated with the first geographic location into a first merge file; merging the plurality of third party web page assets associated with the second geographic location into a second merge file; storing the first merge file at a first web address, the first web address associated with the first geographic location and different than the third parties web addresses; and storing the second merge file at a second web address, the second web address associated with the second geographic location and different than the third parties web address. 10. The computer implemented method of claim 9 wherein the third party web page assets including at least one of Javascript and cascading style sheet computer code. 11. The computer implemented method of claim 9, wherein determining whether one or more of the received third party assets differs from a previous version of that third party asset comprises:
determining whether a hash value for the received third party asset is equal to a hash value of the previous version of the third party asset. 12. The computer implemented method of claim 9 further comprising:
sending requests for third party assets to the plurality of third party sources periodically in time. 13. The computer implemented method of claim 12 further comprising:
responsive to a request for a third party asset timing out, resending the request one or more times. 14. The computer implemented method of claim 13 further comprising:
responsive to a request for a specific third party asset timing out a threshold number of times, using a previously received version of the third party asset as the current version of the third party asset. 15. The computer implemented method of claim 12 further comprising:
responsive to the request for a specific third party asset timing out a threshold number of times, increasing the amount of time until the next request for the third party asset is sent. 16. A system comprising:
an assets collection module configured to receive third party web page assets provided by one or more third parties located at one or more third party web addresses, the third party assets configured to provide functionality to a web page operated by a first party distinct from the one or more third parties; an assets merging module configured to:
determine whether one or more of the third party assets received by the assets collection module differs from a previous version of that third party asset, and
merge the received third party web page assets into a merge file; and
an upload module configured to store, at a web address other than the one of the third parties web addresses, the merge file, the web address accessible to clients requesting the web page operated by the first party. 17. The system of claim 16, wherein the third party web page assets include at least one of a Javascript and a cascading style sheet computer code. 18. The system of claim 16 wherein the assets collection module is further configured to:
send requests for third party assets to the plurality of third party sources periodically in time;
responsive to a request for a third party asset timing out, resend the request one or more times; and
responsive to a request for a specific third party asset timing out a threshold number of times, use a previously received version of the third party asset as the current version of the third party asset. 19. The system of claim 16, wherein the received third party assets include a plurality of assets associated with a first geographic location and a plurality of assets associated with a second geographic location, and wherein merging the assets into a consolidated file of third party assets comprises:
merging the third party assets associated with the first geographic location into a first merge file; and merging the third party assets associated with the second geographic location into a second merge file. 20. The system of claim 19, wherein storing the merge file comprises:
storing first merge file at a web address associated with the first geographic location; and storing the second merge file at a web address associated with the second geographic location. | 2,100 |
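The hash comparison of claims 4 and 11, the merge step, and the timeout handling of claims 5 through 8 above might be sketched as follows. The choice of SHA-256, the function names, and the doubling back-off interval are assumptions for illustration only.

```python
import hashlib

previous_hashes = {}  # last-seen hash per third party asset address (assumption)

def asset_changed(url, body):
    """Claims 4 and 11: compare a hash of the received asset against
    the hash of the previously received version."""
    digest = hashlib.sha256(body.encode()).hexdigest()
    changed = previous_hashes.get(url) != digest
    previous_hashes[url] = digest
    return changed

def merge_assets(assets):
    """Concatenate the received third party web page assets into one
    merge file, in the order they were received."""
    return "\n".join(assets.values())

def fetch_with_backoff(url, fetch, retries=3, base_interval=60):
    """Claims 6 through 8: resend a timed-out request up to a threshold
    number of times; past the threshold, fall back to the previously
    received version and increase the interval until the next request."""
    interval = base_interval
    for _ in range(retries):
        try:
            return fetch(url), interval
        except TimeoutError:
            continue  # resend the request
    interval *= 2  # back off before the next periodic request
    return None, interval  # caller reuses the previous version of the asset
```

A scheduler would call `fetch_with_backoff` periodically for each third party address, pass changed bodies through `asset_changed`, and upload the result of `merge_assets` only when at least one asset differs.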
5,934 | 5,934 | 14,958,050 | 2,137 | An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory is provided. The operation method includes erasing memory cells of the nonvolatile memory using the memory controller and prohibiting an erase of the erased memory cells for a critical time using the memory controller. | 1. An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory, the operation method comprising:
erasing memory cells of the nonvolatile memory using the memory controller; and prohibiting an erase of the erased memory cells for a critical time using the memory controller. 2. The operation method of claim 1, wherein the prohibiting the erase of the erased memory cells for the critical time includes:
setting at least some memory cells among the erased memory cells to store valid data; managing a table to indicate valid data is stored in the at least some memory cells; and releasing the at least some memory cells after the critical time has elapsed. 3. The operation method of claim 2, wherein the memory controller is configured to select the memory cells as an erase target when valid data is not stored in the memory cells. 4. The operation method of claim 1, wherein the prohibiting the erase of the erased memory cells for the critical time includes:
collecting information of memory blocks being erased among memory blocks of the nonvolatile memory; and periodically registering the information collected in a slot of an interval table together with an initial count according to a period, wherein each of the erased memory blocks is virtually set to store valid data. 5. The operation method of claim 4, wherein the prohibiting the erase of the erased memory cells for the critical time further includes:
periodically reducing counts corresponding to the slots of the interval table according to the period. 6. The operation method of claim 5, wherein the prohibiting the erase of the erased memory cells for the critical time further includes:
periodically releasing information of a slot having a count which reaches a threshold value among the slots of the interval table according to the period. 7. The operation method of claim 5, wherein the prohibiting the erase of the erased memory cells for the critical time further includes:
periodically releasing virtual settings of memory blocks corresponding to a slot having a count which reaches a threshold value among the slots of the interval table according to the period. 8. The operation method of claim 4, wherein the prohibiting the erase of the erased memory cells for the critical time further includes mapping at least a part of physical addresses of each of the erased memory blocks to a logical address of an out-of-range area of a logical address of the storage device. 9. The operation method of claim 4, further comprising:
storing the interval table in the nonvolatile memory before power-off. 10. The operation method of claim 9, further comprising:
reading the interval table from the nonvolatile memory when power is turned on. 11. An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory, the operation method comprising:
programming memory cells of the nonvolatile memory using the memory controller; and prohibiting an erase of the programmed memory cells for a critical time using the memory controller. 12. The operation method of claim 11, wherein the prohibiting the erase of the programmed memory cells for the critical time includes:
setting at least some memory cells among the programmed memory cells to store valid data; managing a table to indicate valid data is stored in the at least some memory cells; and releasing the setting of at least some memory cells after the critical time has elapsed. 13. The operation method of claim 11, wherein the prohibiting the erase of the programmed memory cells for the critical time includes:
collecting information of memory blocks being programmed among memory blocks of the nonvolatile memory; and periodically registering the information being collected in a slot of an interval table together with an initial count according to a period, wherein each of the programmed memory blocks is virtually set to store valid data. 14. The operation method of claim 13, wherein the collecting the information of memory blocks being programmed includes:
collecting information of memory blocks in which last memory cells are programmed according to a program order of each memory block. 15. The operation method of claim 13, wherein the collecting the information of memory blocks being programmed includes collecting information of memory blocks in which first memory cells are programmed according to a program order of each memory block. 16. An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory, the nonvolatile memory including a plurality of memory cells, the operation method comprising:
erasing the memory cells in one of a plurality of erase operation units of the nonvolatile memory using the memory controller; and at least one of: excluding the memory cells in the one of the plurality of erase operation units as an available erase target for a critical period of time using the memory controller, the critical period of time being based on a time elapsed since the memory cells in the one of the plurality of erase operation units were last erased, and programming at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for a critical length of time using the memory controller, the critical length of time being based on a time elapsed since the at least some of the memory cells in the one of the plurality of erase operation units were last programmed. 17. The operation method of claim 16, wherein
the memory cells in the nonvolatile memory are organized into a plurality of blocks, the memory blocks each include a plurality of physical pages, the at least one of excluding the memory cells in the one of the plurality of erase operation units as the available erase target and programming at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for the critical length of time includes,
managing a table using the memory controller that maps logical page addresses to the physical pages of the plurality of memory blocks,
mapping a virtual page to at least one of the physical pages included in the one of a plurality of erase operation units of the nonvolatile memory after the erasing the one of a plurality of erase operation units of the nonvolatile memory,
removing the virtual page mapping to the at least one of the physical pages included in the one of the plurality of erase operation units if one of the critical period of time and the critical length of time has elapsed since the mapping the virtual page, and
prohibiting the at least one of the plurality of erase operation units from being erased using the memory controller if any one of the physical pages included in the at least one of the plurality of erase operation units is mapped to the virtual page. 18. The operation method of claim 16, wherein the at least one of excluding the memory cells in the one of the plurality of erase operation units as the available erase target for the critical period of time using the memory controller and programming the at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for the critical length of time using the memory controller is the excluding the memory cells in the one of the plurality of erase operation units as the available erase target for the critical period of time using the memory controller. 19. The operation method of claim 16, wherein the at least one of excluding the memory cells in the one of the plurality of erase operation units as the available erase target for the critical period of time using the memory controller and the programming the at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for the critical length of time using the memory controller is the programming the at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for the critical length of time using the memory controller. 20. The operation method of claim 16, wherein
the memory cells in the nonvolatile memory are organized into a plurality of blocks, each one of the plurality of blocks includes a plurality of strings, and each one of the strings includes a number of the memory cells stacked on top of each other in a vertical direction between a ground selection transistor and a string selection transistor. | An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory is provided. The operation method includes erasing memory cells of the nonvolatile memory using the memory controller and prohibiting an erase of the erased memory cells for a critical time using the memory controller. 1. An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory, the operation method comprising:
erasing memory cells of the nonvolatile memory using the memory controller; and prohibiting an erase of the erased memory cells for a critical time using the memory controller. 2. The operation method of claim 1, wherein the prohibiting the erase of the erased memory cells for the critical time includes:
setting at least some memory cells among the erased memory cells to store valid data; managing a table to indicate valid data is stored in the at least some memory cells; and releasing the at least some memory cells after the critical time has elapsed. 3. The operation method of claim 2, wherein the memory controller is configured to select the memory cells as an erase target when valid data is not stored in the memory cells. 4. The operation method of claim 1, wherein the prohibiting the erase of the erased memory cells for the critical time includes:
collecting information of memory blocks being erased among memory blocks of the nonvolatile memory; and periodically registering the information collected in a slot of an interval table together with an initial count according to a period, wherein each of the erased memory blocks is virtually set to store valid data. 5. The operation method of claim 4, wherein the prohibiting the erase of the erased memory cells for the critical time further includes:
periodically reducing counts corresponding to the slots of the interval table according to the period. 6. The operation method of claim 5, wherein the prohibiting the erase of the erased memory cells for the critical time further includes:
periodically releasing information of a slot having a count which reaches a threshold value among the slots of the interval table according to the period. 7. The operation method of claim 5, wherein the prohibiting the erase of the erased memory cells for the critical time further includes:
periodically releasing virtual settings of memory blocks corresponding to a slot having a count which reaches a threshold value among the slots of the interval table according to the period. 8. The operation method of claim 4, wherein the prohibiting the erase of the erased memory cells for the critical time further includes mapping at least a part of physical addresses of each of the erased memory blocks to a logical address of an out-of-range area of a logical address of the storage device. 9. The operation method of claim 4, further comprising:
storing the interval table in the nonvolatile memory before power-off. 10. The operation method of claim 9, further comprising:
reading the interval table from the nonvolatile memory when power is turned on. 11. An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory, the operation method comprising:
programming memory cells of the nonvolatile memory using the memory controller; and prohibiting an erase of the programmed memory cells for a critical time using the memory controller. 12. The operation method of claim 11, wherein the prohibiting the erase of the programmed memory cells for the critical time includes:
setting at least some memory cells among the programmed memory cells to store valid data; managing a table to indicate valid data is stored in the at least some memory cells; and releasing the setting of at least some memory cells after the critical time has elapsed. 13. The operation method of claim 11, wherein the prohibiting the erase of the programmed memory cells for the critical time includes:
collecting information of memory blocks being programmed among memory blocks of the nonvolatile memory; and periodically registering the information being collected in a slot of an interval table together with an initial count according to a period, wherein each of the programmed memory blocks is virtually set to store valid data. 14. The operation method of claim 13, wherein the collecting the information of memory blocks being programmed includes:
collecting information of memory blocks in which last memory cells are programmed according to a program order of each memory block. 15. The operation method of claim 13, wherein the collecting the information of memory blocks being programmed includes collecting information of memory blocks in which first memory cells are programmed according to a program order of each memory block. 16. An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory, the nonvolatile memory including a plurality of memory cells, the operation method comprising:
erasing the memory cells in one of a plurality of erase operation units of the nonvolatile memory using the memory controller; and at least one of: excluding the memory cells in the one of the plurality of erase operation units as an available erase target for a critical period of time using the memory controller, the critical period of time being based on a time elapsed since the memory cells in the one of the plurality of erase operation units were last erased, and programming at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for a critical length of time using the memory controller, the critical length of time being based on a time elapsed since the at least some of the memory cells in the one of the plurality of erase operation units were last programmed. 17. The operation method of claim 16, wherein
the memory cells in the nonvolatile memory are organized into a plurality of blocks, the memory blocks each include a plurality of physical pages, the at least one of excluding the memory cells in the one of the plurality of erase operation units as the available erase target and programming at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for the critical length of time includes,
managing a table using the memory controller that maps logical page addresses to the physical pages of the plurality of memory blocks,
mapping a virtual page to at least one of the physical pages included in the one of a plurality of erase operation units of the nonvolatile memory after the erasing the one of a plurality of erase operation units of the nonvolatile memory,
removing the virtual page mapping to the at least one of the physical pages included in the one of the plurality of erase operation units if one of the critical period of time and the critical length of time has elapsed since the mapping the virtual page, and
prohibiting the at least one of the plurality of erase operation units from being erased using the memory controller if any one of the physical pages included in the at least one of the plurality of erase operation units is mapped to the virtual page. 18. The operation method of claim 16, wherein the at least one of excluding the memory cells in the one of the plurality of erase operation units as the available erase target for the critical period of time using the memory controller and programming the at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for the critical length of time using the memory controller is the excluding the memory cells in the one of the plurality of erase operation units as the available erase target for the critical period of time using the memory controller. 19. The operation method of claim 16, wherein the at least one of excluding the memory cells in the one of the plurality of erase operation units as the available erase target for the critical period of time using the memory controller and the programming the at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for the critical length of time using the memory controller is the programming the at least some of the memory cells in the one of the plurality of erase operation units and inhibiting the memory cells in the one of the plurality of erase operation units from being erased for the critical length of time using the memory controller. 20. The operation method of claim 16, wherein
the memory cells in the nonvolatile memory are organized into a plurality of blocks, each one of the plurality of blocks includes a plurality of strings, and each one of the strings includes a number of the memory cells stacked on top of each other in a vertical direction between a ground selection transistor and a string selection transistor. | 2,100 |
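The interval-table scheme recited in claims 4 through 7 above (periodically register erased blocks in a slot with an initial count, virtually mark them as holding valid data, decrement the counts each period, and release the blocks in any slot whose count expires) can be modeled roughly as follows. This is an illustrative Python sketch only, not the patented implementation; the class name `IntervalTable`, the list-of-slots layout, and the `CRITICAL_COUNT` threshold are all assumptions made for the example.

```python
from dataclasses import dataclass, field

CRITICAL_COUNT = 3  # hypothetical: number of periods a block stays erase-prohibited


@dataclass
class IntervalTable:
    # each slot is [set of recently erased block ids, remaining count] (claim 4)
    slots: list = field(default_factory=list)
    # blocks virtually set to store valid data while protected
    protected: set = field(default_factory=set)

    def register(self, erased_blocks):
        """Register newly erased blocks in a slot with an initial count (claim 4)."""
        blocks = set(erased_blocks)
        self.slots.append([blocks, CRITICAL_COUNT])
        self.protected |= blocks

    def tick(self):
        """Per period: reduce each slot's count; release expired slots (claims 5-7)."""
        for slot in self.slots:
            slot[1] -= 1
        for slot in [s for s in self.slots if s[1] <= 0]:
            self.protected -= slot[0]  # release the virtual 'valid data' setting
            self.slots.remove(slot)

    def erasable(self, block):
        """A block is a valid erase target only when not protected (claim 3)."""
        return block not in self.protected
```

In this model the controller would call `tick()` once per period and consult `erasable()` before selecting a block as an erase target.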
5,935 | 5,935 | 14,915,294 | 2,173 | In one example of the disclosure, a menu and a plurality of menu elements included within the menu are identified within a software application. A set of menu traversal tracking measures are performed with respect to a target element from the plurality. The set of measures includes, responsive to identifying a user menu traversal action that is not a selection of the target element, incrementing the value of a counter. The set includes, responsive to identifying a user menu traversal action that is a selection of the target element, recording the value of the counter in association with data indicative of the target element. A recommendation to modify a first element among the plurality is generated in consideration of the recorded value. | 1. A memory resource storing instructions that when executed cause a processing resource to implement a system for recommending application menu modifications, the instructions comprising:
a menu module, to identify, within a software application, a menu and a plurality of menu elements included within the menu; a tracking module, to perform a set of menu traversal tracking measures with respect to a target element from the plurality, including
responsive to identifying a user menu traversal action that is not a selection of the target element, incrementing the value of a counter, and
responsive to identifying a user menu traversal action that is a selection of the target element, recording the value of the counter in association with data indicative of the target element;
a recommendation module, to generate, in consideration of the recorded value, a recommendation to modify a first element among the plurality. 2. The memory resource of claim 1, wherein the menu is a graphical user interface menu, and the plurality of menu elements are graphical user interface menu elements. 3. The memory resource of claim 1, wherein the modification is to relocate the first element to a new location within the menu. 4. The memory resource of claim 1,
wherein the tracking module includes instructions to perform the set of traversal tracking measures successive times, with each of the plurality of elements at some point designated as the target element such that each of the plurality of elements will have an associated recorded value; and wherein the recommendation is generated in consideration of recorded values of the counter associated with the plurality of elements. 5. The memory resource of claim 1, wherein the application is a web application. 6. The memory resource of claim 1, wherein the set of tracking measures includes setting the counter to a start value responsive to detecting repetitive selection of a same element from the plurality over a specific period of time. 7. The memory resource of claim 1, wherein the menu and the plurality of menu elements are to appear in a first page display, and wherein the set of tracking measures includes setting the counter to a start value responsive to detecting a user action to cause loading of a second page display. 8. The memory resource of claim 1, wherein the set of tracking measures includes setting the counter to a start value after identification of a user menu traversal action that is a selection of the target element. 9. The memory resource of claim 1, wherein the tracking module includes instructions to perform the set of measures for each of a plurality of traversal runs, and wherein generation of the recommendation is in consideration of an average of recorded values of the counter. 10. A message delivery system to recommend application menu modifications, comprising:
a menu engine to identify, within a software application display,
a menu, and
a plurality of menu elements included within the menu;
a tracking engine to perform a set of measures for tracking of user menu traversals, the measures including
designating an element from the plurality as a target element;
responsive to identifying a user menu traversal action that does not include activation of the target element, incrementing the value of a counter;
responsive to identifying a user menu traversal action that includes an activation of the target element, recording in a database the value of the counter in association with an identifier for the target element;
a recommendation engine, to generate, according to a rule that includes as a factor a recorded value of the counter, a recommendation to modify a first element among the plurality. 11. The system of claim 10, wherein the tracking engine is to set the counter to a start value upon detection of subsequent activations of a same element from the plurality over a specific period of time. 12. The system of claim 10, wherein the rule includes a comparison of the recorded value against an expected counter value for the target element. 13. The system of claim 10, wherein the recommendation includes a recommendation to relocate the first element to a new position within the menu. 14. The system of claim 13,
wherein the tracking engine is to perform the set of measures for each of a plurality of traversal runs; wherein the set of measures includes, for each run, recording a user traversal path to reach the target element; wherein the tracking engine is to calculate a number of occurrences of each path during the plurality of runs; wherein the rule includes, for each of the target elements, a comparison of an average of recorded values of the counter against an expected counter value; and wherein the first element has an average recorded value of the counter that exceeds an expected counter value, and the rule includes identifying as the new position a menu position within a path that ends with the first element and has a highest number of occurrences. 15. A method for recommending modifications for application menus, comprising:
identifying, within a software application display,
a graphical user interface menu, and
a plurality of menu elements included within the menu;
performing a set of menu traversal tracking events for each of the elements of the plurality, the set including
designating a target element;
setting a value of a counter to a start value, and
identifying a user menu traversal action;
if the traversal action does not include selection of the target element, the value of the counter is incremented;
if the traversal action does include selection of the target element, the value of the counter is recorded in a database in association with an identifier for the target element;
generating, in consideration of the values of the counter recorded for each of the plurality of elements and expected counter values for the elements, a recommendation to relocate the first element to a new location within the menu. | In one example of the disclosure, a menu and a plurality of menu elements included within the menu are identified within a software application. A set of menu traversal tracking measures are performed with respect to a target element from the plurality. The set of measures includes, responsive to identifying a user menu traversal action that is not a selection of the target element, incrementing the value of a counter. The set includes, responsive to identifying a user menu traversal action that is a selection of the target element, recording the value of the counter in association with data indicative of the target element. A recommendation to modify a first element among the plurality is generated in consideration of the recorded value. 1. A memory resource storing instructions that when executed cause a processing resource to implement a system for recommending application menu modifications, the instructions comprising:
a menu module, to identify, within a software application, a menu and a plurality of menu elements included within the menu; a tracking module, to perform a set of menu traversal tracking measures with respect to a target element from the plurality, including
responsive to identifying a user menu traversal action that is not a selection of the target element, incrementing the value of a counter, and
responsive to identifying a user menu traversal action that is a selection of the target element, recording the value of the counter in association with data indicative of the target element;
a recommendation module, to generate, in consideration of the recorded value, a recommendation to modify a first element among the plurality. 2. The memory resource of claim 1, wherein the menu is a graphical user interface menu, and the plurality of menu elements are graphical user interface menu elements. 3. The memory resource of claim 1, wherein the modification is to relocate the first element to a new location within the menu. 4. The memory resource of claim 1,
wherein the tracking module includes instructions to perform the set of traversal tracking measures successive times, with each of the plurality of elements at some point designated as the target element such that each of the plurality of elements will have an associated recorded value; and wherein the recommendation is generated in consideration of recorded values of the counter associated with the plurality of elements. 5. The memory resource of claim 1, wherein the application is a web application. 6. The memory resource of claim 1, wherein the set of tracking measures includes setting the counter to a start value responsive to detecting repetitive selection of a same element from the plurality over a specific period of time. 7. The memory resource of claim 1, wherein the menu and the plurality of menu elements are to appear in a first page display, and wherein the set of tracking measures includes setting the counter to a start value responsive to detecting a user action to cause loading of a second page display. 8. The memory resource of claim 1, wherein the set of tracking measures includes setting the counter to a start value after identification of a user menu traversal action that is a selection of the target element. 9. The memory resource of claim 1, wherein the tracking module includes instructions to perform the set of measures for each of a plurality of traversal runs, and wherein generation of the recommendation is in consideration of an average of recorded values of the counter. 10. A message delivery system to recommend application menu modifications, comprising:
a menu engine to identify, within a software application display,
a menu, and
a plurality of menu elements included within the menu;
a tracking engine to perform a set of measures for tracking of user menu traversals, the measures including
designating an element from the plurality as a target element;
responsive to identifying a user menu traversal action that does not include activation of the target element, incrementing the value of a counter;
responsive to identifying a user menu traversal action that includes an activation of the target element, recording in a database the value of the counter in association with an identifier for the target element;
a recommendation engine, to generate, according to a rule that includes as a factor a recorded value of the counter, a recommendation to modify a first element among the plurality. 11. The system of claim 10, wherein the tracking engine is to set the counter to a start value upon detection of subsequent activations of a same element from the plurality over a specific period of time. 12. The system of claim 10, wherein the rule includes a comparison of the recorded value against an expected counter value for the target element. 13. The system of claim 10, wherein the recommendation includes a recommendation to relocate the first element to a new position within the menu. 14. The system of claim 13,
wherein the tracking engine is to perform the set of measures for each of a plurality of traversal runs; wherein the set of measures includes, for each run, recording a user traversal path to reach the target element; wherein the tracking engine is to calculate a number of occurrences of each path during the plurality of runs; wherein the rule includes, for each of the target elements, a comparison of an average of recorded values of the counter against an expected counter value; and wherein the first element has an average recorded value of the counter that exceeds an expected counter value, and the rule includes identifying as the new position a menu position within a path that ends with the first element and has a highest number of occurrences. 15. A method for recommending modifications for application menus, comprising:
identifying, within a software application display,
a graphical user interface menu, and
a plurality of menu elements included within the menu;
performing a set of menu traversal tracking events for each of the elements of the plurality, the set including
designating a target element;
setting a value of a counter to a start value, and
identifying a user menu traversal action;
if the traversal action does not include selection of the target element, the value of the counter is incremented;
if the traversal action does include selection of the target element, the value of the counter is recorded in a database in association with an identifier for the target element;
generating, in consideration of the values of the counter recorded for each of the plurality of elements and expected counter values for the elements, a recommendation to relocate the first element to a new location within the menu. | 2,100 |
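The counter-based traversal tracking recited in claims 1, 10, and 15 above (increment a counter on each traversal action that is not the target, record the counter on target selection, then compare averages against expected values to recommend relocations) can be sketched as follows. This is a hypothetical Python model for illustration only; `MenuTraversalTracker`, `recommend`, and the reset-after-selection behavior (one reading of claim 8) are assumptions, not the claimed implementation.

```python
class MenuTraversalTracker:
    """Hypothetical model of the traversal-tracking measures: count non-target
    actions, record the count on target selection (claims 1 and 15)."""

    def __init__(self, target):
        self.target = target
        self.counter = 0    # start value
        self.recorded = {}  # element identifier -> recorded counter values

    def on_action(self, selected_element):
        if selected_element != self.target:
            # traversal action that is not a selection of the target
            self.counter += 1
        else:
            # record the value in association with the target's identifier
            self.recorded.setdefault(self.target, []).append(self.counter)
            self.counter = 0  # reset to the start value after selection (claim 8)

    def average(self, element):
        vals = self.recorded.get(element, [])
        return sum(vals) / len(vals) if vals else None


def recommend(tracker, expected):
    """Recommend relocating elements whose average recorded count exceeds
    the expected counter value for that element (claims 9, 12, 14)."""
    return [e for e in tracker.recorded
            if tracker.average(e) > expected.get(e, 0)]
```

A run of `["file", "export", "save"]` against target `"save"` records a count of 2; repeated runs let the averages drive the relocation recommendation.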
5,936 | 5,936 | 15,114,459 | 2,139 | A method for identifying memory regions that contain remapped memory locations is described. The method includes determining, from a number of tracking bits on a memory module controller, whether a memory region comprises a remapped memory location. The method further includes performing a remapped memory operation on the memory region based on the determination, wherein memory within a computing device is divided into a number of memory regions including the memory region. | 1. A method for identifying memory regions that contain remapped memory locations, comprising:
determining, from a number of tracking bits on a memory module controller, whether a memory region comprises a remapped memory location; and performing a remapped memory operation on the memory region based on the determination, wherein memory within a computing device is divided into a number of memory regions including the memory region. 2. The method of claim 1, in which the remapped memory location was remapped using a fine grained remapping with error checking and correcting and embedded pointers (FREE-p) mapping operation. 3. The method of claim 1, in which determining whether a memory region comprises a remapped memory location comprises determining a value of a number of tracking bits that correspond to the memory region. 4. The method of claim 1, in which a remapped memory location comprises a number of failed memory bits. 5. The method of claim 1, further comprising performing a write operation when the memory region does not comprise a remapped memory location. 6. The method of claim 1, in which performing a remapped memory operation comprises performing a read operation and performing a write operation when the memory region comprises a remapped memory location. 7. A system for identifying memory regions that contain remapped memory locations, comprising:
a processor; memory communicatively coupled to the processor; and a memory module controller, the memory module controller comprising:
a divide module to divide memory into a number of memory regions, a memory region comprising a number of memory locations;
a track module to identify memory regions that comprise a remapped memory location based on a number of tracking bits located in the memory module controller, and
an operation module to perform a remapped write operation to a memory region identified as containing the remapped memory location. 8. The system of claim 7, in which the remapped memory location was remapped using a fine grained remapping with error checking and correcting and embedded pointers (FREE-p) mapping function. 9. The system of claim 7, further comprising a read module to read data from the remapped memory location. 10. The system of claim 7, in which the divide module divides the memory into a multi-dimensional data array structure. 11. The system of claim 7, in which the track module uses a Bloom filter to identify memory regions that comprise the remapped memory location. 12. The system of claim 7, in which the divide module divides memory based on a hashing function. 13. The system of claim 7, in which the track module is located within volatile memory of a computing device. 14. A computer program product for identifying memory regions that contain remapped memory locations, the computer program product comprising:
a computer readable storage medium comprising computer usable program code embodied therewith, the computer usable program code comprising:
computer usable program code to, when executed by a processor, divide memory within a computing device into a number of memory regions;
computer usable program code to, when executed by a processor, identify a number of remapped memory locations within a memory region based on a number of tracking bits within a memory module controller; and
computer usable program code to, when executed by a processor, perform a remapped memory operation to the number of remapped memory locations, in which the remapped memory operation is based on a remapping function. 15. The computer program product of claim 14, in which the remapping function is a fine grained remapping with error checking and correcting and embedded pointers (FREE-p) mapping function. | A method for identifying memory regions that contain remapped memory locations is described. The method includes determining, from a number of tracking bits on a memory module controller, whether a memory region comprises a remapped memory location. The method further includes performing a remapped memory operation on the memory region based on the determination, wherein memory within a computing device is divided into a number of memory regions including the memory region.1. A method for identifying memory regions that contain remapped memory locations, comprising:
determining, from a number of tracking bits on a memory module controller, whether a memory region comprises a remapped memory location; and performing a remapped memory operation on the memory region based on the determination, wherein memory within a computing device is divided into a number of memory regions including the memory region. 2. The method of claim 1, in which the remapped memory location was remapped using a fine grained remapping with error checking and correcting and embedded pointers (FREE-p) mapping operation. 3. The method of claim 1, in which determining whether a memory region comprises a remapped memory location comprises determining a value of a number of tracking bits that correspond to the memory region. 4. The method of claim 1, in which a remapped memory location comprises a number of failed memory bits. 5. The method of claim 1, further comprising performing a write operation when the memory region does not comprise a remapped memory location. 6. The method of claim 1, in which performing a remapped memory operation comprises performing a read operation and performing a write operation when the memory region comprises a remapped memory location. 7. A system for identifying memory regions that contain remapped memory locations, comprising:
a processor; memory communicatively coupled to the processor; and a memory module controller, the memory module controller comprising:
a divide module to divide memory into a number of memory regions, a memory region comprising a number of memory locations;
a track module to identify memory regions that comprise a remapped memory location based on a number of tracking bits located in the memory module controller; and
an operation module to perform a remapped write operation to a memory region identified as containing the remapped memory location. 8. The system of claim 7, in which the remapped memory location was remapped using a fine grained remapping with error checking and correcting and embedded pointers (FREE-p) mapping function. 9. The system of claim 7, further comprising a read module to read data from the remapped memory location. 10. The system of claim 7, in which the divide module divides the memory into a multi-dimensional data array structure. 11. The system of claim 7, in which the track module uses a Bloom filter to identify memory regions that comprise the remapped memory location. 12. The system of claim 7, in which the divide module divides memory based on a hashing function. 13. The system of claim 7, in which the track module is located within volatile memory of a computing device. 14. A computer program product for identifying memory regions that contain remapped memory locations, the computer program product comprising:
a computer readable storage medium comprising computer usable program code embodied therewith, the computer usable program code comprising:
computer usable program code to, when executed by a processor, divide memory within a computing device into a number of memory regions;
computer usable program code to, when executed by a processor, identify a number of remapped memory locations within a memory region based on a number of tracking bits within a memory module controller; and
computer usable program code to, when executed by a processor, perform a remapped memory operation to the number of remapped memory locations, in which the remapped memory operation is based on a remapping function. 15. The computer program product of claim 14, in which the remapping function is a fine grained remapping with error checking and correcting and embedded pointers (FREE-p) mapping function. | 2,100 |
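The claims of record 5,936 above describe a tracking-bit scheme: memory is divided into regions, one tracking bit per region records whether that region holds any remapped location, and the bit gates whether a remapped memory operation (a remap-table lookup) is needed. A minimal Python sketch of that idea follows; the class, method names, and sizes are illustrative assumptions, not taken from the patent.

```python
class MemoryModuleController:
    """Toy model of the claimed controller: memory is divided into
    fixed-size regions, and one tracking bit per region records whether
    that region contains at least one remapped location."""

    def __init__(self, memory_size, region_size):
        self.region_size = region_size
        num_regions = (memory_size + region_size - 1) // region_size
        self.tracking_bits = [0] * num_regions  # "a number of tracking bits" (claim 1)
        self.remap_table = {}  # failed address -> spare address

    def region_of(self, address):
        return address // self.region_size

    def remap(self, failed_address, spare_address):
        # Record the remapping and set the owning region's tracking bit (claim 3).
        self.remap_table[failed_address] = spare_address
        self.tracking_bits[self.region_of(failed_address)] = 1

    def resolve(self, address):
        # Fast path (claim 5): a clear bit means no remapped locations here,
        # so the operation proceeds without consulting the remap table.
        if not self.tracking_bits[self.region_of(address)]:
            return address
        # Slow path (claim 6): the region may contain remapped locations.
        return self.remap_table.get(address, address)


ctrl = MemoryModuleController(memory_size=1024, region_size=64)
ctrl.remap(failed_address=70, spare_address=1000)
print(ctrl.resolve(10))   # untouched region, fast path -> 10
print(ctrl.resolve(70))   # remapped location -> 1000
print(ctrl.resolve(71))   # same region, not itself remapped -> 71
```

Claim 11's Bloom-filter variant would replace the exact bit array with a probabilistic membership test, trading occasional false positives (harmless extra table lookups) for less tracking state.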
5,937 | 5,937 | 13,918,402 | 2,179 | Systems, methods, and software are disclosed herein for facilitating enhanced canvas presentation environments. In an implementation, a user interacts with a touch-enabled display system capable of displaying items on a canvas. In response to a gesture made by the user with respect to an item being displayed, a format-specific interaction model is identified based on a format associated with the item. A response to the gesture may then be determined using the interaction model and the response rendered for display. | 1. An apparatus comprising:
one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when executed by a processing system, direct the processing system to at least: in response to a gesture associated with an item displayed on a display surface, identify an interaction model specific to a format of the item; identify a response to the gesture in accordance with the interaction model; and render the response with respect to the item on the display surface. 2. The apparatus of claim 1 wherein to identify the interaction model, the program instructions direct the processing system to identify the format of the item and select the interaction model from a plurality of interaction models associated with a plurality of formats, wherein the format comprises a one of the plurality of formats. 3. The apparatus of claim 2 wherein the program instructions further direct the processing system to render a plurality of items on the display surface, each of the plurality of items having an active status the same as every other of the plurality of items, and wherein the plurality of items includes the item associated with the gesture. 4. The apparatus of claim 3 wherein the program instructions further direct the processing system to render a user interface in which to display the plurality of items, wherein the user interface comprises a foreground and a background, and wherein the active status indicates whether each of the plurality of items is active in the foreground or the background of the user interface. 5. 
The apparatus of claim 4 wherein each of the plurality of interaction models defines a plurality of directional gestures as corresponding to a plurality of responses from which to identify the response, wherein at least a first portion of the plurality of responses are unique to each of the plurality of interaction models and wherein at least a second portion of the plurality of responses is shared in common with respect to each of the plurality of interaction models. 6. The apparatus of claim 5 wherein the plurality of directional gestures comprises a right swipe gesture, a left swipe gesture, an up swipe gesture, and a down swipe gesture. 7. The apparatus of claim 6 wherein the first portion of the plurality of responses corresponds to the right swipe gesture and the left swipe gesture and wherein the second portion of the plurality of responses corresponds to the up swipe gesture and the down swipe gesture. 8. The apparatus of claim 1 further comprising a display system configured to accept the gesture by way of a touch interface and display the response to the gesture and the processing system configured to execute the program instructions. 9. One or more computer readable storage media having program instructions stored therein for facilitating enhanced canvas environments that, when executed by a computing system, direct the computing system to at least:
render a drawing on a user interface comprising a multi-format canvas; in response to a single touch interaction that proceeds through the drawing, render an erasure of only a portion of the drawing; and in response to a multi-touch interaction that proceeds through the drawing, render an erasure of an entirety of the drawing. 10. The one or more computer readable storage media of claim 9 wherein the single touch interaction comprises dragging a single digit down through the drawing. 11. The one or more computer readable storage media of claim 10 wherein the multi-touch interaction comprises dragging at least three digits down through the drawing. 12. The one or more computer readable storage media of claim 11 wherein the erasure of only the portion of the drawing comprises an erased vertical strip through the drawing corresponding to a path through the drawing created by the single touch interaction. 13. A method for facilitating enhanced canvas environments comprising:
in response to a gesture associated with an item displayed on a display surface, identifying an interaction model specific to a format of the item; identifying a response to the gesture in accordance with the interaction model; and rendering the response with respect to the item on the display surface. 14. The method of claim 13 wherein identifying the interaction model comprises identifying the format of the item and selecting the interaction model from a plurality of interaction models associated with a plurality of formats, wherein the format comprises a one of the plurality of formats. 15. The method of claim 14 further comprising rendering a plurality of items on the display surface, each of the plurality of items having an active status the same as every other of the plurality of items, and wherein the plurality of items includes the item associated with the gesture. 16. The method of claim 15 further comprising rendering a user interface in which to display the plurality of items, wherein the user interface comprises a foreground and a background, and wherein the active status indicates whether each of the plurality of items is active in the foreground or the background of the user interface. 17. The method of claim 16 wherein each of the plurality of interaction models defines a plurality of directional gestures as corresponding to a plurality of responses from which to identify the response, wherein at least a first portion of the plurality of responses are unique to each of the plurality of interaction models and wherein at least a second portion of the plurality of responses is shared in common with respect to each of the plurality of interaction models. 18. The method of claim 17 wherein the plurality of directional gestures comprises a right swipe gesture, a left swipe gesture, an up swipe gesture, and a down swipe gesture. 19. 
The method of claim 18 wherein the first portion of the plurality of responses corresponds to the right swipe gesture and the left swipe gesture and wherein the second portion of the plurality of responses corresponds to the up swipe gesture and the down swipe gesture. 20. The method of claim 13 further comprising, in a display system, accepting the gesture by way of a touch interface and displaying the response to the gesture. | Systems, methods, and software are disclosed herein for facilitating enhanced canvas presentation environments. In an implementation, a user interacts with a touch-enabled display system capable of displaying items on a canvas. In response to a gesture made by the user with respect to an item being displayed, a format-specific interaction model is identified based on a format associated with the item. A response to the gesture may then be determined using the interaction model and the response rendered for display.1. An apparatus comprising:
one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when executed by a processing system, direct the processing system to at least: in response to a gesture associated with an item displayed on a display surface, identify an interaction model specific to a format of the item; identify a response to the gesture in accordance with the interaction model; and render the response with respect to the item on the display surface. 2. The apparatus of claim 1 wherein to identify the interaction model, the program instructions direct the processing system to identify the format of the item and select the interaction model from a plurality of interaction models associated with a plurality of formats, wherein the format comprises a one of the plurality of formats. 3. The apparatus of claim 2 wherein the program instructions further direct the processing system to render a plurality of items on the display surface, each of the plurality of items having an active status the same as every other of the plurality of items, and wherein the plurality of items includes the item associated with the gesture. 4. The apparatus of claim 3 wherein the program instructions further direct the processing system to render a user interface in which to display the plurality of items, wherein the user interface comprises a foreground and a background, and wherein the active status indicates whether each of the plurality of items is active in the foreground or the background of the user interface. 5. 
The apparatus of claim 4 wherein each of the plurality of interaction models defines a plurality of directional gestures as corresponding to a plurality of responses from which to identify the response, wherein at least a first portion of the plurality of responses are unique to each of the plurality of interaction models and wherein at least a second portion of the plurality of responses is shared in common with respect to each of the plurality of interaction models. 6. The apparatus of claim 5 wherein the plurality of directional gestures comprises a right swipe gesture, a left swipe gesture, an up swipe gesture, and a down swipe gesture. 7. The apparatus of claim 6 wherein the first portion of the plurality of responses corresponds to the right swipe gesture and the left swipe gesture and wherein the second portion of the plurality of responses corresponds to the up swipe gesture and the down swipe gesture. 8. The apparatus of claim 1 further comprising a display system configured to accept the gesture by way of a touch interface and display the response to the gesture and the processing system configured to execute the program instructions. 9. One or more computer readable storage media having program instructions stored therein for facilitating enhanced canvas environments that, when executed by a computing system, direct the computing system to at least:
render a drawing on a user interface comprising a multi-format canvas; in response to a single touch interaction that proceeds through the drawing, render an erasure of only a portion of the drawing; and in response to a multi-touch interaction that proceeds through the drawing, render an erasure of an entirety of the drawing. 10. The one or more computer readable storage media of claim 9 wherein the single touch interaction comprises dragging a single digit down through the drawing. 11. The one or more computer readable storage media of claim 10 wherein the multi-touch interaction comprises dragging at least three digits down through the drawing. 12. The one or more computer readable storage media of claim 11 wherein the erasure of only the portion of the drawing comprises an erased vertical strip through the drawing corresponding to a path through the drawing created by the single touch interaction. 13. A method for facilitating enhanced canvas environments comprising:
in response to a gesture associated with an item displayed on a display surface, identifying an interaction model specific to a format of the item; identifying a response to the gesture in accordance with the interaction model; and rendering the response with respect to the item on the display surface. 14. The method of claim 13 wherein identifying the interaction model comprises identifying the format of the item and selecting the interaction model from a plurality of interaction models associated with a plurality of formats, wherein the format comprises a one of the plurality of formats. 15. The method of claim 14 further comprising rendering a plurality of items on the display surface, each of the plurality of items having an active status the same as every other of the plurality of items, and wherein the plurality of items includes the item associated with the gesture. 16. The method of claim 15 further comprising rendering a user interface in which to display the plurality of items, wherein the user interface comprises a foreground and a background, and wherein the active status indicates whether each of the plurality of items is active in the foreground or the background of the user interface. 17. The method of claim 16 wherein each of the plurality of interaction models defines a plurality of directional gestures as corresponding to a plurality of responses from which to identify the response, wherein at least a first portion of the plurality of responses are unique to each of the plurality of interaction models and wherein at least a second portion of the plurality of responses is shared in common with respect to each of the plurality of interaction models. 18. The method of claim 17 wherein the plurality of directional gestures comprises a right swipe gesture, a left swipe gesture, an up swipe gesture, and a down swipe gesture. 19. 
The method of claim 18 wherein the first portion of the plurality of responses corresponds to the right swipe gesture and the left swipe gesture and wherein the second portion of the plurality of responses corresponds to the up swipe gesture and the down swipe gesture. 20. The method of claim 13 further comprising, in a display system, accepting the gesture by way of a touch interface and displaying the response to the gesture. | 2,100 |
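The claims of record 5,937 above describe selecting an interaction model by the displayed item's format, where some gesture responses are unique to each format (left/right swipes, claims 7 and 19) and others are shared across all formats (up/down swipes). A small dispatch-table sketch follows; the formats, gestures, and response names are hypothetical, chosen only to illustrate the unique/shared split.

```python
# Shared responses (claims 7/19): up and down swipes behave the same in
# every interaction model.
COMMON_GESTURES = {"swipe_up": "scroll_up", "swipe_down": "scroll_down"}

# Format-specific interaction models (claims 2/14): each format gets its
# own model, and left/right swipe responses are unique per model.
INTERACTION_MODELS = {
    "image": {**COMMON_GESTURES,
              "swipe_left": "next_photo", "swipe_right": "previous_photo"},
    "text":  {**COMMON_GESTURES,
              "swipe_left": "next_page", "swipe_right": "previous_page"},
}


def respond_to_gesture(item_format, gesture):
    """Identify the model specific to the item's format, then identify the
    response to the gesture in accordance with that model (claims 1/13)."""
    model = INTERACTION_MODELS[item_format]  # select from the plurality of models
    return model[gesture]                    # the rendered response


print(respond_to_gesture("image", "swipe_left"))  # next_photo (unique to images)
print(respond_to_gesture("text", "swipe_up"))     # scroll_up (shared response)
```

The same table-driven shape extends naturally to the claimed multi-touch erase behavior (claims 9-12), where the number of contact points selects between partial and full erasure.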
5,938 | 5,938 | 14,833,415 | 2,158 | Utilizing reference/identification (ID) linking in extensible markup language (XML) wrapper code generation in a data processing system. A code generator receives a type document and reference/ID constraints document and accesses the reference/ID constraints document to translate between XML structures and object structures. | 1-20. (canceled) 21. A computer-implemented method, comprising:
receiving a type document including extensible markup language (XML) code; receiving a reference/identification (ID) constraints document; translating, by a code generator and using the reference/ID constraints document, between an XML structure within the XML code and an object structure. 22. The method of claim 21, wherein
the reference/ID constraints document specifies reference/ID constraints of the XML code with the type document. 23. The method of claim 21, further comprising
creating a directed constraint graph using the reference/ID constraints document. 24. The method of claim 23, further comprising
generating, for an XML structure not in the directed constraint graph, a first set of deserialization code. 25. The method of claim 24, further comprising
generating, for a leaf XML structure within the directed constraint graph, a second set of deserialization code. 26. The method of claim 25, further comprising
generating, for a higher XML structure within the directed constraint graph, a third set of deserialization code, wherein the higher XML structure is a next highest structure in the directed constraint graph than the leaf XML structure. 27. A computer hardware system, comprising:
a hardware processor including a code generator, wherein the hardware processor is configured to initiate and/or perform:
receiving a type document including extensible markup language (XML) code;
receiving a reference/identification (ID) constraints document;
translating, by the code generator and using the reference/ID constraints document, between an XML structure within the XML code and an object structure. 28. The system of claim 27, wherein
the reference/ID constraints document specifies reference/ID constraints of the XML code with the type document. 29. The system of claim 27, wherein the hardware processor is further configured to initiate and/or perform
creating a directed constraint graph using the reference/ID constraints document. 30. The system of claim 29, wherein the hardware processor is further configured to initiate and/or perform
generating, for an XML structure not in the directed constraint graph, a first set of deserialization code. 31. The system of claim 30, wherein the hardware processor is further configured to initiate and/or perform
generating, for a leaf XML structure within the directed constraint graph, a second set of deserialization code. 32. The system of claim 31, wherein the hardware processor is further configured to initiate and/or perform
generating, for a higher XML structure within the directed constraint graph, a third set of deserialization code, wherein the higher XML structure is a next highest structure in the directed constraint graph than the leaf XML structure. 33. A computer program product, comprising:
a computer usable storage device having stored therein computer usable program code, which when executed by a computer hardware system, causes the computer hardware system to perform: receiving a type document including extensible markup language (XML) code; receiving a reference/identification (ID) constraints document; translating, by a code generator and using the reference/ID constraints document, between an XML structure within the XML code and an object structure. 34. The computer program product of claim 33, wherein
the reference/ID constraints document specifies reference/ID constraints of the XML code with the type document. 35. The computer program product of claim 33, wherein the computer usable program code further causes the computer hardware system to perform
creating a directed constraint graph using the reference/ID constraints document. 36. The computer program product of claim 35, wherein the computer usable program code further causes the computer hardware system to perform
generating, for an XML structure not in the directed constraint graph, a first set of deserialization code. 37. The computer program product of claim 36, wherein the computer usable program code further causes the computer hardware system to perform
generating, for a leaf XML structure within the directed constraint graph, a second set of deserialization code. 38. The computer program product of claim 37, wherein the computer usable program code further causes the computer hardware system to perform
generating, for a higher XML structure within the directed constraint graph, a third set of deserialization code, wherein the higher XML structure is a next highest structure in the directed constraint graph than the leaf XML structure. | Utilizing reference/identification (ID) linking in extensible markup language (XML) wrapper code generation in a data processing system. A code generator receives a type document and reference/ID constraints document and accesses the reference/ID constraints document to translate between XML structures and object structures.1-20. (canceled) 21. A computer-implemented method, comprising:
receiving a type document including extensible markup language (XML) code; receiving a reference/identification (ID) constraints document; translating, by a code generator and using the reference/ID constraints document, between an XML structure within the XML code and an object structure. 22. The method of claim 21, wherein
the reference/ID constraints document specifies reference/ID constraints of the XML code with the type document. 23. The method of claim 21, further comprising
creating a directed constraint graph using the reference/ID constraints document. 24. The method of claim 23, further comprising
generating, for an XML structure not in the directed constraint graph, a first set of deserialization code. 25. The method of claim 24, further comprising
generating, for a leaf XML structure within the directed constraint graph, a second set of deserialization code. 26. The method of claim 25, further comprising
generating, for a higher XML structure within the directed constraint graph, a third set of deserialization code, wherein the higher XML structure is a next highest structure in the directed constraint graph than the leaf XML structure. 27. A computer hardware system, comprising:
a hardware processor including a code generator, wherein the hardware processor is configured to initiate and/or perform:
receiving a type document including extensible markup language (XML) code;
receiving a reference/identification (ID) constraints document;
translating, by the code generator and using the reference/ID constraints document, between an XML structure within the XML code and an object structure. 28. The system of claim 27, wherein
the reference/ID constraints document specifies reference/ID constraints of the XML code with the type document. 29. The system of claim 27, wherein the hardware processor is further configured to initiate and/or perform
creating a directed constraint graph using the reference/ID constraints document. 30. The system of claim 29, wherein the hardware processor is further configured to initiate and/or perform
generating, for an XML structure not in the directed constraint graph, a first set of deserialization code. 31. The system of claim 30, wherein the hardware processor is further configured to initiate and/or perform
generating, for a leaf XML structure within the directed constraint graph, a second set of deserialization code. 32. The system of claim 31, wherein the hardware processor is further configured to initiate and/or perform
generating, for a higher XML structure within the directed constraint graph, a third set of deserialization code, wherein the higher XML structure is a next highest structure in the directed constraint graph than the leaf XML structure. 33. A computer program product, comprising:
a computer usable storage device having stored therein computer usable program code, which when executed by a computer hardware system, causes the computer hardware system to perform: receiving a type document including extensible markup language (XML) code; receiving a reference/identification (ID) constraints document; translating, by a code generator and using the reference/ID constraints document, between an XML structure within the XML code and an object structure. 34. The computer program product of claim 33, wherein
the reference/ID constraints document specifies reference/ID constraints of the XML code with the type document. 35. The computer program product of claim 33, wherein the computer usable program code further causes the computer hardware system to perform
creating a directed constraint graph using the reference/ID constraints document. 36. The computer program product of claim 35, wherein the computer usable program code further causes the computer hardware system to perform
generating, for an XML structure not in the directed constraint graph, a first set of deserialization code. 37. The computer program product of claim 36, wherein the computer usable program code further causes the computer hardware system to perform
generating, for a leaf XML structure within the directed constraint graph, a second set of deserialization code. 38. The computer program product of claim 37, wherein the computer usable program code further causes the computer hardware system to perform
generating, for a higher XML structure within the directed constraint graph, a third set of deserialization code, wherein the higher XML structure is a next highest structure in the directed constraint graph than the leaf XML structure. | 2,100 |
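Claims 23-26 of record 5,938 above (and their system and program-product counterparts) describe building a directed constraint graph from the reference/ID constraints document and generating deserialization code bottom-up: structures outside the graph first, then leaf structures, then progressively higher structures. The sketch below uses Python's standard-library `graphlib` for that ordering; the XML structure names and the shape of the constraints mapping are invented for illustration, not taken from the patent.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical ref/ID constraints: each XML structure maps to the set of
# structures it references by ID. This is the directed constraint graph
# of claims 23/29/35.
constraints = {
    "order": {"customer", "item"},  # <order> refers to customer and item IDs
    "customer": set(),              # leaf structure in the graph
    "item": set(),                  # leaf structure in the graph
}

# Structures with no ref/ID constraints sit outside the graph and get
# their deserialization code first (claims 24/30/36); "invoice" stands
# in for such a structure here.
standalone = ["invoice"]


def codegen_order(graph, standalone):
    """Return the order in which to generate deserialization code:
    standalone structures, then leaves, then next-highest structures,
    and so on up the graph (claims 25-26)."""
    # static_order() yields each node only after all of its referenced
    # (predecessor) structures, i.e. leaves before higher structures.
    return standalone + list(TopologicalSorter(graph).static_order())


print(codegen_order(constraints, standalone))
```

Generating code in this order guarantees that by the time a higher structure's deserializer is emitted, the deserializers it links to via ID references already exist.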
5,939 | 5,939 | 14,827,963 | 2,173 | A presentation system for presenting information to an audience within a space, the system comprising a control interface, a master presentation unit including a flat panel display screen and a processor, the screen including a master space and a slave presentation assembly including a slave presentation surface and a first projector for projecting images on the slave presentation surface, the master unit processor linkable to the interface to receive commands therefrom, the processor programmed to monitor for a command from the interface to flip an image from the master space to the slave space and, when a command to flip an image is received, causing the image from the master space to be presented in the slave space. | 1. A presentation system comprising:
a portable presentation unit including a rectangular rotational display screen and a wireless transceiver; a wireless access point in communication with the wireless transceiver; a stationary presentation unit including a stationary display screen in communication with the wireless access point; a processor in communication with the wireless access point, wherein the processor is programmed to format an image for display on the rectangular rotational display in the portable presentation unit, to monitor the portable presentation unit for a command signal to flip an image to the stationary display screen in the stationary presentation unit, to produce the image on the stationary display, and to format a height and width of the image on the stationary display screen to correlate with the image on the rotational display in the portable presentation unit. 2. The presentation system of claim 1, wherein the processor is further programmed to format the images to adjust the number of images provided on the stationary display depending on the height and width orientation of the rotational display. 3. The presentation system of claim 1, wherein the rectangular rotational display screen is a master presentation unit and the stationary presentation unit is a slave presentation unit. 4. The presentation system of claim 1 wherein the processor is programmed to format the images on the stationary display to provide different numbers of images when the images flipped from the rotational display are in a landscape format than when they are in a portrait format. 5. The presentation system of claim 1 wherein the processor is programmed to simultaneously present at least two adjacent images in the stationary display in a portrait format and one image in a landscape format. 6. The presentation system of claim 1 wherein the stationary presentation unit includes at least one projector. 7. 
The presentation system of claim 1 wherein the rotational display screen comprises a touch sensitive flat screen display device. 8. The presentation system of claim 1, wherein the portable presentation unit further comprises an interface, the interface enabling a user to provide a command signal to the processor designating the portable presentation unit as a master unit. 9. The presentation system of claim 1, wherein the portable presentation unit further comprises an interface, the interface enabling a user to provide a command signal to the processor designating at least one other portable or stationary display unit as a slave. 10. The presentation system of claim 1, wherein when a command signal to flip an image is received by the processor, the processor is programmed to correlate the image on the rotational display screen with an image identifier. 11. The presentation system of claim 10, further comprising a memory in communication with the processor, and wherein the processor is programmed to store the image identifier in memory. 12. The presentation system of claim 1, wherein the portable presentation unit further comprises an interface, the interface enabling a user to provide a command signal to the processor to retrieve an image from the stationary presentation unit. 13. The presentation system of claim 1, wherein the rectangular rotational display comprises a writable and erasable surface. 
| A presentation system for presenting information to an audience within a space, the system comprising a control interface, a master presentation unit including a flat panel display screen and a processor, the screen including a master space and a slave presentation assembly including a slave presentation surface and a first projector for projecting images on the slave presentation surface, the master unit processor linkable to the interface to receive commands therefrom, the processor programmed to monitor for a command from the interface to flip an image from the master space to the slave space and, when a command to flip an image is received, causing the image from the master space to be presented in the slave space.1. A presentation system comprising:
a portable presentation unit including a rectangular rotational display screen and a wireless transceiver; a wireless access point in communication with the wireless transceiver; a stationary presentation unit including a stationary display screen in communication with the wireless access point; a processor in communication with the wireless access point, wherein the processor is programmed to format an image for display on the rectangular rotational display in the portable presentation unit, to monitor the portable presentation unit for a command signal to flip an image to the stationary display screen in the stationary presentation unit, to produce the image on the stationary display, and to format a height and width of the image on the stationary display screen to correlate with the image on the rotational display in the portable presentation unit. 2. The presentation system of claim 1, wherein the processor is further programmed to format the images to adjust the number of images provided on the stationary display depending on the height and width orientation of the rotational display. 3. The presentation system of claim 1, wherein the rectangular rotational display screen is a master presentation unit and the stationary presentation unit is a slave presentation unit. 4. The presentation system of claim 1 wherein the processor is programmed to format the images on the stationary display to provide a different number of images when the images flipped from the rotational display are in a landscape format than in a portrait format. 5. The presentation system of claim 1 wherein the processor is programmed to simultaneously present at least two adjacent images in the stationary display in a portrait format and one image in a landscape format. 6. The presentation system of claim 1 wherein the stationary presentation unit includes at least one projector. 7. 
The presentation system of claim 1 wherein the rotational display screen comprises a touch sensitive flat screen display device. 8. The presentation system of claim 1, wherein the portable presentation unit further comprises an interface, the interface enabling a user to provide a command signal to the processor designating the portable presentation unit as a master unit. 9. The presentation system of claim 1, wherein the portable presentation unit further comprises an interface, the interface enabling a user to provide a command signal to the processor designating at least one other portable or stationary display unit as a slave. 10. The presentation system of claim 1, wherein when a command signal to flip an image is received by the processor, the processor is programmed to correlate the image on the rotational display screen with an image identifier. 11. The presentation system of claim 10, further comprising a memory in communication with the processor, and wherein the processor is programmed to store the image identifier in memory. 12. The presentation system of claim 1, wherein the portable presentation unit further comprises an interface, the interface enabling a user to provide a command signal to the processor to retrieve an image from the stationary presentation unit. 13. The presentation system of claim 1, wherein the rectangular rotational display comprises a writable and erasable surface. | 2,100 |
5,940 | 5,940 | 15,525,379 | 2,135 | Described are motherboards with memory-module sockets that accept legacy memory modules for backward compatibility, or accept a greater number of configurable modules in support of increased memory capacity. The configurable modules can be backward compatible with legacy motherboards. Equipped with the configurable modules, the motherboards support memory systems with high signaling rates and capacities. | 1. A motherboard comprising:
a memory-controller component; a first memory-module socket adjacent the memory-controller component; a second memory-module socket adjacent the first memory-module socket on a side of the first memory-module socket opposite the memory-controller component; a third memory-module socket adjacent the second memory-module socket on a side of the second memory-module socket opposite the first memory-module socket; a fourth memory-module socket adjacent the third memory-module socket on a side of the third memory-module socket opposite the second memory-module socket; a first data-link group coupling the memory-controller component to the first memory module socket and the third memory-module socket, the first data-link group extending past the second memory-module socket; and a second data-link group extending past the first memory-module socket and the third memory-module socket, the second data-link group coupling the memory-controller component to the second memory module socket and the fourth memory-module socket. 2. The motherboard of claim 1, further comprising a first memory module in the first memory-module socket and a second memory module in the second memory-module socket, the memory controller to direct a memory transaction to the first memory module and the second memory module via the respective first data-link group and the second data-link group. 3. The motherboard of claim 2, further comprising a third memory module in the third memory-module socket and a fourth memory module in the fourth memory-module socket, the memory controller to direct the memory transaction to the third memory module and the fourth memory module via the respective second data-link group and the first data-link group. 4. The motherboard of claim 2, each of the sockets including a similar arrangement of pin groups, including a first pin group and a second pin group;
the first data-link group connected to the first pin group of the first socket and the second data-link group connected to the second pin group of the second socket; the first memory module including a first data-buffer component to steer first data from the first memory module to the first pin group on the first memory module responsive to a read command; and the second memory module including a second data-buffer component to steer second data from the second memory module to the second pin group of the second memory module responsive to the read command. 5. The motherboard of claim 4, further comprising:
a third memory module in the third memory-module socket; and a fourth memory module in the fourth memory-module socket; the first data-link group connected to the first pin group of the third socket and the second data-link group connected to the second pin group of the fourth socket; the motherboard steering data from the third memory module to the first pin group on the third memory module responsive to the read command and steering data from the fourth memory module to the second pin group of the fourth memory module responsive to the read command. 6. The motherboard of claim 1, each of the sockets including a similar arrangement of pin groups, including a first pin group and a second pin group, the first data-link group connected to the first pin group of the first socket and the second data-link group connected to the first pin group of the second socket. 7. The motherboard of claim 1, each of the sockets including a similar arrangement of pin groups, including a first pin group and a second pin group, the first data-link group connected to the first pin group of the first socket and the second data-link group connected to the second pin group of the second socket. 8. The motherboard of claim 1, further comprising:
a third data-link group extending between the first memory-module socket and the second memory-module socket; and a fourth data-link group extending between the third memory-module socket and the fourth memory-module socket. 9. The motherboard of claim 8, further comprising:
a memory module in one of the first memory-module socket and the second memory-module socket; and a continuity module in the other of the first memory-module socket and the second memory-module socket; the memory module coupled to the memory-controller component via a data path that includes the third data-link group connected in series with the continuity module. 10. The motherboard of claim 9, wherein the data path includes the first data-link group connected in series with the third data-link group. 11. The motherboard of claim 8, further comprising:
a memory module in one of the third memory-module socket and the fourth memory-module socket; and a continuity module in the other of the third memory-module socket and the fourth memory-module socket; the memory module coupled to the memory-controller component via a data path that includes the fourth data-link group connected in series with the continuity module. 12. The motherboard of claim 11, wherein the data path includes the second data-link group connected in series with the fourth data-link group. 13. A memory module comprising:
a module interface; memory components, including a first memory component and a second memory component; and an address-buffer component having:
a primary address interface coupled to the module interface to receive primary memory addresses expressed as primary-address bits;
a primary chip-select interface coupled to the module interface to receive primary chip-select information as primary chip-select bits;
a first secondary chip-select interface coupled to the first memory component; and
a second secondary chip-select interface coupled to the second memory component; and
logic to direct the primary chip-select information to the first secondary chip-select interface and disable the second secondary chip-select interface responsive to a subset of the primary-address bits and a subset of the primary chip-select bits. 14. The memory module of claim 13, wherein the logic disables the second secondary chip-select interface in a first mode and supports a second mode. 15. The memory module of claim 14, the logic, in the second mode, to direct the primary chip-select information to the first secondary chip-select interface and the second secondary chip-select interface responsive to the same or a different subset of the primary-address bits and the same or a different subset of the primary chip-select bits. 16. The memory module of claim 13, wherein the logic directs the primary chip-select information responsive to a mode signal. 17. The memory module of claim 16, further comprising a mode register to store the mode signal. 18. The memory module of claim 16, wherein the logic derives the mode signal from at least one of the primary-address bits and the primary chip-select bits. 19. The memory module of claim 13, further comprising a data-buffer component coupled between the module interface and the memory components, the logic coupled to the data-buffer component to alternatively connect the first memory component and the second memory component to the module interface. 20. The memory module of claim 19, the data-buffer component including:
a first primary data-link interface coupled to the module interface; a second primary data-link interface coupled to the module interface; a first secondary data-link interface coupled to the first memory component; and a second secondary data-link interface coupled to the second memory component; wherein the address-buffer component is to issue a signal to the data-buffer component to steer data between the first primary data-link interface and one of the first and second secondary data-link interfaces, and to disable the other of the first and second secondary data-link interfaces. 21. The memory module of claim 20, wherein the address-buffer component issues the signal to the data-buffer component to steer the data in a first mode, and wherein the address-buffer component supports a second mode to steer the data between the first primary data-link interface and the first secondary data-link interface and between the second primary data-link interface and the second secondary data-link interface. 22. The memory module of claim 21, further comprising a mode-select terminal to receive a mode-select signal to select between the first mode and the second mode, wherein the mode-select signal initializes at least one of the address-buffer component and the data-buffer component and maintains the at least one of the address-buffer component and the data-buffer component in the mode during operation. 23. The memory module of claim 22, further comprising a register to store the mode-select signal. | Described are motherboards with memory-module sockets that accept legacy memory modules for backward compatibility, or accept a greater number of configurable modules in support of increased memory capacity. The configurable modules can be backward compatible with legacy motherboards. Equipped with the configurable modules, the motherboards support memory systems with high signaling rates and capacities.1. A motherboard comprising:
a memory-controller component; a first memory-module socket adjacent the memory-controller component; a second memory-module socket adjacent the first memory-module socket on a side of the first memory-module socket opposite the memory-controller component; a third memory-module socket adjacent the second memory-module socket on a side of the second memory-module socket opposite the first memory-module socket; a fourth memory-module socket adjacent the third memory-module socket on a side of the third memory-module socket opposite the second memory-module socket; a first data-link group coupling the memory-controller component to the first memory module socket and the third memory-module socket, the first data-link group extending past the second memory-module socket; and a second data-link group extending past the first memory-module socket and the third memory-module socket, the second data-link group coupling the memory-controller component to the second memory module socket and the fourth memory-module socket. 2. The motherboard of claim 1, further comprising a first memory module in the first memory-module socket and a second memory module in the second memory-module socket, the memory controller to direct a memory transaction to the first memory module and the second memory module via the respective first data-link group and the second data-link group. 3. The motherboard of claim 2, further comprising a third memory module in the third memory-module socket and a fourth memory module in the fourth memory-module socket, the memory controller to direct the memory transaction to the third memory module and the fourth memory module via the respective second data-link group and the first data-link group. 4. The motherboard of claim 2, each of the sockets including a similar arrangement of pin groups, including a first pin group and a second pin group;
the first data-link group connected to the first pin group of the first socket and the second data-link group connected to the second pin group of the second socket; the first memory module including a first data-buffer component to steer first data from the first memory module to the first pin group on the first memory module responsive to a read command; and the second memory module including a second data-buffer component to steer second data from the second memory module to the second pin group of the second memory module responsive to the read command. 5. The motherboard of claim 4, further comprising:
a third memory module in the third memory-module socket; and a fourth memory module in the fourth memory-module socket; the first data-link group connected to the first pin group of the third socket and the second data-link group connected to the second pin group of the fourth socket; the motherboard steering data from the third memory module to the first pin group on the third memory module responsive to the read command and steering data from the fourth memory module to the second pin group of the fourth memory module responsive to the read command. 6. The motherboard of claim 1, each of the sockets including a similar arrangement of pin groups, including a first pin group and a second pin group, the first data-link group connected to the first pin group of the first socket and the second data-link group connected to the first pin group of the second socket. 7. The motherboard of claim 1, each of the sockets including a similar arrangement of pin groups, including a first pin group and a second pin group, the first data-link group connected to the first pin group of the first socket and the second data-link group connected to the second pin group of the second socket. 8. The motherboard of claim 1, further comprising:
a third data-link group extending between the first memory-module socket and the second memory-module socket; and a fourth data-link group extending between the third memory-module socket and the fourth memory-module socket. 9. The motherboard of claim 8, further comprising:
a memory module in one of the first memory-module socket and the second memory-module socket; and a continuity module in the other of the first memory-module socket and the second memory-module socket; the memory module coupled to the memory-controller component via a data path that includes the third data-link group connected in series with the continuity module. 10. The motherboard of claim 9, wherein the data path includes the first data-link group connected in series with the third data-link group. 11. The motherboard of claim 8, further comprising:
a memory module in one of the third memory-module socket and the fourth memory-module socket; and a continuity module in the other of the third memory-module socket and the fourth memory-module socket; the memory module coupled to the memory-controller component via a data path that includes the fourth data-link group connected in series with the continuity module. 12. The motherboard of claim 11, wherein the data path includes the second data-link group connected in series with the fourth data-link group. 13. A memory module comprising:
a module interface; memory components, including a first memory component and a second memory component; and an address-buffer component having:
a primary address interface coupled to the module interface to receive primary memory addresses expressed as primary-address bits;
a primary chip-select interface coupled to the module interface to receive primary chip-select information as primary chip-select bits;
a first secondary chip-select interface coupled to the first memory component; and
a second secondary chip-select interface coupled to the second memory component; and
logic to direct the primary chip-select information to the first secondary chip-select interface and disable the second secondary chip-select interface responsive to a subset of the primary-address bits and a subset of the primary chip-select bits. 14. The memory module of claim 13, wherein the logic disables the second secondary chip-select interface in a first mode and supports a second mode. 15. The memory module of claim 14, the logic, in the second mode, to direct the primary chip-select information to the first secondary chip-select interface and the second secondary chip-select interface responsive to the same or a different subset of the primary-address bits and the same or a different subset of the primary chip-select bits. 16. The memory module of claim 13, wherein the logic directs the primary chip-select information responsive to a mode signal. 17. The memory module of claim 16, further comprising a mode register to store the mode signal. 18. The memory module of claim 16, wherein the logic derives the mode signal from at least one of the primary-address bits and the primary chip-select bits. 19. The memory module of claim 13, further comprising a data-buffer component coupled between the module interface and the memory components, the logic coupled to the data-buffer component to alternatively connect the first memory component and the second memory component to the module interface. 20. The memory module of claim 19, the data-buffer component including:
a first primary data-link interface coupled to the module interface; a second primary data-link interface coupled to the module interface; a first secondary data-link interface coupled to the first memory component; and a second secondary data-link interface coupled to the second memory component; wherein the address-buffer component is to issue a signal to the data-buffer component to steer data between the first primary data-link interface and one of the first and second secondary data-link interfaces, and to disable the other of the first and second secondary data-link interfaces. 21. The memory module of claim 20, wherein the address-buffer component issues the signal to the data-buffer component to steer the data in a first mode, and wherein the address-buffer component supports a second mode to steer the data between the first primary data-link interface and the first secondary data-link interface and between the second primary data-link interface and the second secondary data-link interface. 22. The memory module of claim 21, further comprising a mode-select terminal to receive a mode-select signal to select between the first mode and the second mode, wherein the mode-select signal initializes at least one of the address-buffer component and the data-buffer component and maintains the at least one of the address-buffer component and the data-buffer component in the mode during operation. 23. The memory module of claim 22, further comprising a register to store the mode-select signal. | 2,100 |
5,941 | 5,941 | 14,427,418 | 2,198 | A method for computer-assisted monitoring of an electrical energy-generating installation, in which output variables (y(t)) of the installation are prognosticated using a data-driven model (NN) based on corresponding input variables (x(t)). A confidence measurement (C(t)) is determined for respective input variables (x(t)), using one or more density estimators (DE), this measurement being higher, the greater the similarity of the input variables (x(t)) to known input variables from training data with which the data-driven model (NN) and the density estimator (DE) are taught. Based thereon, an average weighted deviation (E(t)) is determined between the prognosticated output variables (y(t)) and the output variables (y0(t)) actually occurring. If the average weighted deviation (E(t)) exceeds a predetermined threshold (ETh) successive times, an error in operation is detected and an alarm is issued. | 1. A method for the computer-assisted monitoring of the operation of a technical system, wherein the technical system is characterized at corresponding operating times (t) by a state vector comprising a number of input variables (x(t)) and at least one output variable (y(t)) which is to be monitored, wherein:
a) the at least one output variable (y(t)) is predicted for respective operating times (t) on the basis of input variables occurring in the operation of the technical system with a data-driven model (NN) which is trained by means of training data from known state vectors; b) at least one density estimator (DE), trained by means of known input variables (x(t)) of the training data, is applied for respective operating times (t) to the number of input variables (x(t)) at the corresponding operating time (t), whereby a confidence measure (C(t)) is defined which is higher the greater the similarity of the input variables (x(t)) at the corresponding operating time (t) to known input variables (x(t)) from the training data; c) for respective cycles (CY) from a plurality of consecutive operating times (t), a weighted deviation (E(t)), averaged over the number of state vectors in the respective cycle (CY), between the at least one predicted output variable (y(t)) and the at least one output variable (y0(t)) occurring in the operation of the technical system is defined, wherein state vectors whose number of input variables have low confidence measures (C(t)) are weighted less in the average weighted deviation; d) a malfunction of the technical system is detected if all average weighted deviations (E(t)), for a number of consecutive cycles (CY) which is greater than a predefined numerical threshold (CntTh), comprising one or more criteria, fulfill the criterion that the amount of these deviations exceeds a predefined threshold value (ETh). 2. 
The method as claimed in claim 1, in which a malfunction is detected in step d) if all average weighted deviations (E(t)) for a number of consecutive cycles (CY), which is greater than the predefined numerical threshold (CntTh), fulfill the further criterion that they correspond to predicted output variables (y(t)) which are always smaller or always greater than the corresponding output variables (y0(t)) occurring in the operation of the technical system. 3. The method as claimed in claim 1, in which an alarm is output or another precautionary measure is instigated to protect the technical system if a malfunction of the technical system is detected. 4. The method as claimed in claim 1, in which the at least one output variable (y(t)) comprises a measurement variable in the technical system and/or is determined from one or more measurement variables in the technical system and/or is a variable regulated in the operation of the system. 5. The method as claimed in claim 1, in which the number of input variables (x(t)) contained in a respective state vector is defined on the basis of a trainable statistical model. 6. The method as claimed in claim 1, in which the data-driven model (NN) is based on at least one of a neural network, support vector machines or Gaussian processes. 7. The method as claimed in claim 1, in which the at least one density estimator (DE) is based on a neural clouds algorithm. 8. The method as claimed in claim 1, in which the average weighted deviation (E(t)) is defined in such a way that only state vectors whose number of input variables (x(t)) have confidence measures (C(t)) above a confidence threshold (CTh) are taken into account in the average weighted deviation (E(t)), wherein the state vectors taken into account in the average weighted deviation (E(t)) are equally heavily weighted. 9. 
The method as claimed in claim 1, in which the predefined threshold value (ETh) is defined according to validation data comprising known state vectors at corresponding operating times (t), wherein the scatter of the deviations between the at least one output variable (y(t)), which is predicted with the trained data-driven model (NN) on the basis of input variables (x(t)) from the validation data, and the at least one output variable (y0(t)) which is contained in the state vector (x(t)) of the validation data at the corresponding operating time (t), is defined from the validation data for respective operating times, wherein the predefined threshold (ETh) is determined from the scatter of the deviations in such a way that the greater the scatter, the greater the predefined threshold (ETh). 10. The method as claimed in claim 9, in which the scatter is represented by the standard deviation or variance of the frequency distribution of the deviations determined according to the validation data, or depends on the standard deviation or the variance, wherein the predefined threshold value (ETh) represents the standard deviation or variance multiplied by a positive factor. 11. The method as claimed in claim 1, in which a counter (Cnt(t)) is incremented in step d) whenever the average weighted deviation (E(t)) fulfills the criterion or criteria comprising the criterion that its amount exceeds a predefined threshold value (ETh) for a cycle (CY), wherein, with each incrementation of the counter (Cnt(t)), a warning (W) is output and a malfunction of the technical system is furthermore detected if the incrementation of the counter indicates that the number of temporally consecutive cycles (CY) is greater than the predefined numerical threshold (CntTh), wherein the counter (Cnt(t)) is reset to an initial value if the average weighted deviation (E(t)) does not fulfill the criterion or criteria. 12. 
The method as claimed in claim 11, in which different types of warning (W) are output depending on the number of consecutive cycles (CY) since the resetting of the counter (Cnt(t)) in which the average weighted deviations (E(t)) fulfill the criterion or criteria. 13. The method as claimed in claim 11, in which the warning (W) comprises the output of a signal and/or the sending of a message. 14. The method as claimed in claim 1, in which a training of the data-driven model (NN) and/or the at least one density estimator (DE) is repeated at predefined time intervals with state vectors newly added as training data during the operation of the technical system. 15. The method as claimed in claim 1, in which the technical system is an electrical energy-generating installation comprising a gas turbine. 16. The method as claimed in claim 15, in which the number of input variables and/or the at least one output variable comprise one or more of the following variables of the gas turbine:
the compressor efficiency of the gas turbine; the turbine efficiency of the gas turbine; the regulated exhaust gas of the gas turbine; the setting of one or more guide vanes, in the gas turbine compressor; the rotational speed of the gas turbine; one or more pressures and/or temperatures in the gas turbine, including the inlet temperature and/or the inlet pressure and/or the outlet temperature and/or the outlet pressure in the compressor and/or in the turbine; the temperature in the environment in which the gas turbine is operated; the relative humidity in the environment in which the gas turbine is operated; the air pressure in the environment in which the gas turbine is operated; one or more mass and/or volume flows; one or more parameters of a cooling and/or auxiliary system and/or lubricating oil and/or bearing systems in the gas turbine, including the setting of one or more valves for the supply of cooling air; the performance of the gas turbine, including a percentage performance value; the fuel quality of the gas turbine; the pollutant emission of the gas turbine, including the emission of nitrogen oxides and/or carbon monoxide; the temperature of one or more turbine vanes of the gas turbine; the combustion dynamics of the combustion chamber of the gas turbine; the quantity of gas supplied to the gas turbine; bearing and/or housing vibrations in the gas turbine. 17. A device for the computer-assisted monitoring of the operation of a technical system, wherein the device comprises a computer which is programmed to carry out the method as claimed in claim 1. 18. A technical system, comprising the device as claimed in claim 17. 19. A computer program product with a program code stored on a non-transitory machine-readable medium which is executable on a computer to carry out the method as claimed in claim 1. 20. A method as claimed in claim 18, wherein said technical system is an electrical energy-generating installation. 21. 
The method as claimed in claim 6, wherein said neural network is a recurrent neural network. 22. The method as claimed in claim 3, in which the alarm comprises the output of a signal and/or the sending of a message. | A method for computer-assisted monitoring of an electrical energy-generating installation, in which output variables (y(t)) of the installation are prognosticated using a data-driven model (NN) based on corresponding input variables (x(t)). A confidence measurement (C(t)) is determined for respective input variables (x(t)), using one or more density estimators (DE), this measurement being higher, the greater the similarity of the input variables (x(t)) to known input variables from training data with which the data-driven model (NN) and the density estimator (DE) are taught. Based thereon, an average weighted deviation (E(t)) is determined between the prognosticated output variables (y(t)) and the output variables (y0(t)) actually occurring. If the average weighted deviation (E(t)) exceeds a predetermined threshold (ETh) successive times, an error in operation is detected and an alarm is issued. 1. A method for the computer-assisted monitoring of the operation of a technical system, wherein the technical system is characterized at corresponding operating times (t) by a state vector comprising a number of input variables (x(t)) and at least one output variable (y(t)) which is to be monitored, wherein:
a) the at least one output variable (y(t)) is predicted for respective operating times (t) on the basis of input variables occurring in the operation of the technical system with a data-driven model (NN) which is trained by means of training data from known state vectors; b) at least one density estimator (DE), trained by means of known input variables (x(t)) of the training data, is applied for respective operating times (t) to the number of input variables (x(t)) at the corresponding operating time (t), whereby a confidence measure (C(t)) is defined which is higher the greater the similarity of the input variables (x(t)) at the corresponding operating time (t) to known input variables (x(t)) from the training data; c) for respective cycles (CY) from a plurality of consecutive operating times (t), a weighted deviation (E(t)), averaged over the number of state vectors in the respective cycle (CY), between the at least one predicted output variable (y(t)) and the at least one output variable (y0(t)) occurring in the operation of the technical system is defined, wherein state vectors whose number of input variables have low confidence measures (C(t)) are weighted less in the average weighted deviation; d) a malfunction of the technical system is detected if all average weighted deviations (E(t)), for a number of consecutive cycles (CY) which is greater than a predefined numerical threshold (CntTh), fulfill one or more criteria comprising the criterion that the amount of these deviations exceeds a predefined threshold value (ETh). 2. 
The method as claimed in claim 1, in which a malfunction is detected in step d) if all average weighted deviations (E(t)) for a number of consecutive cycles (CY), which is greater than the predefined numerical threshold (CntTh), fulfill the further criterion that they correspond to predicted output variables (y(t)) which are always smaller or always greater than the corresponding output variables (y0(t)) occurring in the operation of the technical system. 3. The method as claimed in claim 1, in which an alarm is output or another precautionary measure is instigated to protect the technical system if a malfunction of the technical system is detected. 4. The method as claimed in claim 1, in which the at least one output variable (y(t)) comprises a measurement variable in the technical system and/or is determined from one or more measurement variables in the technical system and/or is a variable regulated in the operation of the system. 5. The method as claimed in claim 1, in which the number of input variables (x(t)) contained in a respective state vector is defined on the basis of a trainable statistical model. 6. The method as claimed in claim 1, in which the data-driven model (NN) is based on at least one of a neural network, support vector machines or Gaussian processes. 7. The method as claimed in claim 1, in which the at least one density estimator (DE) is based on a neural clouds algorithm. 8. The method as claimed in claim 1, in which the average weighted deviation (E(t)) is defined in such a way that only state vectors whose number of input variables (x(t)) have confidence measures (C(t)) above a confidence threshold (CTh) are taken into account in the average weighted deviation (E(t)), wherein the state vectors taken into account in the average weighted deviation (E(t)) are equally heavily weighted. 9. 
The method as claimed in claim 1, in which the predefined threshold value (ETh) is defined according to validation data comprising known state vectors at corresponding operating times (t), wherein the scatter of the deviations between the at least one output variable (y(t)), which is predicted with the trained data-driven model (NN) on the basis of input variables (x(t)) from the validation data, and the at least one output variable (y0(t)) which is contained in the state vector (x(t)) of the validation data at the corresponding operating time (t), is defined from the validation data for respective operating times, wherein the predefined threshold (ETh) is determined from the scatter of the deviations in such a way that the greater the scatter, the greater the predefined threshold (ETh). 10. The method as claimed in claim 9, in which the scatter is represented by the standard deviation or variance of the frequency distribution of the deviations determined according to the validation data, or depends on the standard deviation or the variance, wherein the predefined threshold value (ETh) represents the standard deviation or variance multiplied by a positive factor. 11. The method as claimed in claim 1, in which a counter (Cnt(t)) is incremented in step d) whenever the average weighted deviation (E(t)) fulfills the criterion or criteria comprising the criterion that its amount exceeds a predefined threshold value (ETh) for a cycle (CY), wherein, with each incrementation of the counter (Cnt(t)), a warning (W) is output and a malfunction of the technical system is furthermore detected if the incrementation of the counter indicates that the number of temporally consecutive cycles (CY) is greater than the predefined numerical threshold (CntTh), wherein the counter (Cnt(t)) is reset to an initial value if the average weighted deviation (E(t)) does not fulfill the criterion or criteria. 12. 
The method as claimed in claim 11, in which different types of warning (W) are output depending on the number of consecutive cycles (CY) since the resetting of the counter (Cnt(t)) in which the average weighted deviations (E(t)) fulfill the criterion or criteria. 13. The method as claimed in claim 11, in which the warning (W) comprises the output of a signal and/or the sending of a message. 14. The method as claimed in claim 1, in which a training of the data-driven model (NN) and/or the at least one density estimator (DE) is repeated at predefined time intervals with state vectors newly added as training data during the operation of the technical system. 15. The method as claimed in claim 1, in which the technical system is an electrical energy-generating installation comprising a gas turbine. 16. The method as claimed in claim 15, in which the number of input variables and/or the at least one output variable comprise one or more of the following variables of the gas turbine:
the compressor efficiency of the gas turbine; the turbine efficiency of the gas turbine; the regulated exhaust gas of the gas turbine; the setting of one or more guide vanes, in the gas turbine compressor; the rotational speed of the gas turbine; one or more pressures and/or temperatures in the gas turbine, including the inlet temperature and/or the inlet pressure and/or the outlet temperature and/or the outlet pressure in the compressor and/or in the turbine; the temperature in the environment in which the gas turbine is operated; the relative humidity in the environment in which the gas turbine is operated; the air pressure in the environment in which the gas turbine is operated; one or more mass and/or volume flows; one or more parameters of a cooling and/or auxiliary system and/or lubricating oil and/or bearing systems in the gas turbine, including the setting of one or more valves for the supply of cooling air; the performance of the gas turbine, including a percentage performance value; the fuel quality of the gas turbine; the pollutant emission of the gas turbine, including the emission of nitrogen oxides and/or carbon monoxide; the temperature of one or more turbine vanes of the gas turbine; the combustion dynamics of the combustion chamber of the gas turbine; the quantity of gas supplied to the gas turbine; bearing and/or housing vibrations in the gas turbine. 17. A device for the computer-assisted monitoring of the operation of a technical system, wherein the device comprises a computer which is programmed to carry out the method as claimed in claim 1. 18. A technical system, comprising the device as claimed in claim 17. 19. A computer program product with a program code stored on a non-transitory machine-readable medium which is executable on a computer to carry out the method as claimed in claim 1. 20. A method as claimed in claim 18, wherein said technical system is an electrical energy-generating installation. 21. 
The method as claimed in claim 6, wherein said neural network is a recurrent neural network. 22. The method as claimed in claim 3, in which the alarm comprises the output of a signal and/or the sending of a message. | 2,100 |
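The detection scheme recited in claims 1, 8, and 11 above (a confidence-weighted deviation averaged per cycle, a threshold ETh, and a counter of consecutive over-threshold cycles compared against CntTh) can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the model, the density estimator, the state format, and all thresholds below are placeholder assumptions.

```python
# Hypothetical sketch of the monitoring loop from claims 1, 8, and 11:
# a data-driven model predicts y(t), a density estimator yields a
# confidence C(t), per-cycle deviations are averaged over sufficiently
# confident state vectors, and a counter of consecutive over-threshold
# cycles triggers a malfunction report. All names are illustrative.

def run_cycle(model, density, states, c_th=0.5):
    """Average weighted deviation E(t) over one cycle CY (claim 8 variant:
    only state vectors with confidence above c_th count, equally weighted)."""
    devs = []
    for x, y0 in states:
        y_pred = model(x)      # step a): predict the output variable
        c = density(x)         # step b): confidence measure C(t)
        if c > c_th:           # claim 8: ignore low-confidence states
            devs.append(abs(y_pred - y0))
    return sum(devs) / len(devs) if devs else 0.0

def monitor(cycles, model, density, e_th, cnt_th):
    """Claim 11: increment a counter per over-threshold cycle, reset it on
    a good cycle; report a malfunction once the counter exceeds cnt_th."""
    cnt = 0
    for states in cycles:
        e = run_cycle(model, density, states)
        if e > e_th:           # criterion of step d)
            cnt += 1           # a warning W would be output here
            if cnt > cnt_th:
                return "malfunction"
        else:
            cnt = 0            # reset the counter Cnt(t)
    return "ok"
```

With an identity "model", a constant density estimator, and three cycles whose predictions all miss by 1.0 against a threshold of 0.5 and CntTh of 2, the sketch reports a malfunction on the third cycle.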
5,942 | 5,942 | 15,682,717 | 2,199 | A multi-channel control system includes at least a primary control microprocessor and a back-up control microprocessor operable to control a device. The primary control microprocessor and the back-up control microprocessor assert control over a controlled device according to a locally stored method of controlling a back-up microprocessor assumption of control of a device. | 1. A method of controlling a primary microprocessor assumption of control of a device in a multi-channel control device comprising the steps of:
entering a control process in one of two possible control states, wherein said two possible control states are a primary microprocessor in-control state and a primary microprocessor not in-control state; evaluating a plurality of conditions dependent upon which of said possible control states is true; entering one of a plurality of actions based on said evaluating of said plurality of conditions, wherein said plurality of actions includes an is channel-in-control output signal wrap-around false check, a take/keep control action, and a give-up control action; and performing an action in response to entering said one of said plurality of actions. 2. The method of claim 1, wherein said step of evaluating a plurality of conditions dependent upon which of said possible control states is true further comprises evaluating a first set of conditions when said control process is in a first control state and a second set of conditions when said control process is in a second control state. 3. The method of claim 2, wherein
said first set of conditions includes at least three conditions and corresponds to the primary microprocessor in-control state and wherein said three conditions are the back-up microprocessor is in-control, multiple channels are in-control or the primary microprocessor controller is healthier than the remote primary microprocessor controller, and all other states; said back-up microprocessor is in-control condition is the highest priority condition; said multiple channels are in-control condition or the primary microprocessor controller is healthier compared to remote primary microprocessor controller is a middle priority condition; and said all other states condition is a lowest priority condition. 4. The method of claim 3, wherein
said method enters the is channel-in-control output signal false check when said back-up microprocessor is in-control condition is true or both the local and the remote channels are in control and the local channel is a predefined channel or the local primary microprocessor controller has a higher health than a latest determined remote primary microprocessor health; and said method enters the give-up control action when said all other states condition is true, said back-up microprocessor is in-control condition is false and said multiple channels are in-control condition is false. 5. The method of claim 4, wherein said method enters a set channel-in-control wrap-around fault flag action when the channel-in-control wrap-around indicates that the channel-in-control is false and wherein said method enters the take/keep control action when the channel-in-control wrap-around indicates that the channel-in-control is true. 6. A method of controlling a back-up microprocessor assumption of control of a device in a multi-channel control device comprising the steps of:
entering a control process in one of two possible control states, wherein said two possible control states are a back-up microprocessor in-control state and a back-up microprocessor not in-control state; evaluating a plurality of conditions dependent upon which of said possible control states is true; entering one of a plurality of actions based on said evaluating of said plurality of conditions, wherein said plurality of actions includes a take/keep control action and a give-up control action; and performing an action in response to entering said one of said plurality of actions. 7. The method of claim 6, wherein said step of evaluating a plurality of conditions dependent upon which of said possible control states is true further comprises evaluating a first set of conditions when said control process is in said back-up microprocessor in-control state and a second set of conditions when said control process is in said back-up microprocessor not in-control state. 8. The method of claim 7, wherein said first set of conditions includes at least three conditions and wherein:
a highest priority condition of said at least three conditions is met when a local hardware wrap-around fault exists within a local channel or when the back-up microprocessor does not meet a minimum health requirement; a second highest priority condition of said at least three conditions is met when multiple remote channels are channel-in-control and the local channel is a predefined channel, or a remote channel is not in-control and when both channels are unhealthy or the back-up microprocessor has been in-control since an initial power up of the control system; and a lowest priority condition of said at least three conditions is met when none of the highest or second highest priority conditions are met. 9. The method of claim 8, wherein
the method enters the give-up control action when the highest priority conditions are met, and when the lowest priority condition is met simultaneous with the second highest priority condition not being met; and the method enters the take/keep control action when the highest and third highest priority conditions are not met and the second highest priority condition is met. 10. The method of claim 7, wherein said second set of conditions includes at least two conditions and wherein said at least two conditions include:
a highest priority condition, said highest priority condition being met at least when a remote channel is not in-control, the remote channel's primary microprocessor controller is not healthy, a local channel's primary microprocessor controller is not healthy, there is no critical fault in the back-up microprocessor, the back-up microprocessor is not disabled, a time since power-up exceeds a set time period, and the back-up microprocessor has not been in-control within a set time period; and a lowest priority condition, said lowest priority condition being met when said highest priority condition is not met. 11. The method of claim 10, wherein the method enters a take/keep control action when the highest priority condition is met and the method enters a give-up control action when the lowest priority condition is met. 12. The method of claim 7, wherein the step of performing an action in response to entering said one of said plurality of actions further comprises said method controlling a controlled device using a back-up microprocessor in response to the method entering the take/keep control action. 13. The method of claim 7, wherein the step of performing an action in response to entering said one of said plurality of actions further comprises said method disabling back-up controls to a controlled device in response to entering the give-up control action. 14. An electrical control configuration comprising:
at least a first primary control microprocessor and a first back-up control microprocessor operable to control a device, said first primary control microprocessor and said first back-up microprocessor being located in a first control channel; a second control channel including at least one control microprocessor operable to control the device; and each of said first primary control microprocessors and said first back-up control microprocessors being arranged as an independent equivalent control channel. 15. The electrical control configuration of claim 14, wherein said first control channel includes a hardware lock operable to prevent said back-up control microprocessor from asserting control when said primary microprocessor is in-control of the device and operable to prevent said primary control microprocessor from asserting control when said back-up microprocessor is in-control of the device. 16. The electrical control configuration of claim 14, wherein said first primary control microprocessor and said first back-up microprocessor are electrically isolated from each other. 17. The electrical control configuration of claim 16, wherein said electrical isolation is a resistive barrier. 18. The electrical control configuration of claim 14, wherein said first control channel includes a redundant primary control microprocessor in-control logic circuit, a redundant back-up control microprocessor in-control logic circuit and a redundant channel-in-control microprocessor circuit. 19. The electrical control configuration of claim 18, wherein said redundant channel-in-control microprocessor is operable to output a channel-in-control signal when at least one fault is present in the redundant primary control microprocessor in-control logic circuit, the redundant back-up control microprocessor in-control logic circuit and the redundant channel-in-control microprocessor circuit. 20. 
The electrical control configuration of claim 14, wherein said second channel mirrors said first channel. | A multi-channel control system includes at least a primary control microprocessor and a back-up control microprocessor operable to control a device. The primary control microprocessor and the back-up control microprocessor assert control over a controlled device according to a locally stored method of controlling a back-up microprocessor assumption of control of a device.1. A method of controlling a primary microprocessor assumption of control of a device in a multi-channel control device comprising the steps of:
entering a control process in one of two possible control states, wherein said two possible control states are a primary microprocessor in-control state and a primary microprocessor not in-control state; evaluating a plurality of conditions dependent upon which of said possible control states is true; entering one of a plurality of actions based on said evaluating of said plurality of conditions, wherein said plurality of actions includes an is channel-in-control output signal wrap-around false check, a take/keep control action, and a give-up control action; and performing an action in response to entering said one of said plurality of actions. 2. The method of claim 1, wherein said step of evaluating a plurality of conditions dependent upon which of said possible control states is true further comprises evaluating a first set of conditions when said control process is in a first control state and a second set of conditions when said control process is in a second control state. 3. The method of claim 2, wherein
said first set of conditions includes at least three conditions and corresponds to the primary microprocessor in-control state and wherein said three conditions are the back-up microprocessor is in-control, multiple channels are in-control or the primary microprocessor controller is healthier than the remote primary microprocessor controller, and all other states; said back-up microprocessor is in-control condition is the highest priority condition; said multiple channels are in-control condition or the primary microprocessor controller is healthier compared to remote primary microprocessor controller is a middle priority condition; and said all other states condition is a lowest priority condition. 4. The method of claim 3, wherein
said method enters the is channel-in-control output signal false check when said back-up microprocessor is in-control condition is true or both the local and the remote channels are in control and the local channel is a predefined channel or the local primary microprocessor controller has a higher health than a latest determined remote primary microprocessor health; and said method enters the give-up control action when said all other states condition is true, said back-up microprocessor is in-control condition is false and said multiple channels are in-control condition is false. 5. The method of claim 4, wherein said method enters a set channel-in-control wrap-around fault flag action when the channel-in-control wrap-around indicates that the channel-in-control is false and wherein said method enters the take/keep control action when the channel-in-control wrap-around indicates that the channel-in-control is true. 6. A method of controlling a back-up microprocessor assumption of control of a device in a multi-channel control device comprising the steps of:
entering a control process in one of two possible control states, wherein said two possible control states are a back-up microprocessor in-control state and a back-up microprocessor not in-control state; evaluating a plurality of conditions dependent upon which of said possible control states is true; entering one of a plurality of actions based on said evaluating of said plurality of conditions, wherein said plurality of actions includes a take/keep control action and a give-up control action; and performing an action in response to entering said one of said plurality of actions. 7. The method of claim 6, wherein said step of evaluating a plurality of conditions dependent upon which of said possible control states is true further comprises evaluating a first set of conditions when said control process is in said back-up microprocessor in-control state and a second set of conditions when said control process is in said back-up microprocessor not in-control state. 8. The method of claim 7, wherein said first set of conditions includes at least three conditions and wherein:
a highest priority condition of said at least three conditions is met when a local hardware wrap-around fault exists within a local channel or when the back-up microprocessor does not meet a minimum health requirement; a second highest priority condition of said at least three conditions is met when multiple remote channels are channel-in-control and the local channel is a predefined channel, or a remote channel is not in-control and when both channels are unhealthy or the back-up microprocessor has been in-control since an initial power up of the control system; and a lowest priority condition of said at least three conditions is met when none of the highest or second highest priority conditions are met. 9. The method of claim 8, wherein
the method enters the give-up control action when the highest priority conditions are met, and when the lowest priority condition is met simultaneous with the second highest priority condition not being met; and the method enters the take/keep control action when the highest and third highest priority conditions are not met and the second highest priority condition is met. 10. The method of claim 7, wherein said second set of conditions includes at least two conditions and wherein said at least two conditions include:
a highest priority condition, said highest priority condition being met at least when a remote channel is not in-control, the remote channel's primary microprocessor controller is not healthy, a local channel's primary microprocessor controller is not healthy, there is no critical fault in the back-up microprocessor, the back-up microprocessor is not disabled, a time since power-up exceeds a set time period, and the back-up microprocessor has not been in-control within a set time period; and a lowest priority condition, said lowest priority condition being met when said highest priority condition is not met. 11. The method of claim 10, wherein the method enters a take/keep control action when the highest priority condition is met and the method enters a give-up control action when the lowest priority condition is met. 12. The method of claim 7, wherein the step of performing an action in response to entering said one of said plurality of actions further comprises said method controlling a controlled device using a back-up microprocessor in response to the method entering the take/keep control action. 13. The method of claim 7, wherein the step of performing an action in response to entering said one of said plurality of actions further comprises said method disabling back-up controls to a controlled device in response to entering the give-up control action. 14. An electrical control configuration comprising:
at least a first primary control microprocessor and a first back-up control microprocessor operable to control a device, said first primary control microprocessor and said first back-up microprocessor being located in a first control channel; a second control channel including at least one control microprocessor operable to control the device; and each of said first primary control microprocessors and said first back-up control microprocessors being arranged as an independent equivalent control channel. 15. The electrical control configuration of claim 14, wherein said first control channel includes a hardware lock operable to prevent said back-up control microprocessor from asserting control when said primary microprocessor is in-control of the device and operable to prevent said primary control microprocessor from asserting control when said back-up microprocessor is in-control of the device. 16. The electrical control configuration of claim 14, wherein said first primary control microprocessor and said first back-up microprocessor are electrically isolated from each other. 17. The electrical control configuration of claim 16, wherein said electrical isolation is a resistive barrier. 18. The electrical control configuration of claim 14, wherein said first control channel includes a redundant primary control microprocessor in-control logic circuit, a redundant back-up control microprocessor in-control logic circuit and a redundant channel-in-control microprocessor circuit. 19. The electrical control configuration of claim 18, wherein said redundant channel-in-control microprocessor is operable to output a channel-in-control signal when at least one fault is present in the redundant primary control microprocessor in-control logic circuit, the redundant back-up control microprocessor in-control logic circuit and the redundant channel-in-control microprocessor circuit. 20. 
The electrical control configuration of claim 14, wherein said second channel mirrors said first channel. | 2,100 |
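The prioritized condition evaluation recited in claims 8 and 9 above (evaluate conditions in priority order, then enter either the take/keep control action or the give-up control action) can be illustrated with a small decision function. This is a hedged sketch under invented assumptions: the boolean flag names are not from the patent, and only the back-up microprocessor in-control branch is shown.

```python
# Hypothetical sketch of the arbitration in claims 8-9: in the back-up
# microprocessor in-control state, prioritized conditions select between
# the give-up control action and the take/keep control action.
# The status-flag names below are illustrative placeholders.

def backup_in_control_action(s):
    """Return the action for the back-up microprocessor in-control state.
    s is a dict of boolean channel-status flags."""
    # Highest priority (claim 8): a local hardware wrap-around fault
    # exists, or the back-up fails its minimum health requirement.
    if s["local_wraparound_fault"] or not s["backup_healthy"]:
        return "give-up"
    # Second highest priority: multiple remote channels claim control and
    # the local channel is the predefined one, or no remote channel is
    # in-control while both channels are unhealthy or the back-up has
    # been in-control since initial power up.
    if (s["multiple_remote_in_control"] and s["local_is_predefined_channel"]) or (
        not s["remote_in_control"]
        and (s["both_channels_unhealthy"] or s["backup_in_control_since_powerup"])
    ):
        return "take/keep"
    # Lowest priority: all other states (claim 9) -> give up control.
    return "give-up"
```

A healthy back-up with no wrap-around fault and no remote channel in-control keeps control; setting the fault flag forces the give-up action regardless of the other flags, reflecting the priority ordering.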
5,943 | 5,943 | 13,283,866 | 2,137 | A data management method of a data storage device having a data management unit different from a data management unit of a user device receives information regarding a storage area of a file to be deleted, from the user device, selects a storage area which matches with the data management unit of the data storage device, from among the storage area of the deleted file, and performs an erasing operation on the selected storage area which matches with the data management unit. | 1. A data management method of a data storage device which has a data management unit different from a data management unit of a user device, the data management method comprising:
receiving from the user device information regarding a storage area of a file to be deleted; selecting from among the storage area of the file to be deleted a storage area which matches with the data management unit of the data storage device; and performing an erasing operation on the selected storage area which matches with the data management unit. 2. The data management method of claim 1, wherein information regarding a storage area, which is mismatched with the data management unit of the data storage device among the storage area of the file to be deleted, is separately managed. 3. The data management method of claim 1, wherein the user device changes information regarding metadata of the file to be deleted to indicate that the file to be deleted is deleted from a high level. 4. The data management method of claim 1, further comprising storing, by the data storage device, information regarding storage areas of at least two files to be deleted in a buffer memory when the information regarding the storage areas of the at least two files to be deleted is provided from the user device. 5. The data management method of claim 4, wherein the selecting of a storage area selects a storage area, matching with the data management unit of the data storage device, from among the storage areas of the at least two files to be deleted which are stored in the buffer memory. 6. The data management method of claim 1, wherein:
the user device manages data by sector unit, the data storage device manages data by page unit, and each page is divided into a plurality of sectors. 7. A data management method for a data storage device which uses a data management unit different from a data management unit of a user device, the data management method comprising:
receiving from the user device information regarding a storage area of a file to be deleted; and marking a storage area which matches with the management unit of the data storage device, as invalid, wherein: the data storage device comprises a data storage unit configured to store data, and a buffer memory configured to temporarily store data to be written in the data storage unit, and data regarding the storage area marked as invalid among the data stored in the buffer memory is not written in the storage unit. 8. The data management method of claim 7, further comprising marking a storage area which is mismatched with the management unit of the data storage device among the storage area of the file to be deleted, as valid,
wherein data regarding the storage area marked as valid among the data stored in the buffer memory is written in the storage unit. 9. The data management method of claim 7, further comprising creating a TRIM manage table which is configured to manage a storage area mismatched with the management unit of the data storage device among the storage area of the file to be deleted. 10. The data management method of claim 9, wherein the TRIM manage table is stored in the buffer memory, and information of the TRIM manage table is controlled in a push scheme. 11. The data management method of claim 7, further comprising storing in the buffer memory, by the data storage device, information regarding storage areas of at least two files to be deleted when the information regarding the storage areas of the at least two files to be deleted is provided from the user device. 12. The data management method of claim 11, wherein the marking of a storage area marks as invalid a storage area which matches with the data management unit of the data storage device. 13. A memory system comprising:
a host configured to generate a TRIM command; and a data storage device configured to perform an erasing operation in response to the TRIM command from the host, wherein the data storage device performs an erasing operation on an area which matches with a data management unit of the data storage device among a storage area which has been designated as an area to be deleted according to the TRIM command. 14. The memory system of claim 13, wherein the data storage device separately manages information regarding an area which is mismatched with the data management unit of the data storage device among the storage area which has been designated as the area to be deleted according to the TRIM command. 15. The memory system of claim 13, wherein:
the data storage device manages data by page unit, the host manages data by sector unit, and each page is divided into a plurality of sectors. 16. The memory system of claim 13, wherein:
the data storage device comprises a mapping table configured to change a logical address, which is provided from the host, to a physical address of the data storage device, and in the mapping table, the storage area which matches with the data management unit of the data storage device among the storage area designated as the area to be deleted is marked as invalid. 17. The memory system of claim 16, wherein in the mapping table, a storage area which is mismatched with the data management unit of the data storage device among the storage area designated as the area to be deleted is marked as valid. 18. The memory system of claim 17, wherein the data storage device further comprises a TRIM manage table configured to manage information regarding the storage area which is mismatched with the data management unit of the data storage device among the storage area designated as the area to be deleted. 19. The memory system of claim 18, wherein the mapping table updates Writing State Information (WSI) on the basis of the TRIM manage table when the storage area managed in the TRIM manage table matches with the data management unit of the data storage device, according to another TRIM command from the host. 20. The memory system of claim 19, wherein the WSI of the mapping table is updated, and information regarding a storage area which matches with the data management unit of the data storage device and is managed in the TRIM manage table is deleted from the TRIM manage table. 21. The memory system of claim 18, wherein the data storage device further comprises a buffer memory configured to store the TRIM manage table and to manage the information stored in the TRIM manage table in a push scheme. 22. The memory system of claim 13, wherein the data storage device comprises a buffer memory configured to store information regarding at least two TRIM commands transferred from the host. 23. The memory system of claim 22, wherein:
the data storage device further comprises at least two flash memories configured to store data, and a control unit configured to control the at least two flash memories, and the control unit controls processing order of the at least two TRIM commands stored in the buffer memory for the at least two flash memories to operate in parallel. 24. A data storage device which is connected to a user device, the data storage device comprising:
a storage unit configured to store data; a buffer memory configured to temporarily store data to be written in the storage unit; and a control unit configured to control the storage unit and the buffer memory, wherein data of a storage area which matches with a data management unit of the storage unit among a storage area designated as an area to be deleted is not written in the storage unit, according to a TRIM command transferred from the user device. 25. The data storage device of claim 24, wherein data of a storage area which is mismatched with the data management unit of the storage unit among the storage area designated as the area to be deleted is written in the storage unit. 26. The data storage device of claim 24, further comprising a mapping table configured to change a logical address, which is provided from the user device, to a physical address of the data storage device,
wherein: in the mapping table, Writing State Information (WSI) of the storage area which matches with the data management unit of the storage unit among the storage area designated as the area to be deleted is marked as invalid, and in the mapping table, the WSI of a storage area which is mismatched with the data management unit of the storage unit among the storage area designated as the area to be deleted is marked as valid. 27. The data storage device of claim 26, wherein the data storage device further comprises a TRIM manage table configured to manage a storage area which is mismatched with the data management unit of the data storage device and marked as valid in the mapping table. 28. The data storage device of claim 27, wherein the mapping table updates the WSI on the basis of the TRIM manage table when the storage area managed in the TRIM manage table matches with the data management unit of the data storage device, according to another TRIM command transferred from the user device. 29. The data storage device of claim 24, further comprising a buffer memory configured to store information regarding at least two TRIM commands when the at least two TRIM commands are transferred from the user device. 30. The data storage device of claim 29, wherein:
the storage unit comprises at least two flash memories, and the control unit controls processing order of the at least two TRIM commands stored in the buffer memory for the at least two flash memories to operate in parallel. 31. A data management method for a user device that stores data of a file in a data storage device and that has a data management unit different from that of the data storage device, the method comprising:
changing metadata of a delete-requested file in response to a file delete request; determining whether information about a storage region of the delete-requested file corresponds to a data management unit of the data storage device; and transmitting information about a region corresponding to the data management unit of the data storage device among information about the storage region of the delete-requested file to the data storage device. 32. The method of claim 31, wherein the changing of the metadata of the delete-requested file represents that the delete-requested file is deleted in a high level. 33. The method of claim 31, further comprising generating a TRIM manage table configured to manage information about a region that does not correspond to the data management unit of the data storage device among the information about the storage region of the delete-requested file. 34. The method of claim 31, wherein the information about the storage region of the delete-requested file is provided from a mapping table of the data storage device. 35. A user device that stores data of a file in a data storage device, the user device comprising:
a file system configured to manage a file by a unit different from a data management unit of the data storage device and to change information about metadata of a delete-requested file; and a TRIM manage module configured to provide information about a storage region corresponding to the data management unit of the data storage device among information about a storage region of the delete-requested file. 36. The user device of claim 35, wherein a changing of the information about the metadata of the delete-requested file represents that the delete-requested file is deleted in a high level. 37. The user device of claim 35, further comprising a TRIM manage table configured to manage information about a region that does not correspond to the data management unit of the data storage device among the information about the storage region of the delete-requested file. 38. The user device of claim 37, further comprising a host memory configured to store the TRIM manage table, wherein the TRIM manage table stored in the host memory is managed through a pushing method. 39. The user device of claim 35, further comprising a host memory configured to store information about a storage region of at least two delete-requested files. 40. The user device of claim 39, wherein the TRIM manage module provides information about a region corresponding to the data management unit of the data storage device among the information about the storage region of the at least two delete-requested files stored in the host memory. 41. The user device of claim 35, wherein the information about the storage region of the at least two delete-requested files is provided from a mapping table of the data storage device. 42. A memory system comprising:
a host configured to support a TRIM operation; and a data storage device configured to perform an erase operation in response to a TRIM command from the host, wherein the host provides only information about a storage region corresponding to a data management unit of the data storage device among information about a storage region of a delete-requested file. 43. The memory system of claim 42, wherein the host separately manages information about a region that does not correspond to the data management unit of the data storage device among the information about the storage region of the delete-requested file. 44. The memory system of claim 42, wherein the host manages a file by a sector unit;
the data storage device manages data of a file by a page unit; and each page is divided into a plurality of sectors. 45. The memory system of claim 42, wherein the host comprises:
a file system configured to manage a file by a sector unit and to change information about metadata of a delete-requested file; and a TRIM manage module configured to select information about sectors corresponding to a page unit of the data storage device, among sectors of the delete-requested file. 46. The memory system of claim 45, wherein the changing of the information about the metadata of the delete-requested file represents that the delete-requested file is deleted in a high level. 47. The memory system of claim 45, wherein the host further comprises a TRIM manage table managing information about a partial sector that does not correspond to the page unit among the sectors of the delete-requested file. 48. The memory system of claim 47, wherein the TRIM manage table manages information about the partial sector and information about a sector in the same page as the partial sector. 49. The memory system of claim 45, wherein the host further comprises a host memory configured to store information about sectors of at least two files that are delete-requested at respectively different times. 50. The memory system of claim 49, wherein the TRIM manage module selects information about a sector address corresponding to the management unit of the data storage device among information about sectors of the at least two files that are delete-requested at respectively different times, which is stored in the host memory. 51. A data management erasing method for a flash memory system, the flash memory system having a host file system configured to communicate with a flash memory storage device, the data management method comprising:
providing by the host file system to the flash memory storage device a TRIM command that informs the flash memory storage device which blocks of data are no longer considered in use, wherein the TRIM command includes a sector address for designating a file for which deletion has been requested; receiving the TRIM command by the flash memory storage device, translating the sector address into a page address, and marking a page of the flash memory storage device that will be deleted, as invalid; and performing an erasing operation by the flash memory device on the page marked as invalid. 52. The data management erasing method of claim 51, wherein the erasing operation is performed at an idle time when there is no request from the host file system to the flash memory storage device. 53. The data management erasing method of claim 51, wherein upon receipt of a file deletion request by the host file system, the host file system changes metadata of the file for which deletion has been requested such that when an application subsequently accesses a corresponding file of the host file system the application will be provided information indicating that the corresponding file has been already deleted.
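The alignment scheme that runs through these claims — storage areas matching the device's page unit are invalidated at once, while mismatched partial sectors are parked in a TRIM manage table until a later TRIM command completes their page — can be sketched as follows. This is an illustrative Python sketch: the 8-sectors-per-page geometry and the names `split_trim_range` and `TrimManageTable` are assumptions for exposition, not the patented implementation.

```python
# Illustrative sketch of the claimed TRIM alignment scheme; geometry and all
# names are assumptions, not the patent's implementation.

SECTORS_PER_PAGE = 8  # "each page is divided into a plurality of sectors"

def split_trim_range(start_sector, sector_count, sectors_per_page=SECTORS_PER_PAGE):
    """Split a TRIM'd sector range into fully covered pages (which match the
    device's management unit and can be marked invalid now) and leftover
    sectors that only partially cover a page."""
    end = start_sector + sector_count
    first_full = -(-start_sector // sectors_per_page)  # ceil: first aligned page
    last_full = end // sectors_per_page                # exclusive upper bound
    full_pages = list(range(first_full, last_full))
    covered = set(full_pages)
    leftovers = [s for s in range(start_sector, end)
                 if s // sectors_per_page not in covered]
    return full_pages, leftovers

class TrimManageTable:
    """Tracks mismatched partial sectors; reports a page once every one of its
    sectors has been TRIM'd by some command, so the page can then be marked
    invalid in the mapping table."""
    def __init__(self, sectors_per_page=SECTORS_PER_PAGE):
        self.sectors_per_page = sectors_per_page
        self.pending = {}  # page number -> set of TRIM'd sectors in that page

    def add(self, sectors):
        completed = []
        for s in sectors:
            page = s // self.sectors_per_page
            bucket = self.pending.setdefault(page, set())
            bucket.add(s)
            if len(bucket) == self.sectors_per_page:
                completed.append(page)  # page now matches the management unit
                del self.pending[page]
        return completed
```

For example, TRIMming sectors 3 through 20 releases page 1 immediately, while the partial sectors of pages 0 and 2 wait in the table; a later TRIM covering the remaining sectors of those pages then releases them, mirroring the update-on-another-TRIM behavior of claims 19 and 28.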
5,944 | 5,944 | 15,237,928 | 2,183 | In an embodiment, a host device includes: a transceiver to communicate information on an interconnect; a controller to control operation of the transceiver and to be a master for the interconnect; and a role transfer logic to cause a secondary device to be the master for the interconnect when at least a portion of the host device is to enter into a low power state. Other embodiments are described and claimed. | 1. A machine-readable medium having stored thereon instructions, which if performed by a controller cause the controller to perform a method comprising:
identifying a trigger for a master device to enter into a low power state, the master device a bus master for a bus; initiating a role transfer from the master device to a secondary device, the role transfer to cause bus master responsibility to be transferred from the master device to the secondary device; sending at least a portion of a context of the master device, configuration information and policy information from the master device to the secondary device; enabling a wake detection circuit of the master device; and causing the master device to enter into the low power state while the secondary device has the bus master responsibility. 2. The machine-readable medium of claim 1, wherein the method further comprises preventing one or more bus characteristics to be changed during the role transfer. 3. The machine-readable medium of claim 1, wherein the method further comprises:
identifying a wake indication from the secondary device, via the wake detection circuit; and sending a signal to a power management unit to cause the master device to exit the low power state responsive to identifying the wake indication. 4. The machine-readable medium of claim 3, wherein the method further comprises after exiting from the low power state, causing the master device to obtain the bus master responsibility from the secondary device. 5. The machine-readable medium of claim 1, wherein the method further comprises handling in the secondary device one or more communications from a sensor coupled to the bus while the master device is in the low power state, without waking the master device from the low power state. 6. A computing system comprising:
a host controller to couple to an interconnect to which a plurality of devices may be coupled, the host controller including:
a first domain having:
a first driver to drive first information onto the interconnect; and
a master controller to be a bus master for the interconnect, wherein the master controller is to perform a role transfer to a second controller to enable the second controller to be the bus master for the interconnect while at least a portion of the host controller is in a low power state;
a second domain having:
a wake detection circuit to identify a wake indication from the second controller, to indicate that the second controller is to perform a second role transfer to cause the master controller to be the bus master for the interconnect;
the second controller to couple to the interconnect and to be a bus master for the interconnect, wherein the second controller has a lower power consumption level than the host controller; and at least one sensor to couple to the interconnect, wherein the second controller is to handle a communication from the at least one sensor while at least the portion of the host controller is in the low power state. 7. The computing system of claim 6, wherein the master controller is to send at least a portion of a context of the master controller, policy information and interconnect characteristic information to the second controller before the host controller is to enter into the low power state. 8. The computing system of claim 6, wherein the host controller further comprises a control logic to prevent a change in one or more characteristics of the interconnect based at least in part on the wake detection circuit identification of the wake indication. 9. The computing system of claim 6, wherein the host controller further comprises a storage to store a pattern corresponding to the wake indication, wherein the wake detection circuit is to identify the wake indication when a communication on the interconnect matches the pattern. 10. A host device for controlling operation on an interconnect comprising:
a transceiver to communicate information on the interconnect, the interconnect to couple to the host device; a controller to control operation of the transceiver and to be a master for the interconnect; and a role transfer logic to cause a secondary device to be the master for the interconnect when at least a portion of the host device is to enter into a low power mode. 11. The host device of claim 10, wherein the role transfer logic is to cause the secondary device to be the master for the interconnect based at least in part on an activity level on the interconnect, the secondary device having a lower power consumption than the host device. 12. The host device of claim 10, wherein the role transfer logic is to send at least a portion of a context of the controller, policy information and interconnect characteristic information to the secondary device, to cause the secondary device to be the master for the interconnect. 13. The host device of claim 10, further comprising a wake detection circuit to identify a wake indication from the secondary device, to indicate that, after the secondary device is caused to be the master for the interconnect, the secondary device is to transfer master responsibility for the interconnect back to the controller. 14. The host device of claim 13, further comprising a control logic to prevent a change in one or more characteristics of the interconnect based at least in part on the wake detection circuit identification of the wake indication. 15. The host device of claim 13, further comprising a storage to store a pattern corresponding to the wake indication, wherein the wake detection circuit is to identify the wake indication when a communication on the interconnect matches the pattern. 16. The host device of claim 15, wherein the host device comprises a master device, the master device having a first voltage domain and a second voltage domain. 17. 
The host device of claim 16, wherein the first voltage domain comprises the wake detection circuit and the storage, and the second voltage domain comprises the transceiver and the controller. 18. The host device of claim 17, wherein the host device further comprises a power unit to:
provide a first voltage to the first voltage domain, the first voltage comprising an always on voltage; and provide a second voltage to the second voltage domain, the second voltage to be disabled when the host device is in the low power mode. 19. The host device of claim 13, wherein the wake detection circuit is to send a wake signal to a power management unit responsive to the match, wherein the power management unit is to cause the controller to exit the low power mode responsive to the wake signal. 20. The host device of claim 19, wherein the power management unit is to inform a system software regarding the wake signal, the system software to instruct the power management unit to wake the host device.
identifying a trigger for a master device to enter into a low power state, the master device a bus master for a bus; initiating a role transfer from the master device to a secondary device, the role transfer to cause bus master responsibility to be transferred from the master device to the secondary device; sending at least a portion of a context of the master device, configuration information and policy information from the master device to the secondary device; enabling a wake detection circuit of the master device; and causing the master device to enter into the low power state while the secondary device has the bus master responsibility. 2. The machine-readable medium of claim 1, wherein the method further comprises preventing one or more bus characteristics to be changed during the role transfer. 3. The machine-readable medium of claim 1, wherein the method further comprises:
identifying a wake indication from the secondary device, via the wake detection circuit; and sending a signal to a power management unit to cause the master device to exit the low power state responsive to identifying the wake indication. 4. The machine-readable medium of claim 3, wherein the method further comprises after exiting from the low power state, causing the master device to obtain the bus master responsibility from the secondary device. 5. The machine-readable medium of claim 1, wherein the method further comprises handling in the secondary device one or more communications from a sensor coupled to the bus while the master device is in the low power state, without waking the master device from the low power state. 6. A computing system comprising:
a host controller to couple to an interconnect to which a plurality of devices may be coupled, the host controller including:
a first domain having:
a first driver to drive first information onto the interconnect; and
a master controller to be a bus master for the interconnect, wherein the master controller is to perform a role transfer to a second controller to enable the second controller to be the bus master for the interconnect while at least a portion of the host controller is in a low power state;
a second domain having:
a wake detection circuit to identify a wake indication from the second controller, to indicate that the second controller is to perform a second role transfer to cause the master controller to be the bus master for the interconnect;
the second controller to couple to the interconnect and to be a bus master for the interconnect, wherein the second controller has a lower power consumption level than the host controller; and at least one sensor to couple to the interconnect, wherein the second controller is to handle a communication from the at least one sensor while at least the portion of the host controller is in the low power state. 7. The computing system of claim 6, wherein the master controller is to send at least a portion of a context of the master controller, policy information and interconnect characteristic information to the second controller before the host controller is to enter into the low power state. 8. The computing system of claim 6, wherein the host controller further comprises a control logic to prevent a change in one or more characteristics of the interconnect based at least in part on the wake detection circuit identification of the wake indication. 9. The computing system of claim 6, wherein the host controller further comprises a storage to store a pattern corresponding to the wake indication, wherein the wake detection circuit is to identify the wake indication when a communication on the interconnect matches the pattern. 10. A host device for controlling operation on an interconnect comprising:
a transceiver to communicate information on the interconnect, the interconnect to couple to the host device; a controller to control operation of the transceiver and to be a master for the interconnect; and a role transfer logic to cause a secondary device to be the master for the interconnect when at least a portion of the host device is to enter into a low power mode. 11. The host device of claim 10, wherein the role transfer logic is to cause the secondary device to be the master for the interconnect based at least in part on an activity level on the interconnect, the secondary device having a lower power consumption than the host device. 12. The host device of claim 10, wherein the role transfer logic is to send at least a portion of a context of the controller, policy information and interconnect characteristic information to the secondary device, to cause the secondary device to be the master for the interconnect. 13. The host device of claim 10, further comprising a wake detection circuit to identify a wake indication from the secondary device, to indicate that, after the secondary device is caused to be the master for the interconnect, the secondary device is to transfer master responsibility for the interconnect back to the controller. 14. The host device of claim 13, further comprising a control logic to prevent a change in one or more characteristics of the interconnect based at least in part on the wake detection circuit identification of the wake indication. 15. The host device of claim 13, further comprising a storage to store a pattern corresponding to the wake indication, wherein the wake detection circuit is to identify the wake indication when a communication on the interconnect matches the pattern. 16. The host device of claim 15, wherein the host device comprises a master device, the master device having a first voltage domain and a second voltage domain. 17. 
The host device of claim 16, wherein the first voltage domain comprises the wake detection circuit and the storage, and the second voltage domain comprises the transceiver and the controller. 18. The host device of claim 17, wherein the host device further comprises a power unit to:
provide a first voltage to the first voltage domain, the first voltage comprising an always on voltage; and provide a second voltage to the second voltage domain, the second voltage to be disabled when the host device is in the low power mode. 19. The host device of claim 13, wherein the wake detection circuit is to send a wake signal to a power management unit responsive to the match, wherein the power management unit is to cause the controller to exit the low power mode responsive to the wake signal. 20. The host device of claim 19, wherein the power management unit is to inform a system software regarding the wake signal, the system software to instruct the power management unit to wake the host device.
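The role-transfer and wake-detection flow recited in the claims above (transfer context to a low-power secondary, arm a wake detection circuit, sleep, and wake on a stored pattern match) can be sketched as follows. This is a minimal illustrative sketch only: the class names, `WAKE_PATTERN`, and method signatures are hypothetical stand-ins, not part of the patent text.

```python
# Hypothetical sketch of the claimed role-transfer flow; all names are
# illustrative stand-ins for the claimed master/secondary devices.

WAKE_PATTERN = b"\x7e\x01"  # stored pattern corresponding to the wake indication


class SecondaryDevice:
    """Low-power controller that temporarily holds bus master responsibility."""

    def __init__(self):
        self.is_master = False
        self.context = None

    def accept_role(self, context, config, policy):
        # Receive at least a portion of the master context, configuration
        # information and policy information (claim 1 sequence).
        self.context = (context, config, policy)
        self.is_master = True


class MasterDevice:
    def __init__(self, secondary):
        self.secondary = secondary
        self.is_master = True
        self.low_power = False
        self.wake_detect_enabled = False

    def enter_low_power(self, context, config, policy):
        # Claim 1 sequence: initiate role transfer, send context,
        # enable wake detection, then enter the low power state.
        self.secondary.accept_role(context, config, policy)
        self.is_master = False
        self.wake_detect_enabled = True
        self.low_power = True

    def on_bus_traffic(self, frame):
        # Claim 15 style detection: wake only when a communication on the
        # interconnect matches the stored pattern, then take the master
        # responsibility back (second role transfer).
        if self.wake_detect_enabled and frame == WAKE_PATTERN:
            self.low_power = False
            self.secondary.is_master = False
            self.is_master = True
            return True
        return False
```

A usage pass through the sketch mirrors the claim order: transfer the role and sleep, ignore non-matching traffic, then wake and reclaim mastership on a pattern match.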
5,945 | 5,945 | 15,174,376 | 2,199 | The current document is directed to an efficient and non-blocking mechanism for flow control within a multi-processor or multi-core processor with hierarchical memory caches. Traditionally, a centralized shared-computational-resource access pool, accessed using a locking operation, is used to control access to a shared computational resource within a multi-processor system or multi-core processor. The efficient and non-blocking mechanism for flow control, to which the current document is directed, distributes local shared-computational-resource access pools to each core of a multi-core processor and/or to each processor of a multi-processor system, avoiding significant computational overheads associated with cache-controller contention-control for a traditional, centralized access pool and associated with use of locking operations for access to the access pool. | 1. A flow-control component of a multi-processing-entity computer system, the flow-control component comprising:
a shared computational resource; two or more local access pools, together comprising a distributed access pool, each local access pool uniquely associated with a processing entity; and a process or thread that accesses the shared computational resource when a local access pool associated with the processing entity on which the process or thread executes contains at least one shared-computational-resource access, and is therefore not exhausted, and when the process or thread first removes a shared-computational-resource access from the local access pool before accessing the shared computational resource. 2. The flow-control component of claim 1 wherein the shared computational resource is a computer-system component with an electronic interface through which a process or thread may conduct an electronic transaction selected from among:
receiving data for the computational resource;
transmitting data to the computational resource; and
issuing commands to the computational resource. 3. The flow-control component of claim 2 wherein computational resources include:
I/O devices;
networking devices;
data-storage devices; and
processor-controlled devices. 4. The flow-control component of claim 1 wherein each local access pool maintains a count, having a value stored in a main memory, that represents the number of shared-computational-resource accesses contained in the access pool. 5. The flow-control component of claim 4 wherein only the local cache of the processing entity associated with the local access pool contains a copy of the count maintained by the local access pool. 6. The flow-control component of claim 4 wherein a process or thread removes a shared-computational-resource access from a local access pool, a copy of the count of which is stored in the local cache of a processing entity on which the process or thread is executing, by carrying out an atomic operation that returns a value stored in the count maintained by the local access pool at the start of the atomic operation and that changes the value stored in the count maintained by the local access pool. 7. The flow-control component of claim 6 wherein the atomic operation changes the value of the count by one of:
decrementing the value stored in the count;
incrementing the value stored in the count;
adding a value other than one to the value stored in the count; and
subtracting a value other than one from the value stored in the count. 8. The flow-control component of claim 6 wherein the atomic operation is an atomic increment instruction. 9. The flow-control component of claim 4 wherein, when the count falls below 1, the flow-control component attempts to transfer shared-computational-resource accesses from one or more processing entities other than the processing entity on which the process or thread executes. 10. A method that controls a rate of access to a shared computational resource in a multi-processing-entity computer system, the method comprising:
initializing a distributed access pool comprising two or more local access pools, each uniquely associated with a processing entity, to contain a number of shared-computational-resource accesses distributed among the local access pools; and removing, by a process or thread, a shared-computational-resource access from a local access pool associated with a processing entity on which the process or thread executes prior to accessing the shared computational resource. 11. The method of claim 10 wherein the shared computational resource is a computer-system component with an electronic interface through which a process or thread may conduct an electronic transaction selected from among:
receiving data for the computational resource;
transmitting data to the computational resource; and
issuing commands to the computational resource. 12. The method of claim 11 wherein computational resources include:
I/O devices;
networking devices;
data-storage devices; and
processor-controlled devices. 13. The method of claim 11 wherein each local access pool maintains a count, having a value stored in a main memory that represents the number of shared-computational-resource accesses contained in the access pool. 14. The method of claim 13 wherein only the local cache of the processing entity associated with the local access pool contains a copy of the count maintained by the local access pool. 15. The method of claim 13 wherein a process or thread removes a shared-computational-resource access from a local access pool, a copy of the count of which is stored in the local cache of a processing entity on which the process or thread is executing, by carrying out an atomic operation that returns a value stored in the count maintained by the local access pool at the start of the atomic operation and that changes the value stored in the count maintained by the local access pool. 16. The method of claim 15 wherein the atomic operation changes the value of the count by one of:
decrementing the value stored in the count;
incrementing the value stored in the count;
adding a value other than one to the value stored in the count; and
subtracting a value other than one from the value stored in the count. 17. The method of claim 15 wherein the atomic operation is an atomic increment instruction. 18. The method of claim 13 wherein, when the count falls below 1, the flow-control component attempts to transfer shared-computational-resource accesses from one or more processing entities other than the processing entity on which the process or thread executes. 19. Computer instructions, stored within a data-storage component of a multi-processing-entity computer system, that, when executed by the processing entities, control the multi-processing-entity computer system to control a rate of access to a shared computational resource by:
initializing a distributed access pool comprising two or more local access pools, each uniquely associated with a processing entity, to contain a number of shared-computational-resource accesses distributed among the local access pools; and removing, by a process or thread, a shared-computational-resource access from a local access pool associated with a processing entity on which the process or thread executes prior to accessing the shared-computational-resource. 20. The computer instructions of claim 19 wherein the shared computational resource is a computer-system component with an electronic interface through which a process or thread may conduct an electronic transaction selected from among receiving data for the computational resource, transmitting data to the computational resource, and issuing commands to the computational resource; and wherein computational resources include
I/O devices,
networking devices,
data-storage devices, and
processor-controlled devices. | The current document is directed to an efficient and non-blocking mechanism for flow control within a multi-processor or multi-core processor with hierarchical memory caches. Traditionally, a centralized shared-computational-resource access pool, accessed using a locking operation, is used to control access to a shared computational resource within a multi-processor system or multi-core processor. The efficient and non-blocking mechanism for flow control, to which the current document is directed, distributes local shared-computational-resource access pools to each core of a multi-core processor and/or to each processor of a multi-processor system, avoiding significant computational overheads associated with cache-controller contention-control for a traditional, centralized access pool and associated with use of locking operations for access to the access pool.1. A flow-control component of a multi-processing-entity computer system, the flow-control component comprising:
a shared computational resource; two or more local access pools, together comprising a distributed access pool, each local access pool uniquely associated with a processing entity; and a process or thread that accesses the shared computational resource when a local access pool associated with the processing entity on which the process or thread executes contains at least one shared-computational-resource access, and is therefore not exhausted, and when the process or thread first removes a shared-computational-resource access from the local access pool before accessing the shared computational resource. 2. The flow-control component of claim 1 wherein the shared computational resource is a computer-system component with an electronic interface through which a process or thread may conduct an electronic transaction selected from among:
receiving data for the computational resource;
transmitting data to the computational resource; and
issuing commands to the computational resource. 3. The flow-control component of claim 2 wherein computational resources include:
I/O devices;
networking devices;
data-storage devices; and
processor-controlled devices. 4. The flow-control component of claim 1 wherein each local access pool maintains a count, having a value stored in a main memory, that represents the number of shared-computational-resource accesses contained in the access pool. 5. The flow-control component of claim 4 wherein only the local cache of the processing entity associated with the local access pool contains a copy of the count maintained by the local access pool. 6. The flow-control component of claim 4 wherein a process or thread removes a shared-computational-resource access from a local access pool, a copy of the count of which is stored in the local cache of a processing entity on which the process or thread is executing, by carrying out an atomic operation that returns a value stored in the count maintained by the local access pool at the start of the atomic operation and that changes the value stored in the count maintained by the local access pool. 7. The flow-control component of claim 6 wherein the atomic operation changes the value of the count by one of:
decrementing the value stored in the count;
incrementing the value stored in the count;
adding a value other than one to the value stored in the count; and
subtracting a value other than one from the value stored in the count. 8. The flow-control component of claim 6 wherein the atomic operation is an atomic increment instruction. 9. The flow-control component of claim 4 wherein, when the count falls below 1, the flow-control component attempts to transfer shared-computational-resource accesses from one or more processing entities other than the processing entity on which the process or thread executes. 10. A method that controls a rate of access to a shared computational resource in a multi-processing-entity computer system, the method comprising:
initializing a distributed access pool comprising two or more local access pools, each uniquely associated with a processing entity, to contain a number of shared-computational-resource accesses distributed among the local access pools; and removing, by a process or thread, a shared-computational-resource access from a local access pool associated with a processing entity on which the process or thread executes prior to accessing the shared computational resource. 11. The method of claim 10 wherein the shared computational resource is a computer-system component with an electronic interface through which a process or thread may conduct an electronic transaction selected from among:
receiving data for the computational resource;
transmitting data to the computational resource; and
issuing commands to the computational resource. 12. The method of claim 11 wherein computational resources include:
I/O devices;
networking devices;
data-storage devices; and
processor-controlled devices. 13. The method of claim 11 wherein each local access pool maintains a count, having a value stored in a main memory that represents the number of shared-computational-resource accesses contained in the access pool. 14. The method of claim 13 wherein only the local cache of the processing entity associated with the local access pool contains a copy of the count maintained by the local access pool. 15. The method of claim 13 wherein a process or thread removes a shared-computational-resource access from a local access pool, a copy of the count of which is stored in the local cache of a processing entity on which the process or thread is executing, by carrying out an atomic operation that returns a value stored in the count maintained by the local access pool at the start of the atomic operation and that changes the value stored in the count maintained by the local access pool. 16. The method of claim 15 wherein the atomic operation changes the value of the count by one of:
decrementing the value stored in the count;
incrementing the value stored in the count;
adding a value other than one to the value stored in the count; and
subtracting a value other than one from the value stored in the count. 17. The method of claim 15 wherein the atomic operation is an atomic increment instruction. 18. The method of claim 13 wherein, when the count falls below 1, the flow-control component attempts to transfer shared-computational-resource accesses from one or more processing entities other than the processing entity on which the process or thread executes. 19. Computer instructions, stored within a data-storage component of a multi-processing-entity computer system, that, when executed by the processing entities, control the multi-processing-entity computer system to control a rate of access to a shared computational resource by:
initializing a distributed access pool comprising two or more local access pools, each uniquely associated with a processing entity, to contain a number of shared-computational-resource accesses distributed among the local access pools; and removing, by a process or thread, a shared-computational-resource access from a local access pool associated with a processing entity on which the process or thread executes prior to accessing the shared-computational-resource. 20. The computer instructions of claim 19 wherein the shared computational resource is a computer-system component with an electronic interface through which a process or thread may conduct an electronic transaction selected from among receiving data for the computational resource, transmitting data to the computational resource, and issuing commands to the computational resource; and wherein computational resources include
I/O devices,
networking devices,
data-storage devices, and
processor-controlled devices. | 2,100 |
5,946 | 5,946 | 15,153,935 | 2,165 | System, method and program product for backing up a plurality of data files from a first server to a second server via a network. A determination is made that more than one compressed data file at the second server, downloaded by the first server, is waiting to be decompressed. A determination is made whether an amount of available processor resource in the second server exceeds a predetermined threshold. If the amount of available processor resource in the second server exceeds the predetermined threshold, a plurality of data decompression programs are invoked in the second server to decompress the plurality of compressed data files substantially concurrently, and data updates in the decompressed data files are applied to corresponding files in the second server. | 1. A method for processing two or more data update files at a primary server for transmission to a backup server, the two or more data update files including data updates to one or more data files stored at both the primary server and the backup server, the backup server being coupled to the primary server via a network, the method comprising:
providing at least one support service for at least one of creating, integrating, hosting, maintaining, and deploying computer readable code in a computer system comprising a processor, wherein the processor carries out instructions contained in the code causing the computer system to perform the method that further comprises: the primary server determining that the two or more data update files at the primary server are waiting to be compressed; the primary server determining if the primary server has more than a predetermined level of available processor power and if the network has more than a predetermined level of available bandwidth, and if so, the primary server compressing the two or more data update files at least partially in parallel, and if not, the primary server compressing the two or more data update files sequentially; and the primary server sending to the backup server via the network the two or more data update files which have been compressed so the backup server can update the corresponding one or more data files at the backup server. 2. The method of claim 1, wherein the steps of the primary server determining if the primary server has more than a predetermined level of available processor power and if the network has more than a predetermined level of available bandwidth, and if so, the primary server compressing the two or more data update files at least partially in parallel, and if not, the primary server compressing the two or more data update files sequentially, comprise the steps of:
the primary server beginning to compress one of the two or more data update files, and before completion of the compression of the one data update file, the primary server determining if the primary server has more than a predetermined level of available processor power to compress another of the two or more data update files and if the network has more than a predetermined level of available bandwidth, and if so, the primary server beginning to compress another of the two or more data update files before completion of the compression of the one data update file, and if not, the primary server postponing compression of any other of the two or more data update files until after completing compression of the one data update file. 3. A computer system for processing two or more data update files at a primary server for transmission to a backup server, the two or more data update files including data updates to one or more data files stored at both the primary server and the backup server, the backup server being coupled to the primary server via a network, the computer system comprising:
a central processing unit (CPU); a memory coupled to the CPU; and a computer readable hardware storage device coupled to the CPU; first program instructions, for execution at the primary server, to determine that the two or more data update files at the primary server are waiting to be compressed; second program instructions, for execution at the primary server, to determine if the primary server has more than a predetermined level of available processor power and if the network has more than a predetermined level of available bandwidth, and if so, compress the two or more data update files at least partially in parallel, and if not, compress the two or more data update files sequentially; and third program instructions, for execution at the primary server, to send to the backup server via the network the two or more data update files which have been compressed so the backup server can update the corresponding one or more data files at the backup server; and wherein the first, second and third program instructions are stored on the computer readable hardware storage device. 4.
The computer system of claim 3, wherein the second program instructions determine if the primary server has more than a predetermined level of available processor power and if the network has more than a predetermined level of available bandwidth, and if so, compress the two or more data update files at least partially in parallel, and if not, compress the two or more data update files sequentially, by beginning to compress one of the two or more data update files, and before completion of the compression of the one data update file, determining if the primary server has more than a predetermined level of available processor power to compress another of the two or more data update files and if the network has more than a predetermined level of available bandwidth, and if so, beginning to compress another of the two or more data update files before completion of the compression of the one data update file, and if not, postponing compression of any other of the two or more data update files until after completing compression of the one data update file. | 2,100
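The compression-scheduling policy this record's claims describe (check processor headroom and network bandwidth before starting each additional file; start it in parallel only when both exceed a threshold, otherwise defer it until the current file completes) can be sketched as follows. This is a minimal illustration, not the patented implementation: the threshold values, the resource-probe inputs, and all function names are hypothetical assumptions, and "parallel" is modeled simply as membership in the concurrent batch rather than as actual threading.

```python
# Hypothetical sketch of the scheduling decision in claims 1-4.
# Threshold values and resource inputs are assumptions; the claims do
# not specify how headroom or bandwidth is measured.
import zlib

CPU_HEADROOM_THRESHOLD = 0.5   # fraction of processor power that must be free
BANDWIDTH_THRESHOLD = 10.0     # spare network capacity required (e.g. Mbit/s)

def schedule_compression(files, cpu_headroom, spare_bandwidth):
    """Return (per-file scheduling decisions, compressed payloads).

    Per claim 2, the resource check is repeated before each additional
    file: only when both processor power and bandwidth exceed their
    thresholds does the next file begin before the previous one
    completes ("parallel"); otherwise it is postponed ("deferred").
    """
    mode, out = [], []
    for i, data in enumerate(files):
        if i == 0:
            mode.append("start")      # the first file always begins
        elif (cpu_headroom > CPU_HEADROOM_THRESHOLD
              and spare_bandwidth > BANDWIDTH_THRESHOLD):
            mode.append("parallel")   # begin before the previous completes
        else:
            mode.append("deferred")   # postpone until the previous completes
        out.append(zlib.compress(data))
    return mode, out
```

A caller at the primary server would invoke this once the update files are queued, then transmit the compressed payloads to the backup server.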
5,947 | 5,947 | 15,262,877 | 2,119 | A system and method for advanced digital economization for an HVAC system having an economizer. A digital processing unit is configured to open a damper of an economizer within a dead-band range that allows for preemptive cooling prior to a call for cooling. This economization strategy allows for free cooling (outside air) without having to pay energy costs for cooled (air-conditioned) air. The system and method can be used with or without demand control ventilation (DCV). The method also includes a “self-learning” strategy with outside air and return air sensor to regularly sense past economizer damper modifications and average out recent readings to help set the dead-band range. The method can include the ability to work in conjunction with a variable supply fan speed control, provide fault detection, self-correct, auto-configure, and report system status. | 1. A method to reduce the energy usage of an HVAC system that provides one or more of the following functions: heating, cooling, or ventilation to an indoor space within a building where the existing HVAC system provides heating when a call for heating is received and provides cooling when a call for cooling is received, as well as an economizer function tied to the ventilation function that provides fresh air and free cooling benefit to the indoor building space; the method comprising:
providing a controller that can receive sensor data, receive input data from the existing HVAC system that indicates an operating mode, and can send data to the HVAC system for the purposes of operating and monitoring status of the ventilation, economizer, heating, or cooling functions; providing an outside air sensor that is controlled by the controller, the outside air sensor is capable of sensing the outside air conditions and a supply air sensor that is capable of sensing the supply air conditions; establishing a dead-band range for an HVAC system dead-band state to supply predictive free cooling wherein the dead-band range is based on optimum energy savings for free cooling benefit; and modifying the economizer function through the controller of the HVAC system to provide predictive free cooling benefit when the HVAC system is in the dead-band state prior to a call for cooling. 2. The method according to claim 1 wherein the ventilation function is modulated between a fully closed and a fully opened position to maintain a specific supply air setpoint. 3. The method according to claim 1 wherein a minimum ventilation rate is established. 4. The method according to claim 3 where the minimum ventilation rate is overridden during the dead-band range to allow higher levels of outside air from the economizer based on optimum energy savings for free cooling benefit where minimum ventilation rates result in an energy penalty. 5. The method according to claim 1 wherein the system also includes a return air sensor that senses return air conditions. 6. The method according to claim 5 wherein the return air sensor is used to establish a dead-band range. 7. The method according to claim 1 further comprising providing a space sensor that senses temperature of the building space. 8. The method according to claim 7 wherein the space sensor is used to establish a dead-band range. 9. 
The method according to claim 8 wherein the space sensor is used to determine the heating and cooling functions. 10. The method according to claim 9 where the dead-band range is between an occupied building space heating setpoint and an occupied building space cooling setpoint. 11. The method according to claim 5 further comprising a mixed air sensor. 12. The method according to claim 1 wherein the controller is capable of communicating the status of the economizer function to an operator. 13. The method according to claim 1 wherein the controller further comprises fault detection to determine proper operation of the economizer system. 14. The method according to claim 1 wherein the controller further comprises auto-configuration. 15. The method according to claim 1 wherein the dead-band range is calculated using a time reference. 16. The method according to claim 1 wherein the controller further comprises self-correction routines. 17. The method according to claim 2 wherein the HVAC system is equipped with variable fan speed control and wherein the controller is capable of controlling or sensing the fan speed. 18. A system utilizing advanced economizer strategies to reduce energy used by an indoor building space, the system comprising:
an HVAC assembly configured to provide heating, cooling, ventilation, and economization functions to an indoor building space; an economizer that provides the economization function, the economizer including at least one damper to outside air, the damper configured to operate between a closed and open position when the economizer receives a signal from a controller, wherein the open position is by percentage; the economizer being interconnected to the ventilation functions of the HVAC assembly; the controller being configured to receive sensor data, receive input data from the HVAC assembly that indicates an operating mode, and can send data to the HVAC assembly for the purposes of operating and monitoring status of the ventilation, heating, cooling function, or economization functions, wherein the controller is configured to predictively operate the economizer in a predictive cooling mode; and an outside air sensor being configured to sense supply air conditions; the outside air sensor being configured to sense an established dead-band range to provide preemptive cooling when the economizer damper is operated in the predictive cooling mode. 19. The system according to claim 18 wherein the controller is configured to communicate the status of the economizer function to an operator. 20. The system according to claim 18 wherein the controller further comprises fault detection configured to determine proper operation of the economizer. 21. The system according to claim 18 wherein the controller further comprises auto-configuration. 22. The system according to claim 18 wherein the dead-band range is calculated using a time reference. 23. The system according to claim 18 wherein the controller further comprises self-correction routines. 24. 
The system according to claim 18 wherein the controller is configured to modulate the ventilation function between a fully closed and a fully opened position to maintain a specific supply air setpoint wherein the HVAC system is equipped with variable fan speed control; and wherein the controller is configured to control the fan speed. | 2,100
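The dead-band economizer strategy in this record (claims 1, 6 and 10: between the occupied heating and cooling setpoints, open the outside-air damper for predictive free cooling whenever outside air is cooler than return air) can be sketched as a single controller decision. All numbers here are illustrative assumptions — the setpoints, the 0-100% damper scale, and the proportional opening rule are not prescribed by the claims.

```python
# Hypothetical sketch of the dead-band damper decision. Setpoints,
# minimum ventilation rate, and the proportional-opening rule are
# assumptions for illustration only.

def damper_command(space_temp, outside_temp, return_temp,
                   heat_setpoint=68.0, cool_setpoint=74.0,
                   min_ventilation=20.0):
    """Return a damper position in percent open."""
    if space_temp < heat_setpoint:
        return min_ventilation            # call for heating: minimum ventilation only
    if space_temp > cool_setpoint:
        # Call for cooling: use free cooling if outside air is cooler.
        return 100.0 if outside_temp < return_temp else min_ventilation
    # Dead-band state: preemptive free cooling before any call for cooling.
    if outside_temp < return_temp:
        # Open proportionally to how close the space is to the cooling setpoint.
        frac = (space_temp - heat_setpoint) / (cool_setpoint - heat_setpoint)
        return max(min_ventilation, 100.0 * frac)
    return min_ventilation
```

The "self-learning" aspect mentioned in the abstract — averaging recent outside/return-air readings to tune the dead-band range — would adjust `heat_setpoint` and `cool_setpoint` over time; that adaptation loop is omitted here.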
5,948 | 5,948 | 14,012,961 | 2,164 | Methods and systems for recommending online media content to other users includes receiving a selection of media content rendered on content page. The media content is identified by a user for sharing. A list of topics associated with the user is generated for presenting on a user interface. The topics are descriptive of the media content selected for sharing. Selection of one or more topics for the selected media content is received from the user. The received selections define the user's relevancy perspective for the selected media content. A recommendation for the selected media content is provided in content streams of users that follow the selected topics for users interactions with the selected media content. | 1. A method, comprising:
receiving a recommendation signal from a media page, the recommendation signal being associated with a user viewing the media page; generating a list of topics that are related to content of the media page and a history of topics followed by the user; detecting selection of one or more topics from the list of topics; and generating recommendation data for the media page, the recommendation data being provided to users that are following the selected one or more topics. 2. The method of claim 1, wherein generating further includes,
providing an option for the user to define and associate a new topic for the content of the media page; and providing an option for the user to search for a topic that is different than the topics identified in the list for associating with the content of the media page. 3. The method of claim 1, further includes generating a tag for the selected topic and associating the tag to the content of the media page, the tag used during distribution of the content of the media file to users. 4. The method of claim 1, wherein the recommendation data provided to users includes the content of the media page along with content of other media pages for the selected topic, the content of the media pages for the topic organized in a ranking order defined by popularity scores associated with the media pages. 5. The method of claim 4, wherein the ranking order is defined by relative ranking of users sharing the content of the media pages for the selected topic. 6. The method of claim 1, wherein the recommendation data provided to the user initiating the recommendation signal includes user metrics of the user relating to the content of the media page. 7. A method for recommending online media content, comprising:
receiving a selection of media content identified for sharing from a content page, the media content selection associated with a user viewing the content page; generating a list of topics for the selected media content for presenting on a user interface, the list of topics associated with the media content and the user, the topics being descriptive of the media content selected for sharing; receiving selection of one or more topics for the selected media content from the user, the received selection defining a relevancy perspective of the user for the selected media content; and providing a recommendation for the selected media content in content streams of users that follow the selected topics to interact with the selected media content. 8. The method of claim 7, wherein the selection is done using a recommendation tool provided alongside the media content on the content page, the recommendation tool generating a recommendation signal. 9. The method of claim 7, wherein the topics presented in the user interface are identified based on history of topics followed by the user. 10. The method of claim 9, wherein the history of topics followed by the user identified based on explicit or implicit interactions at different media content rendered on the content page over time. 11. The method of claim 7, wherein generating further includes,
providing an option in the user interface for the user to define and associate a new topic for the selected media content; and providing an option in the user interface for the user to search for a topic that is different than the topics identified in the list for associating with the selected media content. 12. The method of claim 7, further includes generating a tag for the selected topic, the generated tag associated with the selected media content and used during distribution of the selected media content in content streams to users. 13. The method of claim 7, wherein distributing the selected media content includes posting the selected media content to a content stream of the users that follow the topic or providing a link to the selected media content in an account of the users. 14. The method of claim 7, wherein the media content is one or more of article, quote, comment, picture, image, or any digital asset that can be posted on a website. 15. The method of claim 7, further includes,
monitoring users interactions for the selected media content recommended for sharing; computing user metrics of the user recommending the selected media content based on the users interactions, the user metrics defining reputation and popularity of the user recommending the selected media content; comparing the user metrics against a pre-defined threshold value; updating an expertise level of the user recommending the media content based on the comparison, wherein the expertise level is used to obtain additional tools related to recommending content to the other users. 16. The method of claim 15, further includes providing monetary or non-monetary awards to the user based on the expertise level of the user. 17. The method of claim 7, further includes providing monetary or non-monetary awards to the user recommending the selected media content. 18. A non-transitory computer readable medium having program instructions for recommending an online media content, comprising:
program instructions for receiving a selection of media content identified for sharing from a content page, the media content selection associated with a user viewing the content page; program instructions for generating a list of topics for the selected media content for presenting on a user interface, the list of topics associated with the media content and the user, the topics being descriptive of the media content selected for sharing; program instructions for receiving selection of one or more topics for the selected media content from the user, the received selection defining a relevancy perspective of the user for the selected media content; and program instructions for providing a recommendation for the selected media content in content streams of users that follow the selected topics to interact with the selected media content. 19. The non-transitory computer readable medium of claim 18, wherein program instructions for generating further includes,
program instructions for providing an option in the user interface for the user to define and associate a new topic for the selected media content; and program instructions for providing an option in the user interface for the user to search for a topic that is different than the topics identified in the list for associating with the selected media content. 20. The non-transitory computer readable medium of claim 18, wherein program instructions for presenting the list of topics further includes program instructions to identify topics based on history of topics followed by the user, wherein the history of topics identified by monitoring explicit or implicit interactions of the user at different media content rendered over time. | 2,100
5,949 | 5,949 | 14,667,320 | 2,152 | A method and system are provided. The method includes identifying a set of applications compatible with a set of data. The applications and the data are untagged by corresponding metadata. The identifying step includes executing, by an execution platform, at least some of the applications in the set against at least some of the data in the set. The identifying step further includes analyzing, by a log analyzer, execution logs for executions of the at least some of the applications against the at least some of the data. The identifying step also includes indicating, by the log analyzer, a compatibility of the at least some of the applications to the at least some of the data by detecting compatibility relevant errors using the execution logs. | 1-14. (canceled) 15. A computer program product for identifying application and data compatibility, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to perform a method comprising:
identifying a set of applications compatible with a set of data, wherein the applications and the data are untagged by corresponding metadata, wherein said identifying step comprises: executing, by an execution platform, at least some of the applications in the set against at least some of the data in the set; analyzing, by a log analyzer, execution logs for executions of the at least some of the applications against the at least some of the data; and indicating, by the log analyzer, a compatibility of the at least some of the applications to the at least some of the data by detecting compatibility relevant errors using the execution logs. 16. A system, comprising:
an execution platform for executing at least some applications from a set of applications against at least some data from a set of data, the applications and the data being untagged by corresponding metadata; and a log analyzer for analyzing execution logs for executions of the at least some applications against the at least some data, and indicating a compatibility of the at least some applications to the at least some data by detecting compatibility relevant errors using the execution logs. 17. The system of claim 16, wherein the executions of the at least some of the applications against the at least some of the data are performed sequentially, wherein a subset of the applications in the set are linked with respect to compatibility, and wherein an indication of incompatible status for a given one of the applications determined from a respective one of the executions is also applied to other ones of the applications linked in the subset without execution of the other ones of the applications. 18. The system of claim 17, wherein the applications in the subset are linked based on expected compatibility. 19. The system of claim 16, wherein said log analyzer indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting a number of compatibility errors there between above a threshold using a respective one of the execution logs. 20. The system of claim 16, wherein said log analyzer indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting at least one compatibility error there between having a severity above a threshold using a respective one of the execution logs, wherein error severity is profiled a priori. | A method and system are provided. The method includes identifying a set of applications compatible with a set of data. The applications and the data are untagged by corresponding metadata. 
The identifying step includes executing, by an execution platform, at least some of the applications in the set against at least some of the data in the set. The identifying step further includes analyzing, by a log analyzer, execution logs for executions of the at least some of the applications against the at least some of the data. The identifying step also includes indicating, by the log analyzer, a compatibility of the at least some of the applications to the at least some of the data by detecting compatibility relevant errors using the execution logs. 1-14. (canceled) 15. A computer program product for identifying application and data compatibility, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to perform a method comprising:
identifying a set of applications compatible with a set of data, wherein the applications and the data are untagged by corresponding metadata, wherein said identifying step comprises: executing, by an execution platform, at least some of the applications in the set against at least some of the data in the set; analyzing, by a log analyzer, execution logs for executions of the at least some of the applications against the at least some of the data; and indicating, by the log analyzer, a compatibility of the at least some of the applications to the at least some of the data by detecting compatibility relevant errors using the execution logs. 16. A system, comprising:
an execution platform for executing at least some applications from a set of applications against at least some data from a set of data, the applications and the data being untagged by corresponding metadata; and a log analyzer for analyzing execution logs for executions of the at least some applications against the at least some data, and indicating a compatibility of the at least some applications to the at least some data by detecting compatibility relevant errors using the execution logs. 17. The system of claim 16, wherein the executions of the at least some of the applications against the at least some of the data are performed sequentially, wherein a subset of the applications in the set are linked with respect to compatibility, and wherein an indication of incompatible status for a given one of the applications determined from a respective one of the executions is also applied to other ones of the applications linked in the subset without execution of the other ones of the applications. 18. The system of claim 17, wherein the applications in the subset are linked based on expected compatibility. 19. The system of claim 16, wherein said log analyzer indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting a number of compatibility errors there between above a threshold using a respective one of the execution logs. 20. The system of claim 16, wherein said log analyzer indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting at least one compatibility error there between having a severity above a threshold using a respective one of the execution logs, wherein error severity is profiled a priori. | 2,100 |
5,950 | 5,950 | 14,743,130 | 2,152 | A method and system are provided. The method includes identifying a set of applications compatible with a set of data. The applications and the data are untagged by corresponding metadata. The identifying step includes executing, by an execution platform, at least some of the applications in the set against at least some of the data in the set. The identifying step further includes analyzing, by a log analyzer, execution logs for executions of the at least some of the applications against the at least some of the data. The identifying step also includes indicating, by the log analyzer, a compatibility of the at least some of the applications to the at least some of the data by detecting compatibility relevant errors using the execution logs. | 1. A method, comprising:
identifying a set of applications compatible with a set of data, wherein the applications and the data are untagged by corresponding metadata, wherein said identifying step comprises:
executing, by an execution platform, at least some of the applications in the set against at least some of the data in the set;
analyzing, by a log analyzer, execution logs for executions of the at least some of the applications against the at least some of the data; and
indicating, by the log analyzer, a compatibility of the at least some of the applications to the at least some of the data by detecting compatibility relevant errors using the execution logs. 2. The method of claim 1, wherein the executions of the at least some of the applications against the at least some of the data are performed sequentially. 3. The method of claim 2, wherein the method is terminated upon an indication of a compatibility status of a predetermined number of the at least some applications against corresponding portions of the at least some of the data. 4. The method of claim 3, wherein the predetermined number is an integer greater than one. 5. The method of claim 2, wherein a subset of the applications in the set are linked with respect to compatibility, and wherein an indication of incompatible status for a given one of the applications determined from a respective one of the executions is also applied to other ones of the applications linked in the subset without execution of the other ones of the applications. 6. The method of claim 5, wherein the applications in the subset are linked based on expected compatibility. 7. The method of claim 1, wherein the executions of the at least some of the applications against the at least some of the data are performed in parallel. 8. The method of claim 1, wherein said indicating step indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting at least one compatibility error there between using a respective one of the execution logs. 9. The method of claim 1, wherein said indicating step indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting a number of compatibility errors there between above a threshold using a respective one of the execution logs. 10. 
The method of claim 1, wherein said indicating step indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting at least one compatibility error there between having a severity above a threshold using a respective one of the execution logs, wherein error severity is profiled a priori. 11. The method of claim 1, further comprising preventing any of the at least some of the applications that are indicated to have an incompatible status from being executed against a respective portion of the at least some data for which the indication is provided without modification intended to overcome the incompatible status. 12. The method of claim 1, further comprising selecting the at least some data from the set, against which the at least some applications are executed, using a sampling technique. 13. The method of claim 1, wherein the sampling technique involves obtaining random data samples. 14. The method of claim 1, further comprising selecting the at least some data from the set, against which the at least some applications are executed, using an execution time period limiting approach that inherently selects the at least some data by limiting an amount of overall execution time for the data in the set. | A method and system are provided. The method includes identifying a set of applications compatible with a set of data. The applications and the data are untagged by corresponding metadata. The identifying step includes executing, by an execution platform, at least some of the applications in the set against at least some of the data in the set. The identifying step further includes analyzing, by a log analyzer, execution logs for executions of the at least some of the applications against the at least some of the data. 
The identifying step also includes indicating, by the log analyzer, a compatibility of the at least some of the applications to the at least some of the data by detecting compatibility relevant errors using the execution logs. 1. A method, comprising:
identifying a set of applications compatible with a set of data, wherein the applications and the data are untagged by corresponding metadata, wherein said identifying step comprises:
executing, by an execution platform, at least some of the applications in the set against at least some of the data in the set;
analyzing, by a log analyzer, execution logs for executions of the at least some of the applications against the at least some of the data; and
indicating, by the log analyzer, a compatibility of the at least some of the applications to the at least some of the data by detecting compatibility relevant errors using the execution logs. 2. The method of claim 1, wherein the executions of the at least some of the applications against the at least some of the data are performed sequentially. 3. The method of claim 2, wherein the method is terminated upon an indication of a compatibility status of a predetermined number of the at least some applications against corresponding portions of the at least some of the data. 4. The method of claim 3, wherein the predetermined number is an integer greater than one. 5. The method of claim 2, wherein a subset of the applications in the set are linked with respect to compatibility, and wherein an indication of incompatible status for a given one of the applications determined from a respective one of the executions is also applied to other ones of the applications linked in the subset without execution of the other ones of the applications. 6. The method of claim 5, wherein the applications in the subset are linked based on expected compatibility. 7. The method of claim 1, wherein the executions of the at least some of the applications against the at least some of the data are performed in parallel. 8. The method of claim 1, wherein said indicating step indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting at least one compatibility error there between using a respective one of the execution logs. 9. The method of claim 1, wherein said indicating step indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting a number of compatibility errors there between above a threshold using a respective one of the execution logs. 10. 
The method of claim 1, wherein said indicating step indicates an incompatible status for a given one of the applications with respect to a respective data portion responsive to detecting at least one compatibility error there between having a severity above a threshold using a respective one of the execution logs, wherein error severity is profiled a priori. 11. The method of claim 1, further comprising preventing any of the at least some of the applications that are indicated to have an incompatible status from being executed against a respective portion of the at least some data for which the indication is provided without modification intended to overcome the incompatible status. 12. The method of claim 1, further comprising selecting the at least some data from the set, against which the at least some applications are executed, using a sampling technique. 13. The method of claim 1, wherein the sampling technique involves obtaining random data samples. 14. The method of claim 1, further comprising selecting the at least some data from the set, against which the at least some applications are executed, using an execution time period limiting approach that inherently selects the at least some data by limiting an amount of overall execution time for the data in the set. | 2,100 |
5,951 | 5,951 | 14,980,525 | 2,154 | A non-transitory computer readable storage medium has instructions executed by a processor to assign virtual identifiers to blocks of a file that contain identical information in different data sources. A distributed storage and distributed processing query statement is received. Real name attributes of the query statement are equated with selected virtual identifiers. Access control policies are applied to the selected virtual identifiers to obtain policy results. The policy results are applied to the real name attributes of the query statement to obtain query results. | 1. A non-transitory computer readable storage medium with instructions executed by a processor to:
assign virtual identifiers to blocks of a file that contain identical information in different data sources; receive a query statement, wherein the query statement is a distributed storage and distributed processing query statement; equate real name attributes of the query statement with selected virtual identifiers; apply access control policies to the selected virtual identifiers to obtain policy results; and apply the policy results to the real name attributes of the query statement to obtain query results. 2. The non-transitory computer readable storage medium of claim 1 wherein the access control policies specify access control at a user level. 3. The non-transitory computer readable storage medium of claim 1 wherein the access control policies specify access control at a user group level. 4. The non-transitory computer readable storage medium of claim 1 wherein the virtual identifiers have associated table mappings. 5. The non-transitory computer readable storage medium of claim 1 wherein the virtual identifiers have associated column mappings. 6. The non-transitory computer readable storage medium of claim 1 further comprising instructions executed by the processor to enter audit entries in a policy enforcement log for each policy enforcement action. 7. A non-transitory computer readable storage medium with instructions executed by a processor to:
assign virtual identifiers to columns of a table that contain identical information in different databases; receive a query statement, wherein the query statement is a distributed storage and distributed processing query statement; equate real name attributes of the query statement with selected virtual identifiers; apply access control policies to the selected virtual identifiers to obtain policy results; and apply the policy results to the real name attributes of the query statement to obtain query results. 8. The non-transitory computer readable storage medium of claim 7 wherein the access control policies specify access control at a user level. 9. The non-transitory computer readable storage medium of claim 7 wherein the access control policies specify access control at a user group level. 10. The non-transitory computer readable storage medium of claim 7 wherein the virtual identifiers have associated table mappings. 11. The non-transitory computer readable storage medium of claim 7 wherein the virtual identifiers have associated column mappings. 12. The non-transitory computer readable storage medium of claim 7 further comprising instructions executed by the processor to enter audit entries in a policy enforcement log for each policy enforcement action. | A non-transitory computer readable storage medium has instructions executed by a processor to assign virtual identifiers to blocks of a file that contain identical information in different data sources. A distributed storage and distributed processing query statement is received. Real name attributes of the query statement are equated with selected virtual identifiers. Access control policies are applied to the selected virtual identifiers to obtain policy results. The policy results are applied to the real name attributes of the query statement to obtain query results.1. A non-transitory computer readable storage medium with instructions executed by a processor to:
assign virtual identifiers to blocks of a file that contain identical information in different data sources; receive a query statement, wherein the query statement is a distributed storage and distributed processing query statement; equate real name attributes of the query statement with selected virtual identifiers; apply access control policies to the selected virtual identifiers to obtain policy results; and apply the policy results to the real name attributes of the query statement to obtain query results. 2. The non-transitory computer readable storage medium of claim 1 wherein the access control policies specify access control at a user level. 3. The non-transitory computer readable storage medium of claim 1 wherein the access control policies specify access control at a user group level. 4. The non-transitory computer readable storage medium of claim 1 wherein the virtual identifiers have associated table mappings. 5. The non-transitory computer readable storage medium of claim 1 wherein the virtual identifiers have associated column mappings. 6. The non-transitory computer readable storage medium of claim 1 further comprising instructions executed by the processor to enter audit entries in a policy enforcement log for each policy enforcement action. 7. A non-transitory computer readable storage medium with instructions executed by a processor to:
assign virtual identifiers to columns of a table that contain identical information in different databases; receive a query statement, wherein the query statement is a distributed storage and distributed processing query statement; equate real name attributes of the query statement with selected virtual identifiers; apply access control policies to the selected virtual identifiers to obtain policy results; and apply the policy results to the real name attributes of the query statement to obtain query results. 8. The non-transitory computer readable storage medium of claim 7 wherein the access control policies specify access control at a user level. 9. The non-transitory computer readable storage medium of claim 7 wherein the access control policies specify access control at a user group level. 10. The non-transitory computer readable storage medium of claim 7 wherein the virtual identifiers have associated table mappings. 11. The non-transitory computer readable storage medium of claim 7 wherein the virtual identifiers have associated column mappings. 12. The non-transitory computer readable storage medium of claim 7 further comprising instructions executed by the processor to enter audit entries in a policy enforcement log for each policy enforcement action. | 2,100 |
5,952 | 5,952 | 14,987,940 | 2,142 | A computer implemented method and device are provided. The method and device prepare a communications event (CE) through a user interface of a user-related device, access proximity information related to contacts on a contact list and present, on the user interface of the user-related device, a candidate contact to utilize with the communications event based on the proximity information. | 1. A computer implemented method, comprising:
under control of one or more processors configured with specific executable program instructions, preparing a communications event (CE) through a user interface of a user-related device; accessing proximity information related to contacts on a contact list; and presenting, on the user interface of the user-related device, a candidate contact to utilize with the communications event based on the proximity information. 2. The method of claim 1, further comprising ordering a list of candidate contacts based on the proximity information, wherein the presenting includes presenting the list of candidate contacts in an ordered priority. 3. The method of claim 1, wherein the preparing includes opening a CE message having a destination field and the presenting includes displaying a list of device destination addresses associated with candidate contacts. 4. The method of claim 1, further comprising determining the proximity information based on a user-related location and a contact-related location. 5. The method of claim 4, wherein the user-related location is associated with a physical location of the user-related device and the contact-related location is associated with a physical location of a contact-related device. 6. The method of claim 4, wherein the determining includes determining a distance between the user-related location and the contact-related location. 7. The method of claim 1, wherein the communications event is associated with a user account, the contact list being associated with the user account. 8. The method of claim 7, further comprising determining the proximity information based on a user-related calendar event associated with the user account and a contact-related calendar event associated with the candidate contact. 9. The method of claim 1, further comprising tracking a contact-related location of the candidate contact and updating the proximity information based on the contact-related location. 10. A device, comprising:
a processor; a user interface generated via the processor; a memory storing program instructions accessible by the processor; wherein the program instructions are executable by the processor to:
prepare a communications event (CE) through the user interface;
access proximity information related to contacts on a contact list; and
present, on the user interface, a candidate contact to utilize with the communications event based on the proximity information. 11. The device of claim 10, wherein the program instructions are executable by the processor to implement a proximity identifier module and a priority manager module, the proximity identifier module configured to determine when a contact-related location associated with one or more contacts on the contact list changes, the priority manager module configured to reduce a priority of the one or more candidate contacts based on the change. 12. The device of claim 10, wherein the program instructions are executable by the processor to implement a proximity identifier module, the proximity identifier module configured to determine the proximity information for candidate contacts that have associated contact-related calendar events that correspond to a user-related calendar event. 13. The device of claim 10, wherein the program instructions are executable by the processor to implement a priority manager module, the priority manager module configured to prioritize a list of candidate contacts by increasing a priority of one or more candidate contacts. 14. The device of claim 10, wherein the program instructions are executable by the processor to implement a CE manager module that is configured to initiate the communications event by opening a text or email application and to present a candidate phone or email address for the candidate contact. 15. The device of claim 10, wherein the program instructions are executable by the processor to implement a priority manager module configured to monitor past interaction with the candidate contact, and change a rank of the candidate contact based on the past interaction. 16. The device of claim 10, wherein the memory further stores a list of contacts, the contacts including corresponding proximity information and one or more candidate device destination addresses. 17. 
The device of claim 10, wherein the user interface is configured to display a CE message having a destination field and to display a list of device addresses associated with a list of candidate contacts, the user interface configured to receive a selection from the list of candidate contacts, the user interface populating the destination field with one or more device destination addresses based on the selection from the list of candidate contacts. 18. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to perform:
prepare a communications event (CE) through a user interface of a user-related device; access proximity information related to contacts on a contact list; and present, on the user interface of the user-related device, a candidate contact to utilize with the communications event based on the proximity information. 19. The computer program product of claim 18, further comprising determining the proximity information based on a user-related location and a contact-related location. 20. The computer program product of claim 18, further comprising determining the proximity information based on a user-related calendar event associated with the user account and a contact-related calendar event associated with the candidate contact. | A computer implemented method and device are provided. The method and device prepare a communications event (CE) through a user interface of a user-related device, access proximity information related to contacts on a contact list and present, on the user interface of the user-related device, a candidate contact to utilize with the communications event based on the proximity information.1. A computer implemented method, comprising:
under control of one or more processors configured with specific executable program instructions, preparing a communications event (CE) through a user interface of a user-related device; accessing proximity information related to contacts on a contact list; and presenting, on the user interface of the user-related device, a candidate contact to utilize with the communications event based on the proximity information. 2. The method of claim 1, further comprising ordering a list of candidate contacts based on the proximity information, wherein the presenting includes presenting the list of candidate contacts in an ordered priority. 3. The method of claim 1, wherein the preparing includes opening a CE message having a destination field and the presenting includes displaying a list of device destination addresses associated with candidate contacts. 4. The method of claim 1, further comprising determining the proximity information based on a user-related location and a contact-related location. 5. The method of claim 4, wherein the user-related location is associated with a physical location of the user-related device and the contact-related location is associated with a physical location of a contact-related device. 6. The method of claim 4, wherein the determining includes determining a distance between the user-related location and the contact-related location. 7. The method of claim 1, wherein the communications event is associated with a user account, the contact list being associated with the user account. 8. The method of claim 7, further comprising determining the proximity information based on a user-related calendar event associated with the user account and a contact-related calendar event associated with the candidate contact. 9. The method of claim 1, further comprising tracking a contact-related location of the candidate contact and updating the proximity information based on the contact-related location. 10. A device, comprising:
a processor; a user interface generated via the processor; a memory storing program instructions accessible by the processor; wherein the program instructions are executable by the processor to:
prepare a communications event (CE) through the user interface;
access proximity information related to contacts on a contact list; and
present, on the user interface, a candidate contact to utilize with the communications event based on the proximity information. 11. The device of claim 10, wherein the program instructions are executable by the processor to implement a proximity identifier module and a priority manager module, the proximity identifier module configured to determine when a contact-related location associated with one or more contacts on the contact list changes, the priority manager module configured to reduce a priority of the one or more candidate contacts based on the change. 12. The device of claim 10, wherein the program instructions are executable by the processor to implement a proximity identifier module, the proximity identifier module configured to determine the proximity information for candidate contacts that have associated contact-related calendar events that correspond to a user-related calendar event. 13. The device of claim 10, wherein the program instructions are executable by the processor to implement a priority manager module, the priority manager module configured to prioritize a list of candidate contacts by increasing a priority of one or more candidate contacts. 14. The device of claim 10, wherein the program instructions are executable by the processor to implement a CE manager module that is configured to initiate the communications event by opening a text or email application and to present a candidate phone or email address for the candidate contact. 15. The device of claim 10, wherein the program instructions are executable by the processor to implement a priority manager module configured to monitor past interaction with the candidate contact, and change a rank of the candidate contact based on the past interaction. 16. The device of claim 10, wherein the memory further stores a list of contacts, the contacts including corresponding proximity information and one or more candidate device destination addresses. 17. The device of claim 10, wherein the user interface is configured to display a CE message having a destination field and to display a list of device addresses associated with a list of candidate contacts, the user interface configured to receive a selection from the list of candidate contacts, the user interface populating the destination field with one or more device destination addresses based on the selection from the list of candidate contacts. 18. A computer program product comprising a non-signal computer readable storage medium comprising computer executable code to perform:
prepare a communications event (CE) through a user interface of a user-related device; access proximity information related to contacts on a contact list; and present, on the user interface of the user-related device, a candidate contact to utilize with the communications event based on the proximity information. 19. The computer program product of claim 18, further comprising determining the proximity information based on a user-related location and a contact-related location. 20. The computer program product of claim 18, further comprising determining the proximity information based on a user-related calendar event associated with the user account and a contact-related calendar event associated with the candidate contact. | 2,100 |
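The proximity-based candidate ordering recited in the row above (claims 2 and 4-6: determine a distance between a user-related location and each contact-related location, then present candidate contacts in an ordered priority) can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the haversine distance choice, and the coordinate data are all hypothetical.

```python
import math

def distance_km(loc_a, loc_b):
    """Great-circle (haversine) distance between two (lat, lon) pairs, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*loc_a, *loc_b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def rank_candidates(user_location, contacts):
    """Order contact-list entries by proximity to the user-related location.

    `contacts` is a list of (name, (lat, lon)) pairs; the nearest contacts
    come first, giving the ordered priority of claim 2.
    """
    return sorted(contacts, key=lambda c: distance_km(user_location, c[1]))

# Hypothetical contact list for a user near Times Square, New York.
contacts = [
    ("alice", (40.7128, -74.0060)),   # lower Manhattan, a few km away
    ("bob",   (34.0522, -118.2437)),  # Los Angeles, thousands of km away
    ("carol", (40.6782, -73.9442)),   # Brooklyn, roughly 10 km away
]
ordered = rank_candidates((40.7580, -73.9855), contacts)
# nearest first: alice, carol, bob
```

Claim 8's calendar-based variant would simply swap the distance key for a score derived from overlapping user-related and contact-related calendar events.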
5,953 | 5,953 | 15,494,510 | 2,183 | A method for dynamically allocating copy agents to a background copy process is disclosed. In one embodiment, such a method includes monitoring current host throughput to a source array. The method initiates a background copy process to copy data from the source array to a target array. This includes allocating agents to copy data from the source array to the target array. While the background copy process is executing, the method monitors background copy throughput to the source array and dynamically adjusts the number of agents allocated to the background copy process in accordance with changes to the host throughput. A corresponding system and computer program product are also disclosed. | 1. A method for dynamically allocating agents to a background copy process, the method comprising:
monitoring current host throughput to a source array; initiating a background copy process to copy data from the source array to a target array, wherein initiating the background copy process comprises allocating agents to copy data from the source array to the target array; monitoring current background copy throughput to the source array as a result of the background copy process; determining a maximum throughput of the source array to maintain normal host I/O response times; and repeatedly performing the following:
determine a current throughput to the source array by summing the host throughput and the current background copy throughput;
allocate an additional agent to the background copy process if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput; and
remove an agent from the background copy process if the current throughput exceeds the maximum throughput. 2. The method of claim 1, further comprising, prior to allocating the additional agent to the background copy process, determining if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput. 3. The method of claim 2, wherein determining if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput comprises determining how much additional throughput the additional agent will impose on the source array. 4. The method of claim 3, where determining how much additional throughput the additional agent will impose on the source array comprises recording an amount of time required for the additional agent to transfer a block of data from the source array to the target array and extrapolating the additional throughput from this amount of time. 5. The method of claim 3, wherein allocating the additional agent comprises allocating the additional agent if the sum of the current throughput and the additional throughput will not exceed the maximum throughput. 6. The method of claim 1, wherein the current host throughput fluctuates over time. 7. The method of claim 1, wherein monitoring the current host throughput to the source array comprises continuously monitoring the current host throughput to the source array. 8. A computer program product for dynamically allocating agents to a background copy process, the computer program product comprising a computer-readable storage medium having computer-usable program code embodied therein, the computer-usable program code configured to perform the following when executed by at least one processor:
monitor current host throughput to a source array; initiate a background copy process to copy data from the source array to a target array, wherein initiating the background copy process comprises allocating agents to copy data from the source array to the target array; monitor current background copy throughput to the source array as a result of the background copy process; determine a maximum throughput of the source array to maintain normal host I/O response times; and repeatedly perform the following:
determine a current throughput to the source array by summing the host throughput and the current background copy throughput;
allocate an additional agent to the background copy process if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput; and
remove an agent from the background copy process if the current throughput exceeds the maximum throughput. 9. The computer program product of claim 8, wherein the computer-usable program code is further configured to, prior to allocating the additional agent to the background copy process, determine if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput. 10. The computer program product of claim 9, wherein determining if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput comprises determining how much additional throughput the additional agent will impose on the source array. 11. The computer program product of claim 10, where determining how much additional throughput the additional agent will impose on the source array comprises recording an amount of time required for the additional agent to transfer a block of data from the source array to the target array and extrapolating the additional throughput from this amount of time. 12. The computer program product of claim 10, wherein allocating the additional agent comprises allocating the additional agent if the sum of the current throughput and the additional throughput will not exceed the maximum throughput. 13. The computer program product of claim 8, wherein the current host throughput fluctuates over time. 14. The computer program product of claim 8, wherein monitoring the current host throughput to the source array comprises continuously monitoring the current host throughput to the source array. 15. A system for dynamically allocating agents to a background copy process, the system comprising:
at least one processor; at least one memory device operably coupled to the at least one processor and storing instructions for execution on the at least one processor, the instructions causing the at least one processor to:
monitor current host throughput to a source array;
initiate a background copy process to copy data from the source array to a target array, wherein initiating the background copy process comprises allocating agents to copy data from the source array to the target array;
monitor current background copy throughput to the source array as a result of the background copy process;
determine a maximum throughput of the source array to maintain normal host I/O response times; and
repeatedly perform the following:
determine a current throughput to the source array by summing the host throughput and the current background copy throughput;
allocate an additional agent to the background copy process if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput; and
remove an agent from the background copy process if the current throughput exceeds the maximum throughput. 16. The system of claim 15, wherein the instructions further cause the at least one processor to, prior to allocating the additional agent to the background copy process, determine if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput. 17. The system of claim 16, wherein determining if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput comprises determining how much additional throughput the additional agent will impose on the source array. 18. The system of claim 17, where determining how much additional throughput the additional agent will impose on the source array comprises recording an amount of time required for the additional agent to transfer a block of data from the source array to the target array and extrapolating the additional throughput from this amount of time. 19. The system of claim 17, wherein allocating the additional agent comprises allocating the additional agent if the sum of the current throughput and the additional throughput will not exceed the maximum throughput. 20. The system of claim 15, wherein monitoring the current host throughput to the source array comprises continuously monitoring the current host throughput to the source array. | A method for dynamically allocating copy agents to a background copy process is disclosed. In one embodiment, such a method includes monitoring current host throughput to a source array. The method initiates a background copy process to copy data from the source array to a target array. This includes allocating agents to copy data from the source array to the target array. 
While the background copy process is executing, the method monitors background copy throughput to the source array and dynamically adjusts the number of agents allocated to the background copy process in accordance with changes to the host throughput. A corresponding system and computer program product are also disclosed. 1. A method for dynamically allocating agents to a background copy process, the method comprising:
monitoring current host throughput to a source array; initiating a background copy process to copy data from the source array to a target array, wherein initiating the background copy process comprises allocating agents to copy data from the source array to the target array; monitoring current background copy throughput to the source array as a result of the background copy process; determining a maximum throughput of the source array to maintain normal host I/O response times; and repeatedly performing the following:
determine a current throughput to the source array by summing the host throughput and the current background copy throughput;
allocate an additional agent to the background copy process if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput; and
remove an agent from the background copy process if the current throughput exceeds the maximum throughput. 2. The method of claim 1, further comprising, prior to allocating the additional agent to the background copy process, determining if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput. 3. The method of claim 2, wherein determining if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput comprises determining how much additional throughput the additional agent will impose on the source array. 4. The method of claim 3, where determining how much additional throughput the additional agent will impose on the source array comprises recording an amount of time required for the additional agent to transfer a block of data from the source array to the target array and extrapolating the additional throughput from this amount of time. 5. The method of claim 3, wherein allocating the additional agent comprises allocating the additional agent if the sum of the current throughput and the additional throughput will not exceed the maximum throughput. 6. The method of claim 1, wherein the current host throughput fluctuates over time. 7. The method of claim 1, wherein monitoring the current host throughput to the source array comprises continuously monitoring the current host throughput to the source array. 8. A computer program product for dynamically allocating agents to a background copy process, the computer program product comprising a computer-readable storage medium having computer-usable program code embodied therein, the computer-usable program code configured to perform the following when executed by at least one processor:
monitor current host throughput to a source array; initiate a background copy process to copy data from the source array to a target array, wherein initiating the background copy process comprises allocating agents to copy data from the source array to the target array; monitor current background copy throughput to the source array as a result of the background copy process; determine a maximum throughput of the source array to maintain normal host I/O response times; and repeatedly perform the following:
determine a current throughput to the source array by summing the host throughput and the current background copy throughput;
allocate an additional agent to the background copy process if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput; and
remove an agent from the background copy process if the current throughput exceeds the maximum throughput. 9. The computer program product of claim 8, wherein the computer-usable program code is further configured to, prior to allocating the additional agent to the background copy process, determine if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput. 10. The computer program product of claim 9, wherein determining if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput comprises determining how much additional throughput the additional agent will impose on the source array. 11. The computer program product of claim 10, where determining how much additional throughput the additional agent will impose on the source array comprises recording an amount of time required for the additional agent to transfer a block of data from the source array to the target array and extrapolating the additional throughput from this amount of time. 12. The computer program product of claim 10, wherein allocating the additional agent comprises allocating the additional agent if the sum of the current throughput and the additional throughput will not exceed the maximum throughput. 13. The computer program product of claim 8, wherein the current host throughput fluctuates over time. 14. The computer program product of claim 8, wherein monitoring the current host throughput to the source array comprises continuously monitoring the current host throughput to the source array. 15. A system for dynamically allocating agents to a background copy process, the system comprising:
at least one processor; at least one memory device operably coupled to the at least one processor and storing instructions for execution on the at least one processor, the instructions causing the at least one processor to:
monitor current host throughput to a source array;
initiate a background copy process to copy data from the source array to a target array, wherein initiating the background copy process comprises allocating agents to copy data from the source array to the target array;
monitor current background copy throughput to the source array as a result of the background copy process;
determine a maximum throughput of the source array to maintain normal host I/O response times; and
repeatedly perform the following:
determine a current throughput to the source array by summing the host throughput and the current background copy throughput;
allocate an additional agent to the background copy process if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput; and
remove an agent from the background copy process if the current throughput exceeds the maximum throughput. 16. The system of claim 15, wherein the instructions further cause the at least one processor to, prior to allocating the additional agent to the background copy process, determine if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput. 17. The system of claim 16, wherein determining if the additional agent can be allocated without causing the current throughput to exceed the maximum throughput comprises determining how much additional throughput the additional agent will impose on the source array. 18. The system of claim 17, where determining how much additional throughput the additional agent will impose on the source array comprises recording an amount of time required for the additional agent to transfer a block of data from the source array to the target array and extrapolating the additional throughput from this amount of time. 19. The system of claim 17, wherein allocating the additional agent comprises allocating the additional agent if the sum of the current throughput and the additional throughput will not exceed the maximum throughput. 20. The system of claim 15, wherein monitoring the current host throughput to the source array comprises continuously monitoring the current host throughput to the source array. | 2,100 |
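The allocation loop recited in claim 1 of the row above (sum current host and background-copy throughput; add an agent only if its extra load still fits under the source array's maximum; remove one if the maximum is exceeded) can be sketched as a single loop iteration. The units and the fixed per-agent estimate are hypothetical; claim 4 instead derives the per-agent figure by timing a block transfer and extrapolating.

```python
def adjust_agents(num_agents, host_tp, background_tp, per_agent_tp, max_tp):
    """One pass of the 'repeatedly performing' loop in claim 1.

    All throughputs use the same (hypothetical) units, e.g. MB/s. `max_tp`
    is the maximum source-array throughput that still maintains normal host
    I/O response times.
    """
    current_tp = host_tp + background_tp          # claim 1: sum of both loads
    if current_tp > max_tp and num_agents > 0:
        return num_agents - 1                     # shed load: host I/O is suffering
    if current_tp + per_agent_tp <= max_tp:
        return num_agents + 1                     # headroom for one more copy agent
    return num_agents                             # steady state

# Host traffic is light, so the copy process claims the spare bandwidth...
assert adjust_agents(4, host_tp=200, background_tp=80, per_agent_tp=20, max_tp=320) == 5
# ...and gives an agent back when host traffic spikes past the array's limit.
assert adjust_agents(4, host_tp=300, background_tp=80, per_agent_tp=20, max_tp=320) == 3
```

Because host throughput fluctuates (claim 6), calling this check repeatedly lets the copy process expand into idle bandwidth and contract as host I/O returns.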
5,954 | 5,954 | 15,270,654 | 2,191 | A job state machine may transition to a downloading state in response to a start message on the job pipeline, wherein the job object causes job data to be downloaded to the device when the job state machine is in the downloading state. The job state machine may transition to an installing state in response to an assertion message on the job pipeline, wherein the job object causes downloaded job data to be installed on the device when the job state machine is in the installing state. The job state machine may transition to a finished state when the job data is installed on the device. The job state machine may be recoverable to the waiting state, the downloading state, or the installing state in response to a job object failure while the job state machine is in the waiting state, downloading state, or installing state, respectively. | 1. A method of managing application installation on a device, the method comprising:
instantiating, with a job manager, a job object comprising a job state machine in a waiting state and a job pipeline configured to relay messages related to a job; transitioning, with the job object, the job state machine to a downloading state in response to a start message on the job pipeline, wherein the job object causes job data to be downloaded to the device when the job state machine is in the downloading state; transitioning, with the job object, the job state machine to an installing state in response to an assertion message on the job pipeline, wherein the job object causes downloaded job data to be installed on the device when the job state machine is in the installing state; and transitioning, with the job object, the job state machine to a finished state when the job data is installed on the device; wherein the job state machine is recoverable to the waiting state, the downloading state, or the installing state in response to a job object failure while the job state machine is in the waiting state, downloading state, or installing state, respectively. 2. The method of claim 1, further comprising:
transitioning, with the job object, the job state machine to a canceling state in response to a cancel message on the job pipeline and a determination that a job data download is pending; and transitioning, with the job object, the job state machine to a canceled state in response to a cancel message on the job pipeline and a determination that no job data download is pending. 3. The method of claim 1, further comprising transitioning, with the job object, the job state machine to a failed state in response to the job object failure. 4. The method of claim 1, further comprising:
transitioning, with the job object, the job state machine to a paused state in response to a pause message on the job pipeline; and transitioning, with the job object, the job state machine to the waiting state in response to a resume message on the job pipeline. 5. The method of claim 1, further comprising:
transitioning, with the job object, the job state machine to a pending install state before the transitioning to the installing state; wherein: the job object receives user approval of the job when the job state machine is in the pending install state; and the job state machine is recoverable to the pending install state in response to a job object failure while the job state machine is in the pending install state. 6. The method of claim 1, further comprising:
transitioning, with the job object, the job state machine to a preparing install state before the transitioning to the installing state; wherein: the job object receives an assertion that a precondition for installing has been met when the job state machine is in the preparing install state; the transitioning to the installing state is performed upon receiving the assertion; and the job state machine is recoverable to the preparing install state in response to a job object failure while the job state machine is in the preparing install state. 7. The method of claim 1, further comprising:
instantiating, with the job manager, a download object comprising a download state machine (DSM) in a waiting state; when the job state machine transitions to the downloading state, transitioning, with the download object, the download state machine to a DSM downloading state, wherein the download object downloads the job data to the device when the download state machine is in the DSM downloading state; and after a completion of job data downloading, transitioning, with the download object, the download state machine out of the DSM downloading state; wherein the download state machine is recoverable to the DSM downloading state in response to a download object failure while the download state machine is in the DSM downloading state. 8. The method of claim 7, wherein the completion of job data downloading by the download object triggers a transition of the job state machine out of the downloading state. 9. The method of claim 7, wherein the completion of job data downloading comprises a completed download, a download failure, or a download cancellation. 10. The method of claim 1, further comprising:
instantiating, with the job manager, an install object comprising an install state machine (ISM); when the job state machine transitions to the installing state, transitioning, with the install object, the install state machine to an ISM installing state, wherein the install object installs the job data to the device when the install state machine is in the ISM installing state; and after a completion of job data installation, transitioning, with the install object, the install state machine out of the ISM installing state; wherein the install state machine is recoverable to the ISM installing state in response to an install object failure while the install state machine is in the ISM installing state. 11. The method of claim 10, wherein the completion of job data installation by the install object triggers a transition of the job state machine out of the installing state. 12. The method of claim 10, further comprising:
instantiating, with the job manager, the install state machine in an icon state or transitioning, with the install object, the install state machine to the icon state, wherein the install object performs processing associated with displaying an icon on the device when the install state machine is in the icon state; and transitioning, with the install object, the install state machine to a placeholder state when the icon is displayed on the device. 13. The method of claim 1, wherein the job comprises an application installation, an application upgrade, or an application restoration. 14. A non-transitory computer-readable medium comprising code that, when executed by a processor of a device, causes the processor to:
instantiate, with a job manager, a job object comprising a job state machine in a waiting state and a job pipeline configured to relay messages related to a job; transition, with the job object, the job state machine to a downloading state in response to a start message on the job pipeline, wherein the job object causes job data to be downloaded to the device when the job state machine is in the downloading state; transition, with the job object, the job state machine to an installing state in response to an assertion message on the job pipeline, wherein the job object causes downloaded job data to be installed on the device when the job state machine is in the installing state; and transition, with the job object, the job state machine to a finished state when the job data is installed on the device; wherein the job state machine is recoverable to the waiting state, the downloading state, or the installing state in response to a job object failure while the job state machine is in the waiting state, downloading state, or installing state, respectively. 15. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to: transition, with the job object, the job state machine to a canceling state in response to a cancel message on the job pipeline and a determination that a job data download is pending; and
transition, with the job object, the job state machine to a canceled state in response to a cancel message on the job pipeline and a determination that no job data download is pending. 16. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to transition, with the job object, the job state machine to a failed state in response to the job object failure. 17. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
transition, with the job object, the job state machine to a paused state in response to a pause message on the job pipeline; and transition, with the job object, the job state machine to the waiting state in response to a resume message on the job pipeline. 18. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
transition, with the job object, the job state machine to a pending install state before the transitioning to the installing state; wherein:
the job object receives user approval of the job when the job state machine is in the pending install state; and
the job state machine is recoverable to the pending install state in response to a job object failure while the job state machine is in the pending install state. 19. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
transition, with the job object, the job state machine to a preparing install state before the transitioning to the installing state; wherein: the job object receives an assertion that a precondition for installing has been met when the job state machine is in the preparing install state; the transitioning to the installing state is performed upon receiving the assertion; and the job state machine is recoverable to the preparing install state in response to a job object failure while the job state machine is in the preparing install state. 20. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, a download object comprising a download state machine (DSM) in a waiting state; when the job state machine transitions to the downloading state, transition, with the download object, the download state machine to a DSM downloading state, wherein the download object downloads the job data to the device when the download state machine is in the DSM downloading state; and after a completion of job data downloading, transition, with the download object, the download state machine out of the DSM downloading state; wherein the download state machine is recoverable to the DSM downloading state in response to a download object failure while the download state machine is in the DSM downloading state. 21. The non-transitory computer-readable medium of claim 20, wherein the completion of job data downloading by the download object triggers a transition of the job state machine out of the downloading state. 22. The non-transitory computer-readable medium of claim 20, wherein the completion of job data downloading comprises a completed download, a download failure, or a download cancellation. 23. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, an install object comprising an install state machine (ISM); when the job state machine transitions to the installing state, transition, with the install object, the install state machine to an ISM installing state, wherein the install object installs the job data to the device when the install state machine is in the ISM installing state; and after a completion of job data installation, transition, with the install object, the install state machine out of the ISM installing state; wherein the install state machine is recoverable to the ISM installing state in response to an install object failure while the install state machine is in the ISM installing state. 24. The non-transitory computer-readable medium of claim 23, wherein the completion of job data installation by the install object triggers a transition of the job state machine out of the installing state. 25. The non-transitory computer-readable medium of claim 23, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, the install state machine in an icon state or transition, with the install object, the install state machine to the icon state, wherein the install object performs processing associated with displaying an icon on the device when the install state machine is in the icon state; and transition, with the install object, the install state machine to a placeholder state when the icon is displayed on the device. 26. The non-transitory computer-readable medium of claim 14, wherein the job comprises an application installation, an application upgrade, or an application restoration. 27. A method of managing application installation on a device, the method comprising:
instantiating, with a job manager, a job object comprising a job state machine in a waiting state and a job pipeline configured to relay messages related to the job; downloading, with the job object, job data in response to a start message on the job pipeline, wherein the job state machine is in a downloading state during the downloading; installing, with the job object, downloaded job data on the device, wherein the job state machine is in an installing state during the installing; and completing, with the job object, the job, wherein the job state machine is in a finished state during the completing; wherein the job is recoverable to the instantiating step, the downloading step, or the installing step in response to a job object failure by restoring the job state machine to the waiting state, downloading state, or installing state, respectively. 28. The method of claim 27, further comprising:
canceling, with the job object, the job in response to a cancel message on the job pipeline and a determination that a job data download is pending, wherein the job state machine is in a canceling state during the canceling; and ending, with the job object, the job in response to a cancel message on the job pipeline and a determination that no job data download is pending, wherein the job state machine is in a canceled state during the ending. 29. The method of claim 27, wherein the job state machine is in a failed state after the job object failure. 30. The method of claim 27, further comprising:
pausing, with the job object, the job in response to a pause message on the job pipeline, wherein the job state machine is in a paused state during the pausing; and resuming, with the job object, the job in response to a resume message on the job pipeline, wherein the job state machine is returned to the waiting state upon resuming. 31. The method of claim 27, further comprising:
requesting, with the job object, user approval of the job before the installing; wherein:
the job state machine is in a pending install state during the requesting; and
the job is recoverable to the requesting step in response to a job object failure by restoring the job state machine to the pending install state. 32. The method of claim 27, further comprising:
preparing, with the job object, an installation before the installing state; wherein:
the job state machine is in a preparing install state during the preparing;
the job object receives an assertion that a precondition for installing has been met when the job state machine is in the preparing install state;
the installing is performed upon receiving the assertion; and
the job is recoverable to the preparing install state in response to a job object failure by restoring the job state machine to the preparing install state. 33. The method of claim 27, further comprising:
instantiating, with the job manager, a download object comprising a download state machine (DSM) in a waiting state; when the job state machine transitions to the downloading state, transitioning, with the download object, the download state machine to a DSM downloading state, wherein the download object downloads the job data to the device when the download state machine is in the DSM downloading state; and after a completion of job data downloading, transitioning, with the download object, the download state machine out of the DSM downloading state; wherein the download state machine is recoverable to the DSM downloading state in response to a download object failure while the download state machine is in the DSM downloading state. 34. The method of claim 33, wherein the completion of job data downloading by the download object triggers a transition of the job state machine out of the downloading state. 35. The method of claim 33, wherein the completion of job data downloading comprises a completed download, a download failure, or a download cancellation. 36. The method of claim 27, further comprising:
instantiating, with the job manager, an install object comprising an install state machine (ISM); when the job state machine transitions to the installing state, transitioning, with the install object, the install state machine to an ISM installing state, wherein the install object installs the job data to the device when the install state machine is in the ISM installing state; and after a completion of job data installation, transitioning, with the install object, the install state machine out of the ISM installing state; wherein the install state machine is recoverable to the ISM installing state in response to an install object failure while the install state machine is in the ISM installing state. 37. The method of claim 36, wherein the completion of job data installation by the install object triggers a transition of the job state machine out of the installing state. 38. The method of claim 36, further comprising:
instantiating, with the job manager, the install state machine in an icon state or transitioning, with the install object, the install state machine to the icon state, wherein the install object performs processing associated with displaying an icon on the device when the install state machine is in the icon state; and transitioning, with the install object, the install state machine to a placeholder state when the icon is displayed on the device. 39. The method of claim 27, wherein the job comprises an application installation, an application upgrade, or an application restoration. 40. A non-transitory computer-readable medium comprising code that, when executed by a processor of a device, causes the processor to:
instantiate, with a job manager, a job object comprising a job state machine in a waiting state and a job pipeline configured to relay messages related to the job; download, with the job object, job data in response to a start message on the job pipeline, wherein the job state machine is in a downloading state during the downloading; install, with the job object, downloaded job data on the device, wherein the job state machine is in an installing state during the installing; and complete, with the job object, the job, wherein the job state machine is in a finished state during the completing; wherein the job is recoverable to the instantiating step, the downloading step, or the installing step in response to a job object failure by restoring the job state machine to the waiting state, downloading state, or installing state, respectively. 41. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
cancel, with the job object, the job in response to a cancel message on the job pipeline and a determination that a job data download is pending, wherein the job state machine is in a canceling state during the canceling; and end, with the job object, the job in response to a cancel message on the job pipeline and a determination that no job data download is pending, wherein the job state machine is in a canceled state during the ending. 42. The non-transitory computer-readable medium of claim 40, wherein the job state machine is in a failed state after the job object failure. 43. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
pause, with the job object, the job in response to a pause message on the job pipeline, wherein the job state machine is in a paused state during the pausing; and resume, with the job object, the job in response to a resume message on the job pipeline, wherein the job state machine is returned to the waiting state upon resuming. 44. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
request, with the job object, user approval of the job before the installing; wherein:
the job state machine is in a pending install state during the requesting; and
the job is recoverable to the requesting step in response to a job object failure by restoring the job state machine to the pending install state. 45. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
prepare, with the job object, an installation before the installing state; wherein:
the job state machine is in a preparing install state during the preparing;
the job object receives an assertion that a precondition for installing has been met when the job state machine is in the preparing install state;
the installing is performed upon receiving the assertion; and
the job is recoverable to the preparing install state in response to a job object failure by restoring the job state machine to the preparing install state. 46. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, a download object comprising a download state machine (DSM) in a waiting state; when the job state machine transitions to the downloading state, transition, with the download object, the download state machine to a DSM downloading state, wherein the download object downloads the job data to the device when the download state machine is in the DSM downloading state; and after a completion of job data downloading, transition, with the download object, the download state machine out of the DSM downloading state; wherein the download state machine is recoverable to the DSM downloading state in response to a download object failure while the download state machine is in the DSM downloading state. 47. The non-transitory computer-readable medium of claim 46, wherein the completion of job data downloading by the download object triggers a transition of the job state machine out of the downloading state. 48. The non-transitory computer-readable medium of claim 46, wherein the completion of job data downloading comprises a completed download, a download failure, or a download cancellation. 49. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, an install object comprising an install state machine (ISM); when the job state machine transitions to the installing state, transition, with the install object, the install state machine to an ISM installing state, wherein the install object installs the job data to the device when the install state machine is in the ISM installing state; and after a completion of job data installation, transition, with the install object, the install state machine out of the ISM installing state; wherein the install state machine is recoverable to the ISM installing state in response to an install object failure while the install state machine is in the ISM installing state. 50. The non-transitory computer-readable medium of claim 49, wherein the completion of job data installation by the install object triggers a transition of the job state machine out of the installing state. 51. The non-transitory computer-readable medium of claim 49, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, the install state machine in an icon state or transition, with the install object, the install state machine to the icon state, wherein the install object performs processing associated with displaying an icon on the device when the install state machine is in the icon state; and transition, with the install object, the install state machine to a placeholder state when the icon is displayed on the device. 52. The non-transitory computer-readable medium of claim 40, wherein the job comprises an application installation, an application upgrade, or an application restoration. | A job state machine may transition to a downloading state in response to a start message on the job pipeline, wherein the job object causes job data to be downloaded to the device when the job state machine is in the downloading state. The job state machine may transition to an installing state in response to an assertion message on the job pipeline, wherein the job object causes downloaded job data to be installed on the device when the job state machine is in the installing state. The job state machine may transition to a finished state when the job data is installed on the device. The job state machine may be recoverable to the waiting state, the downloading state, or the installing state in response to a job object failure while the job state machine is in the waiting state, downloading state, or installing state, respectively. 1. A method of managing application installation on a device, the method comprising:
instantiating, with a job manager, a job object comprising a job state machine in a waiting state and a job pipeline configured to relay messages related to a job; transitioning, with the job object, the job state machine to a downloading state in response to a start message on the job pipeline, wherein the job object causes job data to be downloaded to the device when the job state machine is in the downloading state; transitioning, with the job object, the job state machine to an installing state in response to an assertion message on the job pipeline, wherein the job object causes downloaded job data to be installed on the device when the job state machine is in the installing state; and transitioning, with the job object, the job state machine to a finished state when the job data is installed on the device; wherein the job state machine is recoverable to the waiting state, the downloading state, or the installing state in response to a job object failure while the job state machine is in the waiting state, downloading state, or installing state, respectively. 2. The method of claim 1, further comprising:
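The core flow recited in claim 1 above (waiting → downloading → installing → finished, driven by pipeline messages, with recovery to the last of those states after a job object failure) can be sketched as a minimal Python model. All class, method, and message names here are hypothetical illustrations, not from the claims themselves; only the state names and transition triggers come from the claim text.

```python
from enum import Enum, auto

class JobState(Enum):
    WAITING = auto()
    DOWNLOADING = auto()
    INSTALLING = auto()
    FINISHED = auto()

class JobObject:
    """Hypothetical job object: pipeline messages drive the state machine,
    and the last recoverable state is checkpointed as the recovery point."""

    RECOVERABLE = {JobState.WAITING, JobState.DOWNLOADING, JobState.INSTALLING}

    def __init__(self):
        self.state = JobState.WAITING
        self._checkpoint = self.state  # persisted recovery point

    def _transition(self, new_state):
        self.state = new_state
        if new_state in self.RECOVERABLE:
            self._checkpoint = new_state  # only recoverable states are checkpointed

    def on_message(self, message):
        if message == "start" and self.state is JobState.WAITING:
            self._transition(JobState.DOWNLOADING)  # job data download begins
        elif message == "assertion" and self.state is JobState.DOWNLOADING:
            self._transition(JobState.INSTALLING)   # install downloaded job data
        elif message == "installed" and self.state is JobState.INSTALLING:
            self._transition(JobState.FINISHED)

    def recover(self):
        # After a job object failure, resume from the checkpointed state
        # (waiting, downloading, or installing, per the claim).
        self.state = self._checkpoint
```

The one design point the claims hinge on is that only waiting, downloading, and installing are recovery targets; a sketch like this captures that by checkpointing only those states.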
transitioning, with the job object, the job state machine to a canceling state in response to a cancel message on the job pipeline and a determination that a job data download is pending; and transitioning, with the job object, the job state machine to a canceled state in response to a cancel message on the job pipeline and a determination that no job data download is pending. 3. The method of claim 1, further comprising transitioning, with the job object, the job state machine to a failed state in response to the job object failure. 4. The method of claim 1, further comprising:
transitioning, with the job object, the job state machine to a paused state in response to a pause message on the job pipeline; and transitioning, with the job object, the job state machine to the waiting state in response to a resume message on the job pipeline. 5. The method of claim 1, further comprising:
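Claims 2 through 4 above add the control paths: cancel forks on whether a job data download is pending (canceling vs. canceled), failure maps to a failed state, and pause/resume round-trips through paused back to waiting. A hedged sketch of that dispatch, with the function name and message strings as assumptions:

```python
def handle_control_message(state, message, download_pending):
    """Illustrative dispatch for the cancel/pause/resume messages recited
    in the claims; state names come from the claims, this function does not.
    Returns the next job state name."""
    if message == "pause":
        return "paused"
    if message == "resume" and state == "paused":
        return "waiting"  # resuming returns the machine to the waiting state
    if message == "cancel":
        # A pending job data download must be wound down first (canceling);
        # otherwise the job ends immediately (canceled).
        return "canceling" if download_pending else "canceled"
    return state
```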
transitioning, with the job object, the job state machine to a pending install state before the transitioning to the installing state; wherein: the job object receives user approval of the job when the job state machine is in the pending install state; and the job state machine is recoverable to the pending install state in response to a job object failure while the job state machine is in the pending install state. 6. The method of claim 1, further comprising:
transitioning, with the job object, the job state machine to a preparing install state before the transitioning to the installing state; wherein: the job object receives an assertion that a precondition for installing has been met when the job state machine is in the preparing install state; the transitioning to the installing state is performed upon receiving the assertion; and the job state machine is recoverable to the preparing install state in response to a job object failure while the job state machine is in the preparing install state. 7. The method of claim 1, further comprising:
instantiating, with the job manager, a download object comprising a download state machine (DSM) in a waiting state; when the job state machine transitions to the downloading state, transitioning, with the download object, the download state machine to a DSM downloading state, wherein the download object downloads the job data to the device when the download state machine is in the DSM downloading state; and after a completion of job data downloading, transitioning, with the download object, the download state machine out of the DSM downloading state; wherein the download state machine is recoverable to the DSM downloading state in response to a download object failure while the download state machine is in the DSM downloading state. 8. The method of claim 7, wherein the completion of job data downloading by the download object triggers a transition of the job state machine out of the downloading state. 9. The method of claim 7, wherein the completion of job data downloading comprises a completed download, a download failure, or a download cancellation. 10. The method of claim 1, further comprising:
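Claims 7 through 9 above describe a parent/child arrangement: the job manager instantiates a separate download object with its own download state machine (DSM), and completion of the download (whether completed, failed, or canceled) triggers the parent job state machine's transition out of its downloading state. A minimal sketch of that coordination, with all identifiers hypothetical:

```python
class DownloadObject:
    """Illustrative child download object with its own DSM; completion is
    reported to the parent job via a callback (an assumed mechanism)."""

    def __init__(self, on_complete):
        self.dsm_state = "waiting"
        self._on_complete = on_complete  # callback into the parent job

    def start(self):
        self.dsm_state = "downloading"  # DSM downloading state

    def finish(self, outcome):
        # Per the claims, completion covers success, failure, or cancellation.
        assert outcome in {"completed", "failed", "canceled"}
        self.dsm_state = outcome
        self._on_complete(outcome)  # triggers the parent's transition

class Job:
    def __init__(self):
        self.state = "downloading"

    def on_download_complete(self, outcome):
        # Parent leaves its downloading state once the child DSM completes.
        self.state = "installing" if outcome == "completed" else "failed"
```

Keeping the transfer in a child state machine is what makes the DSM downloading state independently recoverable after a download object failure, as the claims recite.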
instantiating, with the job manager, an install object comprising an install state machine (ISM); when the job state machine transitions to the installing state, transitioning, with the install object, the install state machine to an ISM installing state, wherein the install object installs the job data to the device when the install state machine is in the ISM installing state; and after a completion of job data installation, transitioning, with the install object, the install state machine out of the ISM installing state; wherein the install state machine is recoverable to the ISM installing state in response to an install object failure while the install state machine is in the ISM installing state. 11. The method of claim 10, wherein the completion of job data installation by the install object triggers a transition of the job state machine out of the installing state. 12. The method of claim 10, further comprising:
instantiating, with the job manager, the install state machine in an icon state or transitioning, with the install object, the install state machine to the icon state, wherein the install object performs processing associated with displaying an icon on the device when the install state machine is in the icon state; and transitioning, with the install object, the install state machine to a placeholder state when the icon is displayed on the device. 13. The method of claim 1, wherein the job comprises an application installation, an application upgrade, or an application restoration. 14. A non-transitory computer-readable medium comprising code that, when executed by a processor of a device, causes the processor to:
instantiate, with a job manager, a job object comprising a job state machine in a waiting state and a job pipeline configured to relay messages related to a job; transition, with the job object, the job state machine to a downloading state in response to a start message on the job pipeline, wherein the job object causes job data to be downloaded to the device when the job state machine is in the downloading state; transition, with the job object, the job state machine to an installing state in response to an assertion message on the job pipeline, wherein the job object causes downloaded job data to be installed on the device when the job state machine is in the installing state; and transition, with the job object, the job state machine to a finished state when the job data is installed on the device; wherein the job state machine is recoverable to the waiting state, the downloading state, or the installing state in response to a job object failure while the job state machine is in the waiting state, downloading state, or installing state, respectively. 15. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to: transition, with the job object, the job state machine to a canceling state in response to a cancel message on the job pipeline and a determination that a job data download is pending; and
transition, with the job object, the job state machine to a canceled state in response to a cancel message on the job pipeline and a determination that no job data download is pending. 16. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to transition, with the job object, the job state machine to a failed state in response to the job object failure. 17. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
transition, with the job object, the job state machine to a paused state in response to a pause message on the job pipeline; and transition, with the job object, the job state machine to the waiting state in response to a resume message on the job pipeline. 18. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
transition, with the job object, the job state machine to a pending install state before the transitioning to the installing state; wherein:
the job object receives user approval of the job when the job state machine is in the pending install state; and
the job state machine is recoverable to the pending install state in response to a job object failure while the job state machine is in the pending install state. 19. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
transition, with the job object, the job state machine to a preparing install state before the transitioning to the installing state; wherein: the job object receives an assertion that a precondition for installing has been met when the job state machine is in the preparing install state; the transitioning to the installing state is performed upon receiving the assertion; and the job state machine is recoverable to the preparing install state in response to a job object failure while the job state machine is in the preparing install state. 20. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, a download object comprising a download state machine (DSM) in a waiting state; when the job state machine transitions to the downloading state, transition, with the download object, the download state machine to a DSM downloading state, wherein the download object downloads the job data to the device when the download state machine is in the DSM downloading state; and after a completion of job data downloading, transition, with the download object, the download state machine out of the DSM downloading state; wherein the download state machine is recoverable to the DSM downloading state in response to a download object failure while the download state machine is in the DSM downloading state. 21. The non-transitory computer-readable medium of claim 20, wherein the completion of job data downloading by the download object triggers a transition of the job state machine out of the downloading state. 22. The non-transitory computer-readable medium of claim 20, wherein the completion of job data downloading comprises a completed download, a download failure, or a download cancellation. 23. The non-transitory computer-readable medium of claim 14, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, an install object comprising an install state machine (ISM); when the job state machine transitions to the installing state, transition, with the install object, the install state machine to an ISM installing state, wherein the install object installs the job data to the device when the install state machine is in the ISM installing state; and after a completion of job data installation, transition, with the install object, the install state machine out of the ISM installing state; wherein the install state machine is recoverable to the ISM installing state in response to an install object failure while the install state machine is in the ISM installing state. 24. The non-transitory computer-readable medium of claim 23, wherein the completion of job data installation by the install object triggers a transition of the job state machine out of the installing state. 25. The non-transitory computer-readable medium of claim 23, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, the install state machine in an icon state or transition, with the install object, the install state machine to the icon state, wherein the install object performs processing associated with displaying an icon on the device when the install state machine is in the icon state; and transition, with the install object, the install state machine to a placeholder state when the icon is displayed on the device. 26. The non-transitory computer-readable medium of claim 14, wherein the job comprises an application installation, an application upgrade, or an application restoration. 27. A method of managing application installation on a device, the method comprising:
instantiating, with a job manager, a job object comprising a job state machine in a waiting state and a job pipeline configured to relay messages related to the job; downloading, with the job object, job data in response to a start message on the job pipeline, wherein the job state machine is in a downloading state during the downloading; installing, with the job object, downloaded job data on the device, wherein the job state machine is in an installing state during the installing; and completing, with the job object, the job, wherein the job state machine is in a finished state during the completing; wherein the job is recoverable to the instantiating step, the downloading step, or the installing step in response to a job object failure by restoring the job state machine to the waiting state, downloading state, or installing state, respectively. 28. The method of claim 27, further comprising:
canceling, with the job object, the job in response to a cancel message on the job pipeline and a determination that a job data download is pending, wherein the job state machine is in a canceling state during the canceling; and ending, with the job object, the job in response to a cancel message on the job pipeline and a determination that no job data download is pending, wherein the job state machine is in a canceled state during the ending. 29. The method of claim 27, wherein the job state machine is in a failed state after the job object failure. 30. The method of claim 27, further comprising:
pausing, with the job object, the job in response to a pause message on the job pipeline, wherein the job state machine is in a paused state during the pausing; and resuming, with the job object, the job in response to a resume message on the job pipeline, wherein the job state machine is returned to the waiting state upon resuming. 31. The method of claim 27, further comprising:
requesting, with the job object, user approval of the job before the installing; wherein:
the job state machine is in a pending install state during the requesting; and
the job is recoverable to the requesting step in response to a job object failure by restoring the job state machine to the pending install state. 32. The method of claim 27, further comprising:
preparing, with the job object, an installation before the installing state; wherein:
the job state machine is in a preparing install state during the preparing;
the job object receives an assertion that a precondition for installing has been met when the job state machine is in the preparing install state;
the installing is performed upon receiving the assertion; and
the job is recoverable to the preparing install state in response to a job object failure by restoring the job state machine to the preparing install state. 33. The method of claim 27, further comprising:
instantiating, with the job manager, a download object comprising a download state machine (DSM) in a waiting state; when the job state machine transitions to the downloading state, transitioning, with the download object, the download state machine to a DSM downloading state, wherein the download object downloads the job data to the device when the download state machine is in the DSM downloading state; and after a completion of job data downloading, transitioning, with the download object, the download state machine out of the DSM downloading state; wherein the download state machine is recoverable to the DSM downloading state in response to a download object failure while the download state machine is in the DSM downloading state. 34. The method of claim 33, wherein the completion of job data downloading by the download object triggers a transition of the job state machine out of the downloading state. 35. The method of claim 33, wherein the completion of job data downloading comprises a completed download, a download failure, or a download cancellation. 36. The method of claim 27, further comprising:
instantiating, with the job manager, an install object comprising an install state machine (ISM); when the job state machine transitions to the installing state, transitioning, with the install object, the install state machine to an ISM installing state, wherein the install object installs the job data to the device when the install state machine is in the ISM installing state; and after a completion of job data installation, transitioning, with the install object, the install state machine out of the ISM installing state; wherein the install state machine is recoverable to the ISM installing state in response to an install object failure while the install state machine is in the ISM installing state. 37. The method of claim 36, wherein the completion of job data installation by the install object triggers a transition of the job state machine out of the installing state. 38. The method of claim 36, further comprising:
instantiating, with the job manager, the install state machine in an icon state or transitioning, with the install object, the install state machine to the icon state, wherein the install object performs processing associated with displaying an icon on the device when the install state machine is in the icon state; and transitioning, with the install object, the install state machine to a placeholder state when the icon is displayed on the device. 39. The method of claim 27, wherein the job comprises an application installation, an application upgrade, or an application restoration. 40. A non-transitory computer-readable medium comprising code that, when executed by a processor of a device, causes the processor to:
instantiate, with a job manager, a job object comprising a job state machine in a waiting state and a job pipeline configured to relay messages related to the job; download, with the job object, job data in response to a start message on the job pipeline, wherein the job state machine is in a downloading state during the downloading; install, with the job object, downloaded job data on the device, wherein the job state machine is in an installing state during the installing; and complete, with the job object, the job, wherein the job state machine is in a finished state during the completing; wherein the job is recoverable to the instantiating step, the downloading step, or the installing step in response to a job object failure by restoring the job state machine to the waiting state, downloading state, or installing state, respectively. 41. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
cancel, with the job object, the job in response to a cancel message on the job pipeline and a determination that a job data download is pending, wherein the job state machine is in a canceling state during the canceling; and end, with the job object, the job in response to a cancel message on the job pipeline and a determination that no job data download is pending, wherein the job state machine is in a canceled state during the ending. 42. The non-transitory computer-readable medium of claim 40, wherein the job state machine is in a failed state after the job object failure. 43. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
pause, with the job object, the job in response to a pause message on the job pipeline, wherein the job state machine is in a paused state during the pausing; and resume, with the job object, the job in response to a resume message on the job pipeline, wherein the job state machine is returned to the waiting state upon resuming. 44. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
request, with the job object, user approval of the job before the installing; wherein:
the job state machine is in a pending install state during the requesting; and
the job is recoverable to the requesting step in response to a job object failure by restoring the job state machine to the pending install state. 45. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
prepare, with the job object, an installation before the installing state; wherein:
the job state machine is in a preparing install state during the preparing;
the job object receives an assertion that a precondition for installing has been met when the job state machine is in the preparing install state;
the installing is performed upon receiving the assertion; and
the job is recoverable to the preparing install state in response to a job object failure by restoring the job state machine to the preparing install state. 46. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, a download object comprising a download state machine (DSM) in a waiting state; when the job state machine transitions to the downloading state, transition, with the download object, the download state machine to a DSM downloading state, wherein the download object downloads the job data to the device when the download state machine is in the DSM downloading state; and after a completion of job data downloading, transition, with the download object, the download state machine out of the DSM downloading state; wherein the download state machine is recoverable to the DSM downloading state in response to a download object failure while the download state machine is in the DSM downloading state. 47. The non-transitory computer-readable medium of claim 46, wherein the completion of job data downloading by the download object triggers a transition of the job state machine out of the downloading state. 48. The non-transitory computer-readable medium of claim 46, wherein the completion of job data downloading comprises a completed download, a download failure, or a download cancellation. 49. The non-transitory computer-readable medium of claim 40, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, an install object comprising an install state machine (ISM); when the job state machine transitions to the installing state, transition, with the install object, the install state machine to an ISM installing state, wherein the install object installs the job data to the device when the install state machine is in the ISM installing state; and after a completion of job data installation, transition, with the install object, the install state machine out of the ISM installing state; wherein the install state machine is recoverable to the ISM installing state in response to an install object failure while the install state machine is in the ISM installing state. 50. The non-transitory computer-readable medium of claim 49, wherein the completion of job data installation by the install object triggers a transition of the job state machine out of the installing state. 51. The non-transitory computer-readable medium of claim 49, further comprising code that, when executed by the processor, causes the processor to:
instantiate, with the job manager, the install state machine in an icon state or transition, with the install object, the install state machine to the icon state, wherein the install object performs processing associated with displaying an icon on the device when the install state machine is in the icon state; and transition, with the install object, the install state machine to a placeholder state when the icon is displayed on the device. 52. The non-transitory computer-readable medium of claim 40, wherein the job comprises an application installation, an application upgrade, or an application restoration. | 2,100 |
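The job lifecycle claimed above (waiting → downloading → installing → finished, with pause/resume, cancel paths, and recovery to the last persisted state after a job object failure) can be sketched as a small checkpointed state machine. This is an illustrative software analogy only; all class, method, and state names here are hypothetical and are not taken from the application.

```python
# Hypothetical sketch of the claimed job state machine. Every transition is
# checkpointed, so a crashed job can be recovered to the waiting, downloading,
# or installing state it was in when the job object failed.

WAITING, DOWNLOADING, INSTALLING, FINISHED = "waiting", "downloading", "installing", "finished"
PAUSED, CANCELING, CANCELED, FAILED = "paused", "canceling", "canceled", "failed"

# Per the claims, the job is recoverable to these states; anything else fails.
RECOVERABLE = {WAITING, DOWNLOADING, INSTALLING}

class JobStateMachine:
    def __init__(self, checkpoint_store):
        self._store = checkpoint_store          # persisted on every transition
        self.state = None
        self._transition(WAITING)               # instantiated in a waiting state

    def _transition(self, new_state):
        self.state = new_state
        self._store["state"] = new_state        # checkpoint for crash recovery

    def on_message(self, msg, download_pending=False):
        """React to a message on the job pipeline."""
        if msg == "start" and self.state == WAITING:
            self._transition(DOWNLOADING)
        elif msg == "download_done" and self.state == DOWNLOADING:
            self._transition(INSTALLING)
        elif msg == "install_done" and self.state == INSTALLING:
            self._transition(FINISHED)
        elif msg == "pause":
            self._transition(PAUSED)
        elif msg == "resume" and self.state == PAUSED:
            self._transition(WAITING)           # resuming returns to waiting
        elif msg == "cancel":
            # canceling only while a job data download is pending; else canceled
            self._transition(CANCELING if download_pending else CANCELED)

    @classmethod
    def recover(cls, checkpoint_store):
        """Rebuild a job after a job object failure from its checkpoint."""
        saved = checkpoint_store.get("state", WAITING)
        job = cls(checkpoint_store)
        job._transition(saved if saved in RECOVERABLE else FAILED)
        return job
```

The claims' separate download and install objects (DSM/ISM) would follow the same checkpoint-and-recover pattern, each owning its own state key.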
5,955 | 5,955 | 15,158,902 | 2,132 | Dynamically configuring a storage system to facilitate independent scaling of resources, including: detecting a change to a topology of the storage system consisting of different sets of blades configured within one of a plurality of chassis; and reconfiguring the storage system to change an allocation of resources to one or more authorities responsive to detecting the change to the topology of the storage system. | 1. A method of dynamically configuring a storage system to facilitate independent scaling of resources, the method comprising:
detecting a change to a topology of the storage system consisting of different sets of blades configured within one of a plurality of chassis; and reconfiguring the storage system to change an allocation of resources to one or more authorities responsive to detecting the change to the topology of the storage system. 2. The method of claim 1, further comprising:
executing an authority on a first set of blades; and wherein reconfiguring the storage system further comprises executing the authority on a second set of blades. 3. The method of claim 1, further comprising:
associating storage on a first set of blades with a write group; and wherein reconfiguring the storage system further comprises associating, in dependence upon a write group formation policy, storage on a second set of blades with the write group. 4. The method of claim 1 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of processing resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of processing resources allocated to one or more authorities. 5. The method of claim 1 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of storage resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of storage associated with one or more write groups. 6. The method of claim 1 wherein detecting the change to the topology of the storage system further comprises detecting that a utilization of a particular resource has reached a utilization threshold. 7. The method of claim 1 wherein an authority executing on compute resources within a first blade causes data to be stored on compute resources within a second blade. 8. An apparatus for dynamically configuring a storage system to facilitate independent scaling of resources, the apparatus including a computer processor and a computer memory, the computer memory including computer program instructions that, when executed by the computer processor, cause the apparatus to carry out the steps of:
detecting a change to a topology of the storage system consisting of different sets of blades configured within one of a plurality of chassis; and reconfiguring the storage system to change an allocation of resources to one or more authorities responsive to detecting the change to the topology of the storage system. 9. The apparatus of claim 8 further comprising computer program instructions that, when executed by the computer processor, cause the apparatus to carry out the step of:
executing an authority on a first set of blades; and
wherein reconfiguring the storage system further comprises executing the authority on a second set of blades. 10. The apparatus of claim 8 further comprising computer program instructions that, when executed by the computer processor, cause the apparatus to carry out the step of:
associating storage on a first set of blades with a write group; and
wherein reconfiguring the storage system further comprises associating, in dependence upon a write group formation policy, storage on a second set of blades with the write group. 11. The apparatus of claim 8 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of processing resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of processing resources allocated to one or more authorities. 12. The apparatus of claim 8 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of storage resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of storage associated with one or more write groups. 13. The apparatus of claim 8 wherein detecting the change to the topology of the storage system further comprises detecting that a utilization of a particular resource has reached a utilization threshold. 14. The apparatus of claim 8 wherein an authority executing on compute resources within a first blade causes data to be stored on compute resources within a second blade. 15. A computer program product for dynamically configuring a storage system to facilitate independent scaling of resources, the computer program product disposed on a non-transitory storage medium, the computer program product including computer program instructions that, when executed by a computer, cause the computer to carry out the steps of:
detecting a change to a topology of the storage system consisting of different sets of blades configured within one of a plurality of chassis; and reconfiguring the storage system to change an allocation of resources to one or more authorities responsive to detecting the change to the topology of the storage system. 16. The computer program product of claim 15 further comprising computer program instructions that, when executed by the computer, cause the computer to carry out the step of:
executing an authority on a first set of blades; and
wherein reconfiguring the storage system further comprises executing the authority on a second set of blades. 17. The computer program product of claim 15 further comprising computer program instructions that, when executed by the computer, cause the computer to carry out the step of:
associating storage on a first set of blades with a write group; and
wherein reconfiguring the storage system further comprises associating, in dependence upon a write group formation policy, storage on a second set of blades with the write group. 18. The computer program product of claim 15 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of processing resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of processing resources allocated to one or more authorities. 19. The computer program product of claim 15 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of storage resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of storage associated with one or more write groups. 20. The computer program product of claim 15 wherein detecting the change to the topology of the storage system further comprises detecting that a utilization of a particular resource has reached a utilization threshold. | Dynamically configuring a storage system to facilitate independent scaling of resources, including: detecting a change to a topology of the storage system consisting of different sets of blades configured within one of a plurality of chassis; and reconfiguring the storage system to change an allocation of resources to one or more authorities responsive to detecting the change to the topology of the storage system. 1. A method of dynamically configuring a storage system to facilitate independent scaling of resources, the method comprising:
detecting a change to a topology of the storage system consisting of different sets of blades configured within one of a plurality of chassis; and reconfiguring the storage system to change an allocation of resources to one or more authorities responsive to detecting the change to the topology of the storage system. 2. The method of claim 1, further comprising:
executing an authority on a first set of blades; and wherein reconfiguring the storage system further comprises executing the authority on a second set of blades. 3. The method of claim 1, further comprising:
associating storage on a first set of blades with a write group; and wherein reconfiguring the storage system further comprises associating, in dependence upon a write group formation policy, storage on a second set of blades with the write group. 4. The method of claim 1 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of processing resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of processing resources allocated to one or more authorities. 5. The method of claim 1 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of storage resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of storage associated with one or more write groups. 6. The method of claim 1 wherein detecting the change to the topology of the storage system further comprises detecting that a utilization of a particular resource has reached a utilization threshold. 7. The method of claim 1 wherein an authority executing on compute resources within a first blade causes data to be stored on compute resources within a second blade. 8. An apparatus for dynamically configuring a storage system to facilitate independent scaling of resources, the apparatus including a computer processor and a computer memory, the computer memory including computer program instructions that, when executed by the computer processor, cause the apparatus to carry out the steps of:
detecting a change to a topology of the storage system consisting of different sets of blades configured within one of a plurality of chassis; and reconfiguring the storage system to change an allocation of resources to one or more authorities responsive to detecting the change to the topology of the storage system. 9. The apparatus of claim 8 further comprising computer program instructions that, when executed by the computer processor, cause the apparatus to carry out the step of:
executing an authority on a first set of blades; and
wherein reconfiguring the storage system further comprises executing the authority on a second set of blades. 10. The apparatus of claim 8 further comprising computer program instructions that, when executed by the computer processor, cause the apparatus to carry out the step of:
associating storage on a first set of blades with a write group; and
wherein reconfiguring the storage system further comprises associating, in dependence upon a write group formation policy, storage on a second set of blades with the write group. 11. The apparatus of claim 8 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of processing resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of processing resources allocated to one or more authorities. 12. The apparatus of claim 8 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of storage resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of storage associated with one or more write groups. 13. The apparatus of claim 8 wherein detecting the change to the topology of the storage system further comprises detecting that a utilization of a particular resource has reached a utilization threshold. 14. The apparatus of claim 8 wherein an authority executing on compute resources within a first blade causes data to be stored on compute resources within a second blade. 15. A computer program product for dynamically configuring a storage system to facilitate independent scaling of resources, the computer program product disposed on a non-transitory storage medium, the computer program product including computer program instructions that, when executed by a computer, cause the computer to carry out the steps of:
detecting a change to a topology of the storage system consisting of different sets of blades configured within one of a plurality of chassis; and reconfiguring the storage system to change an allocation of resources to one or more authorities responsive to detecting the change to the topology of the storage system. 16. The computer program product of claim 15 further comprising computer program instructions that, when executed by the computer, cause the computer to carry out the step of:
executing an authority on a first set of blades; and
wherein reconfiguring the storage system further comprises executing the authority on a second set of blades. 17. The computer program product of claim of 15 further comprising computer program instructions that, when executed by the computer, cause the computer to carry out the step of:
associating storage on a first set of blades with a write group; and
wherein reconfiguring the storage system further comprises associating, in dependence upon a write group formation policy, storage on a second set of blades with the write group. 18. The computer program product of claim 15 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of processing resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of processing resources allocated to one or more authorities. 19. The computer program product of claim 15 wherein:
detecting the change to the topology of the storage system further comprises detecting that an amount of storage resources within the storage system has changed; and
reconfiguring the storage system further comprises increasing or decreasing an amount of storage associated with one or more write groups. 20. The computer program product of claim 15 wherein detecting the change to the topology of the storage system further comprises detecting that a utilization of a particular resource has reached a utilization threshold. | 2,100 |
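The claimed flow for row 5,955 — detect a change to the blade topology, then change the allocation of resources to one or more authorities — can be sketched as below. The allocation policy (a round-robin split of blades across authorities) is an assumption made for illustration; the claims do not prescribe any particular policy, and all function names are hypothetical.

```python
# Hypothetical sketch: detect a topology change in a blade-based storage
# system, then reallocate blades to "authorities" responsive to the change.

def detect_topology_change(old_blades, new_blades):
    """True when blades were added to or removed from the chassis."""
    return set(old_blades) != set(new_blades)

def reallocate(authorities, blades):
    """Round-robin blades across authorities (an assumed policy).

    Returns a mapping {authority: [blades]}.
    """
    allocation = {a: [] for a in authorities}
    for i, blade in enumerate(sorted(blades)):
        allocation[authorities[i % len(authorities)]].append(blade)
    return allocation

def reconfigure(authorities, old_blades, new_blades):
    """Reconfigure only in response to a detected topology change."""
    if detect_topology_change(old_blades, new_blades):
        return reallocate(authorities, new_blades)
    return None  # topology unchanged: keep the existing allocation
```

A utilization-threshold trigger (claims 6, 13, 20) would simply be a second condition alongside `detect_topology_change` before reallocating.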
5,956 | 5,956 | 15,353,549 | 2,183 | A method for supporting architecture speculation in an out of order processor is disclosed. The method comprises fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state. The method also comprises enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor, wherein the speculative scope and the non-speculative scope can both be fetched into the architecture and be present concurrently. | 1. A method for supporting architecture speculation in an out of order processor, the method comprising:
fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state; and enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor, wherein the speculative scope and the non-speculative scope can both be fetched into the architecture and be present concurrently. 2. The method of claim 1, wherein the speculative scope sets its respective mode differently from the non-speculative scope. 3. The method of claim 2, wherein in a non-speculative mode, register reads are from committed registers, register writes are to committed registers, and memory writes are written to memory. 4. The method of claim 2, wherein in a speculative mode, register writes are written to a speculative scratch shadow register, register reads are from a latest write, and memory writes to a retirement memory buffer. 5. The method of claim 2, wherein both a speculative scope and a non-speculative scope execute simultaneously. 6. The method of claim 4, wherein code in the speculative mode can be rolled back if an exception occurs. 7. A method for supporting architecture speculation in an out of order processor, the method comprising:
fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state; and enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor. 8. The method of claim 7, further comprising:
fetching the speculative scope and the non-speculative scope to enable both scopes to be present in the architecture concurrently. 9. The method of claim 7, wherein the speculative scope sets its respective mode differently from the non-speculative scope. 10. The method of claim 7, wherein in a non-speculative mode, register reads are from committed registers, register writes are to committed registers, and memory writes are written to memory. 11. The method of claim 7, wherein in a speculative mode, register writes are written to a speculative scratch shadow register, register reads are from a latest write, and memory writes to a retirement memory buffer. 12. The method of claim 7, wherein both a speculative scope and a non-speculative scope execute simultaneously. 13. The method of claim 11, wherein code in the speculative mode can be rolled back if an exception occurs. 14. A microprocessor coupled to a memory, wherein the memory has computer readable instructions which when executed by the microprocessor cause the microprocessor to implement a method for supporting architecture speculation in an out of order processor, the method comprising:
fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state; and enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor. 15. The microprocessor of claim 14, wherein the speculative scope and the non-speculative scope can both be fetched into the architecture and be present concurrently. 16. The microprocessor of claim 14, wherein the speculative scope sets its respective mode differently from the non-speculative scope. 17. The microprocessor of claim 14, wherein in a non-speculative mode, register reads are from committed registers, register writes are to committed registers, and memory writes are written to memory. 18. The microprocessor of claim 14, wherein in a speculative mode, register writes are written to a speculative scratch shadow register, register reads are from a latest write, and memory writes to a retirement memory buffer. 19. The microprocessor of claim 14, wherein both a speculative scope and a non-speculative scope execute simultaneously. 20. The microprocessor of claim 19, wherein one scope is fetched into the architecture after a current scope thereby allowing dependencies between scopes to be honored. 21. The microprocessor of claim 14, wherein code in the speculative mode can be rolled back if an exception occurs. | A method for supporting architecture speculation in an out of order processor is disclosed. The method comprises fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state. 
The method also comprises enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor, wherein the speculative scope and the non-speculative scope can both be fetched into the architecture and be present concurrently. 1. A method for supporting architecture speculation in an out of order processor, the method comprising:
fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state; and enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor, wherein the speculative scope and the non-speculative scope can both be fetched into the architecture and be present concurrently. 2. The method of claim 1, wherein the speculative scope sets its respective mode differently from the non-speculative scope. 3. The method of claim 2, wherein in a non-speculative mode, register reads are from committed registers, register writes are to committed registers, and memory writes are written to memory. 4. The method of claim 2, wherein in a speculative mode, register writes are written to a speculative scratch shadow register, register reads are from a latest write, and memory writes to a retirement memory buffer. 5. The method of claim 2, wherein both a speculative scope and a non-speculative scope execute simultaneously. 6. The method of claim 4, wherein code in the speculative mode can be rolled back if an exception occurs. 7. A method for supporting architecture speculation in an out of order processor, the method comprising:
fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state; and enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor. 8. The method of claim 7, further comprising:
fetching the speculative scope and the non-speculative scope to enable both scopes to be present in the architecture concurrently. 9. The method of claim 7, wherein the speculative scope sets its respective mode differently from the non-speculative scope. 10. The method of claim 7, wherein in a non-speculative mode, register reads are from committed registers, register writes are to committed registers, and memory writes are written to memory. 11. The method of claim 7, wherein in a speculative mode, register writes are written to a speculative scratch shadow register, register reads are from a latest write, and memory writes to a retirement memory buffer. 12. The method of claim 7, wherein both a speculative scope and a non-speculative scope execute simultaneously. 13. The method of claim 11, wherein code in the speculative mode can be rolled back if an exception occurs. 14. A microprocessor coupled to a memory, wherein the memory has computer readable instructions which when executed by the microprocessor cause the microprocessor to implement a method for supporting architecture speculation in an out of order processor, the method comprising:
fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state; and enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor. 15. The microprocessor of claim 14, wherein the speculative scope and the non-speculative scope can both be fetched into the architecture and be present concurrently. 16. The microprocessor of claim 14, wherein the speculative scope sets its respective mode differently from the non-speculative scope. 17. The microprocessor of claim 14, wherein in a non-speculative mode, register reads are from committed registers, register writes are to committed registers, and memory writes are written to memory. 18. The microprocessor of claim 14, wherein in a speculative mode, register writes are written to a speculative scratch shadow register, register reads are from a latest write, and memory writes to a retirement memory buffer. 19. The microprocessor of claim 14, wherein both a speculative scope and a non-speculative scope execute simultaneously. 20. The microprocessor of claim 19, wherein one scope is fetched into the architecture after a current scope thereby allowing dependencies between scopes to be honored. 21. The microprocessor of claim 14, wherein code in the speculative mode can be rolled back if an exception occurs. | 2,100 |
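The two execution scopes claimed for row 5,956 can be modeled in software: in the non-speculative scope, register writes go to committed registers and stores go to memory; in the speculative scope, register writes go to a scratch shadow register file, reads see the latest write, and stores go to a retirement memory buffer that is either committed or rolled back on an exception. This is a software analogy of the claimed hardware behavior, not the mechanism itself; the class and field names are invented for the sketch.

```python
# Illustrative model of speculative vs. non-speculative execution scopes.

class Scope:
    def __init__(self, speculative, committed_regs, memory):
        self.speculative = speculative
        self.regs = committed_regs   # committed register state
        self.mem = memory            # backing memory
        self.shadow_regs = {}        # speculative scratch shadow registers
        self.retire_buf = {}         # speculative stores awaiting retirement

    def write_reg(self, name, value):
        # speculative mode: write to the shadow file, not committed state
        (self.shadow_regs if self.speculative else self.regs)[name] = value

    def read_reg(self, name):
        # speculative reads come from the latest write when one exists
        if self.speculative and name in self.shadow_regs:
            return self.shadow_regs[name]
        return self.regs[name]

    def store(self, addr, value):
        # speculative mode: buffer the store instead of writing memory
        (self.retire_buf if self.speculative else self.mem)[addr] = value

    def commit(self):
        """Retire the speculative scope into committed state."""
        self.regs.update(self.shadow_regs)
        self.mem.update(self.retire_buf)
        self.rollback()

    def rollback(self):
        """Discard all speculative side effects, e.g. after an exception."""
        self.shadow_regs.clear()
        self.retire_buf.clear()
```

Two `Scope` instances, one speculative and one not, can operate over the same committed state concurrently, mirroring the claims' two simultaneously executing scopes.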
5,957 | 5,957 | 14,747,980 | 2,135 | A processor maintains an access log indicating a stream of cache misses at a cache of the processor. In response to each of at least a subset of cache misses at the cache, the processor records a corresponding entry in the access log, indicating a physical memory address of the memory access request that resulted in the corresponding miss. In addition, the processor maintains an address translation log that indicates a mapping of physical memory addresses to virtual memory addresses. In response to an address translation (e.g., a page walk) that translates a virtual address to a physical address, the processor stores a mapping of the physical address to the corresponding virtual address at an entry of the address translation log. Software executing at the processor can use the two logs for memory management. | 1. A method comprising:
recording, at a processor, a first log indicating a set of physical memory addresses associated with a stream of cache misses at the processor; and providing the first log to software executing at the processor. 2. The method of claim 1, further comprising:
recording, at the processor, a second log indicating a mapping of the set of physical memory addresses to a corresponding set of virtual addresses; and providing the second log to the software executing at the processor. 3. The method of claim 2, wherein providing the first log and the second log comprises:
providing the first log and the second log to the software in response to a number of physical memory addresses in the first set exceeding a threshold. 4. The method of claim 2, wherein the first log comprises a plurality of entries, each entry comprising:
a first field indicating a physical address associated with a memory access request that resulted in a cache miss at the processor; and a second field indicating a type of the memory access request. 5. The method of claim 2, further comprising:
in response to an indication of a flush at a translation lookaside buffer (TLB) of the processor, recording the TLB flush at the second log. 6. The method of claim 5, further comprising:
marking an entry of the log as invalid in response to a request; and omitting the marked entry from the second log in response to recording the TLB flush. 7. The method of claim 2, further comprising:
in response to a memory access request resulting in a cache miss, omitting a physical address of the memory access request from the first log in response to determining the physical address is located in an excluded region of memory. 8. The method of claim 7, further comprising:
determining the physical address is located in the excluded region of memory based on an entry of a page table including the physical address. 9. The method of claim 2, wherein providing the first log comprises:
filtering a physical address from the first log in response to determining the physical address is located in an excluded region of memory. 10. A method, comprising:
periodically sampling, at a processor, a set of physical memory addresses associated with a stream of cache misses at the processor to generate a first log; recording, at the processor, a second log indicating a mapping of the set of physical memory addresses to a corresponding set of virtual addresses; and providing the first log and the second log to software executing at the processor. 11. The method of claim 10, wherein recording the second log comprises:
recording a mapping of a physical address to a corresponding virtual address at the second log in response to a page table walk to identify the physical address. 12. The method of claim 10, wherein providing the first log and the second log comprises:
providing the first log and the second log to the software in response to a number of physical memory addresses in the first set exceeding a threshold. 13. A processor comprising:
a processor core to execute software; a cache; and a stream recording module to record a first log indicating a set of physical memory addresses associated with a stream of cache misses at the cache and to provide the first log to the software. 14. The processor of claim 13, further comprising
an address recording module to record a second log indicating a mapping of the set of physical memory addresses to a corresponding set of virtual addresses and to provide the second log to the software. 15. The processor of claim 14, wherein the stream recording module is to provide the first log to the software in response to a number of physical memory addresses in the first set exceeding a threshold. 16. The processor of claim 14, wherein the first log comprises a plurality of entries, each entry comprising:
a first field indicating a physical address associated with a memory access request that resulted in a cache miss at the processor; and a second field indicating a type of the memory access request. 17. The processor of claim 16, further comprising:
a translation lookaside buffer (TLB); and wherein the address recording module is to record a TLB flush at the second log in response to an indication of a flush at the TLB. 18. The processor of claim 16, wherein:
the stream recording module is to omit a physical address of the memory access request from the first log in response to determining the physical address is located in an excluded region of memory. 19. The processor of claim 18, wherein:
the stream recording module is to identify that the physical address is located in the excluded region of memory based on an entry of a page table including the physical address. 20. The processor of claim 14, wherein:
in response to the software receiving the first log and the second log, the processor is to transfer a block of data associated with the set of physical memory addresses to a cache of the processor. | A processor maintains an access log indicating a stream of cache misses at a cache of the processor. In response to each of at least a subset of cache misses at the cache, the processor records a corresponding entry in the access log, indicating a physical memory address of the memory access request that resulted in the corresponding miss. In addition, the processor maintains an address translation log that indicates a mapping of physical memory addresses to virtual memory addresses. In response to an address translation (e.g., a page walk) that translates a virtual address to a physical address, the processor stores a mapping of the physical address to the corresponding virtual address at an entry of the address translation log. Software executing at the processor can use the two logs for memory management.1. A method comprising:
recording, at a processor, a first log indicating a set of physical memory addresses associated with a stream of cache misses at the processor; and providing the first log to software executing at the processor. 2. The method of claim 1, further comprising:
recording, at the processor, a second log indicating a mapping of the set of physical memory addresses to a corresponding set of virtual addresses; and providing the second log to the software executing at the processor. 3. The method of claim 2, wherein providing the first log and the second log comprises:
providing the first log and the second log to the software in response to a number of physical memory addresses in the first set exceeding a threshold. 4. The method of claim 2, wherein the first log comprises a plurality of entries, each entry comprising:
a first field indicating a physical address associated with a memory access request that resulted in a cache miss at the processor; and a second field indicating a type of the memory access request. 5. The method of claim 2, further comprising:
in response to an indication of a flush at a translation lookaside buffer (TLB) of the processor, recording the TLB flush at the second log. 6. The method of claim 5, further comprising:
marking an entry of the log as invalid in response to a request; and omitting the marked entry from the second log in response to recording the TLB flush. 7. The method of claim 2, further comprising:
in response to a memory access request resulting in a cache miss, omitting a physical address of the memory access request from the first log in response to determining the physical address is located in an excluded region of memory. 8. The method of claim 7, further comprising:
determining the physical address is located in the excluded region of memory based on an entry of a page table including the physical address. 9. The method of claim 2, wherein providing the first log comprises:
filtering a physical address from the first log in response to determining the physical address is located in an excluded region of memory. 10. A method, comprising:
periodically sampling, at a processor, a set of physical memory addresses associated with a stream of cache misses at the processor to generate a first log; recording, at the processor, a second log indicating a mapping of the set of physical memory addresses to a corresponding set of virtual addresses; and providing the first log and the second log to software executing at the processor. 11. The method of claim 10, wherein recording the second log comprises:
recording a mapping of a physical address to a corresponding virtual address at the second log in response to a page table walk to identify the physical address. 12. The method of claim 10, wherein providing the first log and the second log comprises:
providing the first log and the second log to the software in response to a number of physical memory addresses in the first set exceeding a threshold. 13. A processor comprising:
a processor core to execute software; a cache; and a stream recording module to record a first log indicating a set of physical memory addresses associated with a stream of cache misses at the cache and to provide the first log to the software. 14. The processor of claim 13, further comprising
an address recording module to record a second log indicating a mapping of the set of physical memory addresses to a corresponding set of virtual addresses and to provide the second log to the software. 15. The processor of claim 14, wherein the stream recording module is to provide the first log to the software in response to a number of physical memory addresses in the first set exceeding a threshold. 16. The processor of claim 14, wherein the first log comprises a plurality of entries, each entry comprising:
a first field indicating a physical address associated with a memory access request that resulted in a cache miss at the processor; and a second field indicating a type of the memory access request. 17. The processor of claim 16, further comprising:
a translation lookaside buffer (TLB); and wherein the address recording module is to record a TLB flush at the second log in response to an indication of a flush at the TLB. 18. The processor of claim 16, wherein:
the stream recording module is to omit a physical address of the memory access request from the first log in response to determining the physical address is located in an excluded region of memory. 19. The processor of claim 18, wherein:
the stream recording module is to identify that the physical address is located in the excluded region of memory based on an entry of a page table including the physical address. 20. The processor of claim 14, wherein:
in response to the software receiving the first log and the second log, the processor is to transfer a block of data associated with the set of physical memory addresses to a cache of the processor. | 2,100 |
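The two-log scheme in the claims above — a first log of physical addresses from cache misses, filtered against excluded memory regions (claim 7); a second log of physical-to-virtual mappings recorded on page walks (claim 11) plus TLB-flush markers (claim 5); both delivered to software once the first log passes a threshold (claim 3) — can be sketched as follows. Class and method names are assumptions for illustration, not the patented hardware.

```python
class MissLogger:
    """Toy model of the access log and address translation log."""

    def __init__(self, threshold, excluded=()):
        self.threshold = threshold
        self.excluded = excluded   # (start, end) physical address ranges
        self.access_log = []       # first log: (phys_addr, request_type)
        self.xlate_log = []        # second log: ("MAP", va, pa) / ("TLB_FLUSH",)

    def _is_excluded(self, pa):
        return any(lo <= pa < hi for lo, hi in self.excluded)

    def on_cache_miss(self, pa, req_type):
        # Claim 7: omit addresses that fall in an excluded region of memory.
        if not self._is_excluded(pa):
            self.access_log.append((pa, req_type))

    def on_page_walk(self, va, pa):
        # Claim 11: record the mapping when a page table walk resolves it.
        self.xlate_log.append(("MAP", va, pa))

    def on_tlb_flush(self):
        # Claim 5: note the flush so software can invalidate stale mappings.
        self.xlate_log.append(("TLB_FLUSH",))

    def maybe_deliver(self):
        # Claim 3: hand both logs to software once the first log exceeds
        # the threshold; otherwise keep accumulating.
        if len(self.access_log) > self.threshold:
            logs = (self.access_log[:], self.xlate_log[:])
            self.access_log.clear()
            self.xlate_log.clear()
            return logs
        return None
```

Software receiving the pair can join the miss addresses against the mapping entries to reconstruct virtual-address access patterns for memory management.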
5,958 | 5,958 | 14,159,109 | 2,171 | An aspect provides a method, including: capturing, using an image sensor of an information handling device, a user gesture input; determining, using a processor, that the user gesture input comprises an activating gesture input; capturing, using the image sensor of the information handling device, controlling gesture input of the user; detecting, within the captured controlling gesturing input, gestures provided on a surface and mimicking use of a mouse; and controlling an application running on the information handling device based on the controlling gesture input of the user. Other aspects are described and claimed. | 1. A method, comprising:
capturing, using an image sensor of an information handling device, a user gesture input; determining, using a processor, that the user gesture input comprises an activating gesture input; capturing, using the image sensor of the information handling device, controlling gesture input of the user; detecting, within the captured controlling gesturing input, gestures provided on a surface and mimicking use of a mouse; and controlling an application running on the information handling device based on the controlling gesture input of the user. 2. The method of claim 1, wherein the determining comprises determining that the activating gesture input comprises a user hand forming a specific shape. 3. The method of claim 1, wherein the determining that the activating gesture input comprises a user hand forming a specific shape comprises determining that the specific shape comprises a mouse holding shape. 4. The method of claim 1, wherein:
the detecting further comprises detecting that the controlling gesture input comprises movement of the object used to provide the activating gesture input; and said controlling an application comprises moving an on-screen cursor according to the movement of the object. 5. The method of claim 1, wherein the detecting comprises detecting that the user gesture input is performed on a substantially planar surface that is substantially perpendicular to the image sensor. 6. The method of claim 1, wherein:
the detecting further comprises detecting that the controlling gesture input comprises finger click gesturing; and said controlling an application comprises performing an action associated with a mouse button click according to the finger click gesturing. 7. The method of claim 6, wherein said detected finger click gesturing is selected from the group consisting of a single finger click gesturing and a multiple finger click gesturing. 8. The method of claim 1, wherein:
the detecting further comprises detecting that the controlling gesture input comprises finger extension gesturing; and said controlling an application comprises performing a scrolling action associated with a direction of movement according to the finger extension gesturing. 9. The method of claim 1, wherein:
the detecting further comprises detecting that the controlling gesture input comprises finger extension gesturing; and said controlling an application comprises performing one or more of a rotate and a zoom action associated with a direction of movement according to the finger extension gesturing. 10. The method of claim 1, wherein:
said detecting, within the captured controlling gesturing input, further comprises detecting content gesture input of the user; and said controlling an application comprises entering said content into an application running on the information handling device based on the content gesture input of the user. 11. An information handling device, comprising:
an image sensor that captures user gesture input; a processor operatively coupled to the image sensor; a memory device that stores instructions accessible to the processor, the instructions being executable by the processor to: capture, using the image sensor, a user gesture input; determine that the user gesture input comprises an activating gesture input; capture controlling gesture input of the user; detect, within the captured controlling gesturing input, gestures provided on a surface and mimicking use of a mouse; and control an application running on the information handling device based on the controlling gesture input of the user. 12. The information handling device of claim 11, wherein to determine comprises determining that the activating gesture input comprises a user hand forming a specific shape. 13. The information handling device of claim 11, wherein to determine that the activating gesture input comprises a user hand forming a specific shape comprises determining that the specific shape comprises a mouse holding shape. 14. The information handling device of claim 11, wherein:
to detect further comprises detecting that the controlling gesture input comprises movement of the object used to provide the activating gesture input; and to control an application comprises moving an on-screen cursor according to the movement of the object. 15. The information handling device of claim 11, wherein to detect comprises detecting that the user gesture input is performed on a substantially planar surface that is substantially perpendicular to the image sensor. 16. The information handling device of claim 11, wherein:
to detect further comprises detecting that the controlling gesture input comprises finger click gesturing; and to control an application comprises performing an action associated with a mouse button click according to the finger click gesturing. 17. The information handling device of claim 16, wherein said detected finger click gesturing is selected from the group consisting of a single finger click gesturing and a multiple finger click gesturing. 18. The information handling device of claim 11, wherein:
to detect further comprises detecting that the controlling gesture input comprises finger extension gesturing; and to control an application comprises performing a scrolling action associated with a direction of movement according to the finger extension gesturing. 19. The information handling device of claim 11, wherein:
to detect further comprises detecting that the controlling gesture input comprises finger extension gesturing; and to control an application comprises performing one or more of a rotate and a zoom action associated with a direction of movement according to the finger extension gesturing. 20. A product, comprising:
a storage device having code stored therewith, the code being executable by a processor and comprising: code that captures, using an image sensor of an information handling device, a user gesture input; code that determines, using a processor, that the user gesture input comprises an activating gesture input; code that captures, using the image sensor of the information handling device, controlling gesture input of the user; code that detects, within the captured controlling gesturing input, gestures provided on a surface and mimicking use of a mouse; and code that controls an application running on the information handling device based on the controlling gesture input of the user. | An aspect provides a method, including: capturing, using an image sensor of an information handling device, a user gesture input; determining, using a processor, that the user gesture input comprises an activating gesture input; capturing, using the image sensor of the information handling device, controlling gesture input of the user; detecting, within the captured controlling gesturing input, gestures provided on a surface and mimicking use of a mouse; and controlling an application running on the information handling device based on the controlling gesture input of the user. Other aspects are described and claimed.1. A method, comprising:
capturing, using an image sensor of an information handling device, a user gesture input; determining, using a processor, that the user gesture input comprises an activating gesture input; capturing, using the image sensor of the information handling device, controlling gesture input of the user; detecting, within the captured controlling gesturing input, gestures provided on a surface and mimicking use of a mouse; and controlling an application running on the information handling device based on the controlling gesture input of the user. 2. The method of claim 1, wherein the determining comprises determining that the activating gesture input comprises a user hand forming a specific shape. 3. The method of claim 1, wherein the determining that the activating gesture input comprises a user hand forming a specific shape comprises determining that the specific shape comprises a mouse holding shape. 4. The method of claim 1, wherein:
the detecting further comprises detecting that the controlling gesture input comprises movement of the object used to provide the activating gesture input; and said controlling an application comprises moving an on-screen cursor according to the movement of the object. 5. The method of claim 1, wherein the detecting comprises detecting that the user gesture input is performed on a substantially planar surface that is substantially perpendicular to the image sensor. 6. The method of claim 1, wherein:
the detecting further comprises detecting that the controlling gesture input comprises finger click gesturing; and said controlling an application comprises performing an action associated with a mouse button click according to the finger click gesturing. 7. The method of claim 6, wherein said detected finger click gesturing is selected from the group consisting of a single finger click gesturing and a multiple finger click gesturing. 8. The method of claim 1, wherein:
the detecting further comprises detecting that the controlling gesture input comprises finger extension gesturing; and said controlling an application comprises performing a scrolling action associated with a direction of movement according to the finger extension gesturing. 9. The method of claim 1, wherein:
the detecting further comprises detecting that the controlling gesture input comprises finger extension gesturing; and said controlling an application comprises performing one or more of a rotate and a zoom action associated with a direction of movement according to the finger extension gesturing. 10. The method of claim 1, wherein:
said detecting, within the captured controlling gesturing input, further comprises detecting content gesture input of the user; and said controlling an application comprises entering said content into an application running on the information handling device based on the content gesture input of the user. 11. An information handling device, comprising:
an image sensor that captures user gesture input; a processor operatively coupled to the image sensor; a memory device that stores instructions accessible to the processor, the instructions being executable by the processor to: capture, using the image sensor, a user gesture input; determine that the user gesture input comprises an activating gesture input; capture controlling gesture input of the user; detect, within the captured controlling gesturing input, gestures provided on a surface and mimicking use of a mouse; and control an application running on the information handling device based on the controlling gesture input of the user. 12. The information handling device of claim 11, wherein to determine comprises determining that the activating gesture input comprises a user hand forming a specific shape. 13. The information handling device of claim 11, wherein to determine that the activating gesture input comprises a user hand forming a specific shape comprises determining that the specific shape comprises a mouse holding shape. 14. The information handling device of claim 11, wherein:
to detect further comprises detecting that the controlling gesture input comprises movement of the object used to provide the activating gesture input; and to control an application comprises moving an on-screen cursor according to the movement of the object. 15. The information handling device of claim 11, wherein to detect comprises detecting that the user gesture input is performed on a substantially planar surface that is substantially perpendicular to the image sensor. 16. The information handling device of claim 11, wherein:
to detect further comprises detecting that the controlling gesture input comprises finger click gesturing; and to control an application comprises performing an action associated with a mouse button click according to the finger click gesturing. 17. The information handling device of claim 16, wherein said detected finger click gesturing is selected from the group consisting of a single finger click gesturing and a multiple finger click gesturing. 18. The information handling device of claim 11, wherein:
to detect further comprises detecting that the controlling gesture input comprises finger extension gesturing; and to control an application comprises performing a scrolling action associated with a direction of movement according to the finger extension gesturing. 19. The information handling device of claim 11, wherein:
to detect further comprises detecting that the controlling gesture input comprises finger extension gesturing; and to control an application comprises performing one or more of a rotate and a zoom action associated with a direction of movement according to the finger extension gesturing. 20. A product, comprising:
a storage device having code stored therewith, the code being executable by a processor and comprising: code that captures, using an image sensor of an information handling device, a user gesture input; code that determines, using a processor, that the user gesture input comprises an activating gesture input; code that captures, using the image sensor of the information handling device, controlling gesture input of the user; code that detects, within the captured controlling gesturing input, gestures provided on a surface and mimicking use of a mouse; and code that controls an application running on the information handling device based on the controlling gesture input of the user. | 2,100 |
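The method of claims 1 through 8 above — arm the controller on an activating gesture such as a mouse-holding hand shape (claims 2 and 3), then interpret movement as cursor motion (claim 4), finger clicks as button clicks (claims 6 and 7), and finger extensions as scrolling (claim 8) — can be sketched as a small state machine. Event names and data shapes here are illustrative assumptions.

```python
class GestureMouse:
    """Toy state machine for camera-based mouse emulation: gestures are
    ignored until the activating shape is seen, after which controlling
    gestures drive the cursor and emit click/scroll events."""

    ACTIVATING_SHAPE = "mouse_holding"  # claims 2-3: specific hand shape

    def __init__(self):
        self.active = False
        self.cursor = (0, 0)
        self.events = []

    def on_gesture(self, kind, data=None):
        if not self.active:
            # Claims 1-3: only the activating gesture is honored while idle.
            if kind == "shape" and data == self.ACTIVATING_SHAPE:
                self.active = True
            return
        if kind == "move":
            # Claim 4: move the on-screen cursor with the tracked object.
            dx, dy = data
            x, y = self.cursor
            self.cursor = (x + dx, y + dy)
        elif kind == "finger_click":
            # Claims 6-7: single- or multiple-finger click gesturing.
            self.events.append(("click", data))
        elif kind == "finger_extension":
            # Claim 8: scroll in the direction of the extension movement.
            self.events.append(("scroll", data))
```

Gating all controlling gestures behind the activating shape is what lets the camera watch continuously without turning every hand motion into input.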
5,959 | 5,959 | 14,562,941 | 2,176 | Systems and methods are provided for displaying received publishing content on a web page along with one or more user elements by which one or more users may submit sentiment input or textual input in relation to the received publishing content or a subportion of the received publishing content; receiving a user input related to the displayed publishing content displayed on the web page, the user input including an identification of a subportion of the displayed publishing content and a sentiment input or a textual input; analyzing any user input received from each of one or more of the plurality of users in relation to the subportion; computing a sentiment score based on analysis of the analyzed user inputs received from each of one or more of the plurality of users in relation to the subportion; and displaying indicia representing the sentiment score computed for the subportion. | 1. A computer-implemented method for evaluating user input relating to electronic published content, the method comprising:
receiving, over an electronic network, electronic publishing content for display online; displaying the received publishing content on a web page along with one or more user elements by which one or more users may submit sentiment input or textual input in relation to the received publishing content or a subportion of the received publishing content; receiving, from each of a plurality of users, a user input related to the displayed publishing content displayed on the web page, the user input including an identification of a subportion of the displayed publishing content and a sentiment input or a textual input; analyzing, for each subportion of the displayed publishing content, any user input received from each of one or more of the plurality of users in relation to the subportion; computing, for each subportion of the displayed publishing content, a sentiment score based on analysis of the analyzed user inputs received from each of one or more of the plurality of users in relation to the subportion; and displaying, for each subportion of the displayed publishing content, indicia representing the sentiment score computed for the subportion, along with at least one user element by which one or more further users are enabled to provide user input to further modify the computed and indicated sentiment score. 2. The computer-implemented method of claim 1, wherein the textual input is a user comment. 3. The computer-implemented method of claim 1, wherein the user input further comprises one or more tags received from the user in relation to one or more subportions of the publishing content. 4. The computer-implemented method of claim 1, wherein the user input is a sentiment input or a textual input. 5. The method of claim 1, further comprising:
dividing the received electronic publishing content into a plurality of subportions. 6. The computer-implemented method of claim 1, wherein the sentiment score is associated with at least one of the plurality of subportions. 7. The computer-implemented method of claim 4, further comprising:
changing an indicator associated with the at least one subportion based on the modified sentiment score. 8. A computer system for evaluating user input relating to electronic published content, the system comprising:
a memory device storing instructions for evaluating user input; and a processor configured to execute the instructions to perform a method of: receiving, over an electronic network, electronic publishing content for display online; displaying the received publishing content on a web page along with one or more user elements by which one or more users may submit sentiment input or textual input in relation to the received publishing content or a subportion of the received publishing content; receiving, from each of a plurality of users, a user input related to the displayed publishing content displayed on the web page, the user input including an identification of a subportion of the displayed publishing content and a sentiment input or a textual input; analyzing, for each subportion of the displayed publishing content, any user input received from each of one or more of the plurality of users in relation to the subportion; computing, for each subportion of the displayed publishing content, a sentiment score based on analysis of the analyzed user inputs received from each of one or more of the plurality of users in relation to the subportion; and displaying, for each subportion of the displayed publishing content, indicia representing the sentiment score computed for the subportion, along with at least one user element by which one or more further users are enabled to provide user input to further modify the computed and indicated sentiment score. 9. The computer system of claim 8, wherein the textual input is a user comment. 10. The computer system of claim 8, wherein the user input further comprises one or more tags received from the user in relation to one or more subportions of the publishing content. 11. The computer system of claim 8, wherein the sentiment score is assigned to one of a sentiment input and a textual input. 12. The computer system of claim 8, further comprising:
dividing the received electronic publishing content into a plurality of subportions. 13. The computer system of claim 8, wherein the sentiment score is associated with at least one of the plurality of subportions. 14. The computer system of claim 13, further comprising:
changing an indicator associated with the at least one subportion based on the modified sentiment score. 15. A non-transitory computer-readable medium storing instructions, the instructions, when executed by a computer system, cause the computer system to perform a method, the method comprising:
receiving, over an electronic network, electronic publishing content for display online; displaying the received publishing content on a web page along with one or more user elements by which one or more users may submit sentiment input or textual input in relation to the received publishing content or a subportion of the received publishing content; receiving, from each of a plurality of users, a user input related to the displayed publishing content displayed on the web page, the user input including an identification of a subportion of the displayed publishing content and a sentiment input or a textual input; analyzing, for each subportion of the displayed publishing content, any user input received from each of one or more of the plurality of users in relation to the subportion; computing, for each subportion of the displayed publishing content, a sentiment score based on analysis of the analyzed user inputs received from each of one or more of the plurality of users in relation to the subportion; and displaying, for each subportion of the displayed publishing content, indicia representing the sentiment score computed for the subportion, along with at least one user element by which one or more further users are enabled to provide user input to further modify the computed and indicated sentiment score. 16. The non-transitory computer-readable medium of claim 15, wherein the textual input is a user comment. 17. The non-transitory computer-readable medium of claim 15, wherein the user input further comprises one or more tags received from the user in relation to one or more subportions of the publishing content. 18. The non-transitory computer-readable medium of claim 15, wherein the sentiment score is assigned to one of a sentiment input and a textual input. 19. The non-transitory computer-readable medium of claim 15, further comprising:
dividing the received electronic publishing content into a plurality of subportions. 20. The non-transitory computer-readable medium of claim 15, wherein the sentiment score is associated with at least one of the plurality of subportions. | Systems and methods are provided for displaying received publishing content on a web page along with one or more user elements by which one or more users may submit sentiment input or textual input in relation to the received publishing content or a subportion of the received publishing content; receiving a user input related to the displayed publishing content displayed on the web page, the user input including an identification of a subportion of the displayed publishing content and a sentiment input or a textual input; analyzing any user input received from each of one or more of the plurality of users in relation to the subportion; computing a sentiment score based on analysis of the analyzed user inputs received from each of one or more of the plurality of users in relation to the subportion; and displaying indicia representing the sentiment score computed for the subportion.1. A computer-implemented method for evaluating user input relating to electronic published content, the method comprising:
receiving, over an electronic network, electronic publishing content for display online; displaying the received publishing content on a web page along with one or more user elements by which one or more users may submit sentiment input or textual input in relation to the received publishing content or a subportion of the received publishing content; receiving, from each of a plurality of users, a user input related to the displayed publishing content displayed on the web page, the user input including an identification of a subportion of the displayed publishing content and a sentiment input or a textual input; analyzing, for each subportion of the displayed publishing content, any user input received from each of one or more of the plurality of users in relation to the subportion; computing, for each subportion of the displayed publishing content, a sentiment score based on analysis of the analyzed user inputs received from each of one or more of the plurality of users in relation to the subportion; and displaying, for each subportion of the displayed publishing content, indicia representing the sentiment score computed for the subportion, along with at least one user element by which one or more further users are enabled to provide user input to further modify the computed and indicated sentiment score. 2. The computer-implemented method of claim 1, wherein the textual input is a user comment. 3. The computer-implemented method of claim 1, wherein the user input further comprises one or more tags received from the user in relation to one or more subportions of the publishing content. 4. The computer-implemented method of claim 1, wherein the user input is a sentiment input or a textual input. 5. The method of claim 1, further comprising:
dividing the received electronic publishing content into a plurality of subportions. 6. The computer-implemented method of claim 1, wherein the sentiment score is associated with at least one of the plurality of subportions. 7. The computer-implemented method of claim 4, further comprising:
changing an indicator associated with the at least one subportion based on the modified sentiment score. 8. A computer system for evaluating user input relating to electronic published content, the system comprising:
a memory device storing instructions for evaluating user input; and a processor configured to execute the instructions to perform a method of: receiving, over an electronic network, electronic publishing content for display online; displaying the received publishing content on a web page along with one or more user elements by which one or more users may submit sentiment input or textual input in relation to the received publishing content or a subportion of the received publishing content; receiving, from each of a plurality of users, a user input related to the displayed publishing content displayed on the web page, the user input including an identification of a subportion of the displayed publishing content and a sentiment input or a textual input; analyzing, for each subportion of the displayed publishing content, any user input received from each of one or more of the plurality of users in relation to the subportion; computing, for each subportion of the displayed publishing content, a sentiment score based on analysis of the analyzed user inputs received from each of one or more of the plurality of users in relation to the subportion; and displaying, for each subportion of the displayed publishing content, indicia representing the sentiment score computed for the subportion, along with at least one user element by which one or more further users are enabled to provide user input to further modify the computed and indicated sentiment score. 9. The computer system of claim 8, wherein the textual input is a user comment. 10. The computer system of claim 8, wherein the user input further comprises one or more tags received from the user in relation to one or more subportions of the publishing content. 11. The computer system of claim 8, wherein the sentiment score is assigned to one of a sentiment input and a textual input. 12. The computer system of claim 8, further comprising:
dividing the received electronic publishing content into a plurality of subportions. 13. The computer system of claim 8, wherein the sentiment score is associated with at least one of the plurality of subportions. 14. The computer system of claim 13, further comprising:
changing an indicator associated with the at least one subportion based on the modified sentiment score. 15. A non-transitory computer-readable medium storing instructions, the instructions, when executed by a computer system, cause the computer system to perform a method, the method comprising:
receiving, over an electronic network, electronic publishing content for display online; displaying the received publishing content on a web page along with one or more user elements by which one or more users may submit sentiment input or textual input in relation to the received publishing content or a subportion of the received publishing content; receiving, from each of a plurality of users, a user input related to the displayed publishing content displayed on the web page, the user input including an identification of a subportion of the displayed publishing content and a sentiment input or a textual input; analyzing, for each subportion of the displayed publishing content, any user input received from each of one or more of the plurality of users in relation to the subportion; computing, for each subportion of the displayed publishing content, a sentiment score based on analysis of the analyzed user inputs received from each of one or more of the plurality of users in relation to the subportion; and displaying, for each subportion of the displayed publishing content, indicia representing the sentiment score computed for the subportion, along with at least one user element by which one or more further users are enabled to provide user input to further modify the computed and indicated sentiment score. 16. The non-transitory computer-readable medium of claim 15, wherein the textual input is a user comment. 17. The non-transitory computer-readable medium of claim 15, wherein the user input further comprises one or more tags received from the user in relation to one or more subportions of the publishing content. 18. The non-transitory computer-readable medium of claim 15, wherein the sentiment score is assigned to one of a sentiment input and a textual input. 19. The non-transitory computer-readable medium of claim 15, further comprising:
dividing the received electronic publishing content into a plurality of subportions. 20. The non-transitory computer-readable medium of claim 15, wherein the sentiment score is associated with at least one of the plurality of subportions. | 2,100 |
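The scoring steps recited in claim 15 above (collecting per-subportion sentiment inputs, computing a score, and letting further users modify it) can be sketched as follows. This is a minimal illustration only: the +1/−1 sentiment encoding and the averaging formula are assumptions of this sketch, not anything the claims prescribe.

```python
# Hedged sketch of the scoring step in claim 15: collect sentiment inputs
# per subportion of the displayed content, compute a score, and let a
# further user's input modify it. The +1/-1 encoding and mean-based score
# are illustrative assumptions; the claims do not specify a formula.
from collections import defaultdict


def compute_scores(user_inputs):
    """user_inputs: iterable of (subportion_id, sentiment) pairs with
    sentiment in {+1, -1}. Returns {subportion_id: mean sentiment score}."""
    per_subportion = defaultdict(list)
    for subportion, sentiment in user_inputs:
        per_subportion[subportion].append(sentiment)
    return {sp: sum(v) / len(v) for sp, v in per_subportion.items()}


def modify_score(user_inputs, subportion, sentiment):
    """A further user's input modifies the computed and indicated score."""
    user_inputs.append((subportion, sentiment))
    return compute_scores(user_inputs)
```

Recomputing from the full input history keeps the "further modify" step of claim 15 trivially consistent with the original computation.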
5,960 | 5,960 | 14,722,327 | 2,119 | A method and apparatus for controlling the temperature in a processing chamber for semiconductor processing is disclosed herein. In one embodiment, a processing chamber for semiconductor processing is provided. The processing chamber includes a chamber body and a temperature control system. The temperature control system includes a temperature sensor configured to measure a temperature in an upper dome of the processing chamber, a blower, and a controller configured to control the temperature control system. The temperature control system is configured to carry out the method provided herein for controlling the temperature in a processing chamber. | 1. A processing chamber for semiconductor processing, the processing chamber comprising:
a chamber body comprising:
an upper dome;
a lower dome, the upper dome and the lower dome defining an interior volume of the processing chamber; and
a temperature control system comprising:
a temperature sensor to measure a temperature of the upper dome;
a blower; and
a controller in communication with the blower and the temperature sensor. 2. The processing chamber of claim 1, wherein the temperature sensor is a pyrometer. 3. The processing chamber of claim 1, wherein the temperature sensor uses light having a wavelength between 1.5 μm to 6 μm to measure the temperature of the upper dome. 4. The processing chamber of claim 1, wherein the controller is a PID controller. 5. The processing chamber of claim 4, wherein the PID controller is set to a desired temperature set point such that a film will not form on the upper dome during processing. 6. The processing chamber of claim 1, wherein the controller comprises:
an input coupled to the temperature sensor; and an output coupled to the blower. 7. The processing chamber of claim 1, wherein the blower is operable to provide a cool gas flow to the upper dome. 8. The processing chamber of claim 1, wherein the temperature sensor is operable to transmit a measured temperature of the upper dome to the controller. 9. A temperature control system for a processing chamber for semiconductor processing, the temperature control system comprising:
a temperature sensor to measure a temperature of a process-exposed component of the processing chamber; a blower to direct a cooling gas flow toward the process-exposed component; and a controller in communication with the blower and the temperature sensor. 10. The temperature control system of claim 9, wherein the temperature sensor is a pyrometer. 11. The temperature control system of claim 9, wherein the temperature sensor uses light having a wavelength between 1.5 μm to 6 μm to measure the temperature of the upper dome. 12. The temperature control system of claim 9, wherein the controller is a PID controller. 13. The temperature control system of claim 12, wherein the PID controller is set to a desired temperature set point such that a film will not form on the upper dome during processing. 14. The temperature control system of claim 9, wherein the controller comprises:
an input coupled to the temperature sensor; and an output coupled to the blower. 15. The temperature control system of claim 9, wherein the blower directs cooling gas toward an upper dome of the processing chamber. 16. The temperature control system of claim 9, wherein the temperature sensor is operable to transmit a measured temperature of the upper dome to the controller. 17. A method for controlling the temperature in a processing chamber for semiconductor processing, the method comprising:
measuring a temperature of an upper dome of the processing chamber using a temperature sensor; transmitting the measured temperature from the temperature sensor to a PID controller; calculating a controller output based on the measured temperature; operating a blower based on the controller output to control the temperature of the upper dome. 18. The method of claim 17, wherein measuring a temperature of an upper dome of the processing chamber using a temperature sensor comprises:
measuring the temperature of the upper dome of the processing chamber using light having a wavelength between 1.6 μm to 6 μm. 19. The method of claim 17, further comprising:
setting the PID controller to a desired temperature set point. 20. The method of claim 19, wherein calculating a controller output based on the measured temperature comprises:
comparing the measured temperature to the desired temperature set point. | A method and apparatus for controlling the temperature in a processing chamber for semiconductor processing is disclosed herein. In one embodiment, a processing chamber for semiconductor processing is provided. The processing chamber includes a chamber body and a temperature control system. The temperature control system includes a temperature sensor configured to measure a temperature in an upper dome of the processing chamber, a blower, and a controller configured to control the temperature control system. The temperature control system is configured to carry out the method provided herein for controlling the temperature in a processing chamber.1. A processing chamber for semiconductor processing, the processing chamber comprising:
a chamber body comprising:
an upper dome;
a lower dome, the upper dome and the lower dome defining an interior volume of the processing chamber; and
a temperature control system comprising:
a temperature sensor to measure a temperature of the upper dome;
a blower; and
a controller in communication with the blower and the temperature sensor. 2. The processing chamber of claim 1, wherein the temperature sensor is a pyrometer. 3. The processing chamber of claim 1, wherein the temperature sensor uses light having a wavelength between 1.5 μm to 6 μm to measure the temperature of the upper dome. 4. The processing chamber of claim 1, wherein the controller is a PID controller. 5. The processing chamber of claim 4, wherein the PID controller is set to a desired temperature set point such that a film will not form on the upper dome during processing. 6. The processing chamber of claim 1, wherein the controller comprises:
an input coupled to the temperature sensor; and an output coupled to the blower. 7. The processing chamber of claim 1, wherein the blower is operable to provide a cool gas flow to the upper dome. 8. The processing chamber of claim 1, wherein the temperature sensor is operable to transmit a measured temperature of the upper dome to the controller. 9. A temperature control system for a processing chamber for semiconductor processing, the temperature control system comprising:
a temperature sensor to measure a temperature of a process-exposed component of the processing chamber; a blower to direct a cooling gas flow toward the process-exposed component; and a controller in communication with the blower and the temperature sensor. 10. The temperature control system of claim 9, wherein the temperature sensor is a pyrometer. 11. The temperature control system of claim 9, wherein the temperature sensor uses light having a wavelength between 1.5 μm to 6 μm to measure the temperature of the upper dome. 12. The temperature control system of claim 9, wherein the controller is a PID controller. 13. The temperature control system of claim 12, wherein the PID controller is set to a desired temperature set point such that a film will not form on the upper dome during processing. 14. The temperature control system of claim 9, wherein the controller comprises:
an input coupled to the temperature sensor; and an output coupled to the blower. 15. The temperature control system of claim 9, wherein the blower directs cooling gas toward an upper dome of the processing chamber. 16. The temperature control system of claim 9, wherein the temperature sensor is operable to transmit a measured temperature of the upper dome to the controller. 17. A method for controlling the temperature in a processing chamber for semiconductor processing, the method comprising:
measuring a temperature of an upper dome of the processing chamber using a temperature sensor; transmitting the measured temperature from the temperature sensor to a PID controller; calculating a controller output based on the measured temperature; operating a blower based on the controller output to control the temperature of the upper dome. 18. The method of claim 17, wherein measuring a temperature of an upper dome of the processing chamber using a temperature sensor comprises:
measuring the temperature of the upper dome of the processing chamber using light having a wavelength between 1.6 μm to 6 μm. 19. The method of claim 17, further comprising:
setting the PID controller to a desired temperature set point. 20. The method of claim 19, wherein calculating a controller output based on the measured temperature comprises:
comparing the measured temperature to the desired temperature set point. | 2,100 |
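The control loop of claims 17-20 above (measure the upper-dome temperature, compare it to a set point, compute a PID output, operate a blower) can be sketched as below. All gains, the 600-degree set point, and the one-line dome thermal model are illustrative assumptions of this sketch, not values from the application.

```python
# Hedged sketch of the method of claims 17-20: a PID controller compares the
# measured upper-dome temperature to a desired set point (claim 20) and its
# output drives a blower (claim 17). Gains, set point, and the toy plant
# model are assumptions for illustration only.


class PIDController:
    def __init__(self, kp, ki, kd, set_point):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured_temp, dt=1.0):
        # Claim 20: compare the measured temperature to the set point.
        error = measured_temp - self.set_point  # positive -> dome too hot
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to a valid blower duty cycle in [0, 1].
        return max(0.0, min(1.0, output))


def run_loop(initial_temp, steps, pid):
    """Simulate the dome under blower control with a toy first-order model:
    constant lamp heating minus cooling proportional to blower output."""
    temp = initial_temp
    for _ in range(steps):
        blower = pid.update(temp)     # claim 17: operate blower from output
        temp += 2.0 - 8.0 * blower    # illustrative plant dynamics
    return temp
```

Run against the toy plant, the loop pulls an initially hot dome down toward the set point, which is the behavior claims 5 and 13 rely on to keep film from forming on the upper dome.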
5,961 | 5,961 | 16,219,575 | 2,119 | A method of controlling an additive manufacturing process in which a directed energy source is used to selectively melt material to form a workpiece, forming a melt pool in the process of melting. The method includes: using an imaging apparatus to generate an image of the melt pool comprising an array of individual image elements, the image including a measurement of at least one physical property for each of the individual image elements; from the measurements, mapping a melt pool boundary of the melt pool; applying Green's theorem to the melt pool boundary; and controlling at least one aspect of the additive manufacturing process with reference to the Green's theorem application. | 1. A method of controlling an additive manufacturing process in which a directed energy source is used to selectively melt material to form a workpiece, forming a melt pool in the process of melting, the method comprising:
using an imaging apparatus to generate an image of the melt pool comprising an array of individual image elements, the image including a measurement of at least one physical property for each of the individual image elements; from the measurements, mapping a melt pool boundary of the melt pool; applying Green's theorem to the melt pool boundary and the area enclosed thereby and evaluating Green's theorem for indications of a process fault by generating an error factor; and controlling at least one aspect of the additive manufacturing process with reference to the Green's theorem application. 2. The method of claim 1, wherein the measurement for each of the image elements includes at least one scalar value. 3. The method of claim 1, wherein the step of applying Green's theorem to the melt pool boundary includes:
representing the melt pool boundary as a closed curve; evaluating Green's theorem for the closed curve. 4. The method of claim 1 wherein the step of mapping the boundary of the melt pool includes:
establishing a threshold value;
comparing the measurement for each of the image elements to the threshold value; and
defining each of the image elements which matches the threshold value to constitute a portion of the melt pool boundary. 5. The method of claim 4, wherein the threshold value is a range having predetermined upper and lower boundaries. 6. (canceled) 7. The method of claim 1, wherein the error factor of the Green's theorem evaluation exceeding a predetermined limit value indicates a process fault. 8. The method of claim 1, wherein the Green's theorem evaluation is used as an input into a statistical process control method for the additive manufacturing process. 9. The method of claim 1, wherein the Green's theorem evaluation is used to create populations of unfaulted and faulted process states. 10. The method of claim 9, wherein the Green's theorem evaluation of the current process is assigned to the populations of unfaulted and faulted process through a multiple model hypothesis test framework. 11. The method of claim 6 wherein the step of controlling includes taking a discrete action in response to the Green's theorem evaluation indicating a process fault. 12. The method of claim 11 wherein the discrete action is stopping the additive manufacturing process. 13. The method of claim 11 wherein the discrete action is providing a visual or audible alarm to a local or remote operator. 14. The method of claim 1 wherein the step of controlling includes changing at least one process parameter of the additive manufacturing process. 15. The method of claim 14 wherein the controlled process parameter includes at least one of: directed energy source power level and beam scan velocity. 16. A method of making a workpiece, comprising:
depositing a material in a build chamber; directing a build beam from a directed energy source to selectively fuse the material in a pattern corresponding to a cross-sectional layer of the workpiece, wherein a melt pool is formed by the directed energy source; using an imaging apparatus to generate an image of the melt pool comprising an array of individual image elements, the image including a measurement of at least one physical property for each of the individual image elements; from the measurements, mapping a melt pool boundary of the melt pool; applying Green's theorem to the melt pool boundary and the area enclosed thereby through the following steps:
determining an error factor;
comparing the error factor to a predetermined limit value; and
controlling at least one aspect of making the workpiece with reference to the Green's theorem application. 17. The method of claim 16, wherein the step of applying Green's theorem to the melt pool boundary generates indications of a process fault. 18. The method of claim 16, wherein the step of comparing the error factor to a predetermined limit value generates an indication that is used as an input into a statistical process control method for the additive manufacturing process. 19. The method of claim 16, wherein the step of comparing the error factor to a predetermined limit value is used to create populations of unfaulted and faulted process states.
using an imaging apparatus to generate an image of the melt pool comprising an array of individual image elements, the image including a measurement of at least one physical property for each of the individual image elements; from the measurements, mapping a melt pool boundary of the melt pool; applying Green's theorem to the melt pool boundary and the area enclosed thereby and evaluating Green's theorem for indications of a process fault by generating an error factor; and controlling at least one aspect of the additive manufacturing process with reference to the Green's theorem application. 2. The method of claim 1, wherein the measurement for each of the image elements includes at least one scalar value. 3. The method of claim 1, wherein the step of applying Green's theorem to the melt pool boundary includes:
representing the melt pool boundary as a closed curve; evaluating Green's theorem for the closed curve. 4. The method of claim 1 wherein the step of mapping the boundary of the melt pool includes:
establishing a threshold value;
comparing the measurement for each of the image elements to the threshold value; and
defining each of the image elements which matches the threshold value to constitute a portion of the melt pool boundary. 5. The method of claim 4, wherein the threshold value is a range having predetermined upper and lower boundaries. 6. (canceled) 7. The method of claim 1, wherein the error factor of the Green's theorem evaluation exceeding a predetermined limit value indicates a process fault. 8. The method of claim 1, wherein the Green's theorem evaluation is used as an input into a statistical process control method for the additive manufacturing process. 9. The method of claim 1, wherein the Green's theorem evaluation is used to create populations of unfaulted and faulted process states. 10. The method of claim 9, wherein the Green's theorem evaluation of the current process is assigned to the populations of unfaulted and faulted process through a multiple model hypothesis test framework. 11. The method of claim 6 wherein the step of controlling includes taking a discrete action in response to the Green's theorem evaluation indicating a process fault. 12. The method of claim 11 wherein the discrete action is stopping the additive manufacturing process. 13. The method of claim 11 wherein the discrete action is providing a visual or audible alarm to a local or remote operator. 14. The method of claim 1 wherein the step of controlling includes changing at least one process parameter of the additive manufacturing process. 15. The method of claim 14 wherein the controlled process parameter includes at least one of: directed energy source power level and beam scan velocity. 16. A method of making a workpiece, comprising:
depositing a material in a build chamber; directing a build beam from a directed energy source to selectively fuse the material in a pattern corresponding to a cross-sectional layer of the workpiece, wherein a melt pool is formed by the directed energy source; using an imaging apparatus to generate an image of the melt pool comprising an array of individual image elements, the image including a measurement of at least one physical property for each of the individual image elements; from the measurements, mapping a melt pool boundary of the melt pool; applying Green's theorem to the melt pool boundary and the area enclosed thereby through the following steps:
determining an error factor;
comparing the error factor to a predetermined limit value; and
controlling at least one aspect of making the workpiece with reference to the Green's theorem application. 17. The method of claim 16, wherein the step of applying Green's theorem to the melt pool boundary generates indications of a process fault. 18. The method of claim 16, wherein the step of comparing the error factor to a predetermined limit value generates an indication that is used as an input into a statistical process control method for the additive manufacturing process. 19. The method of claim 16, wherein the step of comparing the error factor to a predetermined limit value is used to create populations of unfaulted and faulted process states.
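The Green's theorem steps in claims 1, 3, 7, and 16 above (represent the melt pool boundary as a closed curve, evaluate Green's theorem over the enclosed area, compare an error factor to a predetermined limit) can be sketched as below. For a polygonal boundary, Green's theorem reduces to the shoelace form A = ½|Σ(xᵢyᵢ₊₁ − xᵢ₊₁yᵢ)|. The pixel coordinates, nominal area, and 0.2 limit are illustrative assumptions, not values from the application.

```python
# Hedged sketch of claims 1/3/7/16: evaluate Green's theorem for the melt
# pool boundary (a closed curve of image-element coordinates), form an
# error factor, and flag a fault when it exceeds a predetermined limit.
# The nominal area and limit value are assumptions for illustration.


def greens_theorem_area(boundary):
    """Area enclosed by a closed polygonal curve via Green's theorem
    (shoelace form): A = (1/2) * |sum(x_i*y_{i+1} - x_{i+1}*y_i)|."""
    n = len(boundary)
    acc = 0.0
    for i in range(n):
        x0, y0 = boundary[i]
        x1, y1 = boundary[(i + 1) % n]  # wrap around to close the curve
        acc += x0 * y1 - x1 * y0
    return abs(acc) / 2.0


def error_factor(boundary, nominal_area):
    """Relative deviation of the measured melt pool area from nominal."""
    return abs(greens_theorem_area(boundary) - nominal_area) / nominal_area


def is_fault(boundary, nominal_area, limit=0.2):
    # Claim 7: error factor exceeding a predetermined limit indicates a fault.
    return error_factor(boundary, nominal_area) > limit
```

The fault flag from `is_fault` is the kind of discrete indication that claims 11-13 act on (stopping the process or raising an alarm).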
5,962 | 5,962 | 12,770,992 | 2,177 | Selected data is temporarily formatted and charted to assist a user in visualizing the selected data without the user having to manually create the display of the charted data. Once the temporary formatting and charting is automatically applied to the selected data, a user may interact with the visual formatting and charts to gain additional information. For example, the chart may be formatted differently, reference lines may be added, the chart may be sorted, the type of chart(s) displayed may be changed, the user may graphically navigate over the chart to obtain more detailed information, and the like. Once the user has completed interacting with the temporarily formatted and created chart(s) the visualizations are removed from the display. | 1. A method for temporarily formatting and charting data, comprising:
receiving a selection of data; determining values within the selected data; temporarily displaying a chart representing the values from the selected data; wherein the chart is automatically created and displayed upon receiving the selection of data. 2. The method of claim 1, wherein temporarily displaying the chart comprises displaying a separate chart for each row or each column within the selected data. 3. The method of claim 2, further comprising displaying each separate chart using a common axis. 4. The method of claim 2, further comprising displaying the value of a point in the chart when the point in the chart is navigated. 5. The method of claim 4, further comprising comparing the value of the point that is navigated to each of the other displayed charts and providing an indication of a difference of the values between the charts. 6. The method of claim 2, further comprising determining labels for the charts by traversing the data until the labels are reached within the data from which the data is selected. 7. The method of claim 2, further comprising displaying a reference line across the charts and formatting values in relation to a value of the reference line. 8. The method of claim 7, wherein selecting a point on one of the displayed charts displays information relating to a comparison of the reference line to a value of the selected point. 9. A computer-readable storage medium having computer-executable instructions for temporarily formatting and charting data, the instructions executing on a processor of a computer, comprising:
receiving a selection of data that is selected from a table of data; determining all of the values within the selected data; automatically determining a chart type to display the selection of data; temporarily displaying a chart representing the values from the selected data; wherein the chart is automatically created and displayed upon receiving the selection of data. 10. The computer-readable storage medium of claim 9, wherein temporarily displaying the chart comprises displaying a separate chart for each row or column within the selected data; wherein each of the separate charts includes a common axis. 11. The computer-readable storage medium of claim 10, further comprising removing a display of unselected data upon the temporary display of the chart. 12. The computer-readable storage medium of claim 10, further comprising sorting the displayed charts and updating the display in response to the sorting. 13. The computer-readable storage medium of claim 10, further comprising comparing the value of the point that is navigated to each of the other displayed charts and providing an indication of a difference of the values between the charts. 14. The computer-readable storage medium of claim 10, further comprising determining labels for the charts by traversing the table of data until the labels are reached within the data from which the data is selected. 15. The computer-readable storage medium of claim 10, further comprising displaying a reference line across the charts and formatting values in relation to a value of the reference line. 16. The computer-readable storage medium of claim 15, wherein selecting a point on one of the displayed charts displays information relating to a comparison of the reference line to a value of the selected point. 17. A system for temporarily formatting and charting data, comprising:
a processor and a computer-readable medium; an operating environment stored on the computer-readable medium and executing on the processor; a productivity application and a visual manager operating on the processor; and configured to perform tasks, comprising:
receive a selection of data that is selected from data presented in rows and columns within the productivity application;
determine values of the selected data;
determine a chart type to display the selection of data; and
temporarily display a chart representing the values from the selected data in response to the selection of data. 18. The system of claim 17, wherein temporarily displaying the chart comprises displaying a separate chart for each row within the selected data; wherein each of the separate charts includes a common axis. 19. The system of claim 18, further comprising displaying a value of a point in the chart in response to a user navigating a pointing device over the value. 20. The system of claim 18, further comprising determining a label of the chart from the table of data by accessing a label row of the table of data. | Selected data is temporarily formatted and charted to assist a user in visualizing the selected data without the user having to manually create the display of the charted data. Once the temporary formatting and charting is automatically applied to the selected data, a user may interact with the visual formatting and charts to gain additional information. For example, the chart may be formatted differently, reference lines may be added, the chart may be sorted, the type of chart(s) displayed may be changed, the user may graphically navigate over the chart to obtain more detailed information, and the like. Once the user has completed interacting with the temporarily formatted and created chart(s) the visualizations are removed from the display.1. A method for temporarily formatting and charting data, comprising:
receiving a selection of data; determining values within the selected data; temporarily displaying a chart representing the values from the selected data; wherein the chart is automatically created and displayed upon receiving the selection of data. 2. The method of claim 1, wherein temporarily displaying the chart comprises displaying a separate chart for each row or each column within the selected data. 3. The method of claim 2, further comprising displaying each separate chart using a common axis. 4. The method of claim 2, further comprising displaying the value of a point in the chart when the point in the chart is navigated. 5. The method of claim 4, further comprising comparing the value of the point that is navigated to each of the other displayed charts and providing an indication of a difference of the values between the charts. 6. The method of claim 2, further comprising determining labels for the charts by traversing the data until the labels are reached within the data from which the data is selected. 7. The method of claim 2, further comprising displaying a reference line across the charts and formatting values in relation to a value of the reference line. 8. The method of claim 7, wherein selecting a point on one of the displayed charts displays information relating to a comparison of the reference line to a value of the selected point. 9. A computer-readable storage medium having computer-executable instructions for temporarily formatting and charting data, the instructions executing on a processor of a computer, comprising:
receiving a selection of data that is selected from a table of data; determining all of the values within the selected data; automatically determining a chart type to display the selection of data; temporarily displaying a chart representing the values from the selected data; wherein the chart is automatically created and displayed upon receiving the selection of data. 10. The computer-readable storage medium of claim 9, wherein temporarily displaying the chart comprises displaying a separate chart for each row or column within the selected data; wherein each of the separate charts includes a common axis. 11. The computer-readable storage medium of claim 10, further comprising removing a display of unselected data upon the temporary display of the chart. 12. The computer-readable storage medium of claim 10, further comprising sorting the displayed charts and updating the display in response to the sorting. 13. The computer-readable storage medium of claim 10, further comprising comparing the value of the point that is navigated to each of the other displayed charts and providing an indication of a difference of the values between the charts. 14. The computer-readable storage medium of claim 10, further comprising determining labels for the charts by traversing the table of data until the labels are reached within the data from which the data is selected. 15. The computer-readable storage medium of claim 10, further comprising displaying a reference line across the charts and formatting values in relation to a value of the reference line. 16. The computer-readable storage medium of claim 15, wherein selecting a point on one of the displayed charts displays information relating to a comparison of the reference line to a value of the selected point. 17. A system for temporarily formatting and charting data, comprising:
a processor and a computer-readable medium; an operating environment stored on the computer-readable medium and executing on the processor; a productivity application and a visual manager operating on the processor; and configured to perform tasks, comprising:
receive a selection of data that is selected from data presented in rows and columns within the productivity application;
determine values of the selected data;
determine a chart type to display the selection of data; and
temporarily display a chart representing the values from the selected data in response to the selection of data. 18. The system of claim 17, wherein temporarily displaying the chart comprises displaying a separate chart for each row within the selected data; wherein each of the separate charts includes a common axis. 19. The system of claim 18, further comprising displaying a value of a point in the chart in response to a user navigating a pointing device over the value. 20. The system of claim 18, further comprising determining a label of the chart from the table of data by accessing a label row of the table of data. | 2,100 |
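The charting claims above (automatically determining a chart type from the selection, then drawing a separate chart per row against a common axis) can be illustrated with a small sketch. This is not the patent's implementation; the function names, the shape-based heuristic for choosing a chart type, and the min/max axis rule are all assumptions for illustration only.

```python
# Hypothetical sketch of the claimed behavior: pick a chart type from the
# shape of the selected data, and compute one shared axis range so that a
# separate chart per row can be drawn against a common scale.

def choose_chart_type(rows):
    """Pick a simple chart type from the shape of the selection."""
    if len(rows) == 1 and len(rows[0]) == 1:
        return "single-value"
    if len(rows) == 1 or len(rows[0]) == 1:
        return "bar"   # a single series -> bar chart
    return "line"      # several series -> one line chart per row

def common_axis(rows):
    """Return (min, max) over every selected value, so each per-row
    chart shares the same axis, as in claims 10 and 18."""
    values = [v for row in rows for v in row]
    return min(values), max(values)

selection = [[3, 7, 5], [2, 9, 4]]   # two selected rows of a table
print(choose_chart_type(selection))  # line
print(common_axis(selection))        # (2, 9)
```

A real implementation would also handle the temporary display and removal of the charts; the sketch covers only the type selection and common-axis steps.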
5,963 | 5,963 | 15,879,818 | 2,191 | Reducing classloading of hierarchically configured applications via provisioning is disclosed. In one example, a hierarchically configured application is launched within a first container of a container application platform according to a set of resource descriptions that define a structure of the hierarchically configured application, valid operations that may be performed by elements of the hierarchically configured application, and handlers for each operation. After the process of loading the classes representing the resource descriptions and operation handlers is performed, services to be used by the hierarchically configured application are installed. The state of each service is then determined, and one or more serialized data structures representing the state of the services is generated. Subsequently, the hierarchically configured application is launched within a second container, with the serialized data structures being used to install the services within the second container while incurring lower classloading overhead. | 1. A computing system, comprising:
a first computing device comprising a first memory and a first processor device communicatively coupled to the first memory; and a second computing device comprising a second memory and a second processor device communicatively coupled to the second memory; the first computing device to:
launch a hierarchically configured application within a first container of a container application platform according to one or more resource descriptions for the hierarchically configured application;
determine a state of each service of one or more services initiated by the hierarchically configured application; and
generate one or more serialized data structures representing the state of each service of the one or more services; and
the second computing device to subsequently launch the hierarchically configured application within a second container of the container application platform based on the one or more serialized data structures. 2. The computing system of claim 1, wherein:
the first container comprises a provisioning container of the container application platform; and the second container comprises an execution container of the container application platform. 3. The computing system of claim 1, wherein to determine the state of each service of the one or more services launched by the hierarchically configured application is to:
identify an injection point for each service of the one or more services; identify one or more dependencies for each service of the one or more services; and obtain a reference to each instance of each service of the one or more services. 4. The computing system of claim 3, wherein to generate the one or more serialized data structures representing the state of each service of the one or more services is to:
serialize each injection point for each service of the one or more services into the one or more serialized data structures; and serialize the one or more dependencies for each service of the one or more services into the one or more serialized data structures. 5. The computing system of claim 4, wherein to serialize each injection point and serialize the one or more dependencies is to serialize each injection point and serialize the one or more dependencies using a domain-specific language. 6. The computing system of claim 3, wherein to generate the one or more serialized data structures representing the state of each service of the one or more services is to serialize each reference to each instance of each service of the one or more services into the one or more serialized data structures. 7. The computing system of claim 6, wherein to serialize each reference to each instance of each service is to serialize each reference to each instance of each service using a service externalizer. 8. The computing system of claim 1, wherein to subsequently launch the hierarchically configured application within the second container of the container application platform based on the one or more serialized data structures is to:
deserialize each instance of each service of the one or more services from the one or more serialized data structures; determine, based on the one or more serialized data structures, whether one or more dependencies exist for each instance of each service of the one or more services; responsive to determining that one or more dependencies exist for each instance of each service of the one or more services, set the one or more dependencies; install the hierarchically configured application into the second container of the container application platform; and execute the hierarchically configured application within the second container. 9. The computing system of claim 1, wherein the hierarchically configured application comprises a WildFly hierarchically configured application. 10. The computing system of claim 1, wherein the container application platform comprises an OpenShift container application platform. 11. A method comprising:
launching a hierarchically configured application within a first container of a container application platform according to one or more resource descriptions for the hierarchically configured application; determining a state of each service of one or more services initiated by the hierarchically configured application; generating one or more serialized data structures representing the state of each service of the one or more services; and subsequently launching the hierarchically configured application within a second container of the container application platform based on the one or more serialized data structures. 12. The method of claim 11, wherein:
the first container comprises a provisioning container of the container application platform; and the second container comprises an execution container of the container application platform. 13. The method of claim 11, wherein determining the state of each service of the one or more services launched by the hierarchically configured application comprises:
identifying an injection point for each service of the one or more services; identifying one or more dependencies for each service of the one or more services; and obtaining a reference to each instance of each service of the one or more services. 14. The method of claim 13, wherein generating the one or more serialized data structures representing the state of each service of the one or more services comprises:
serializing each injection point for each service of the one or more services into the one or more serialized data structures; and serializing the one or more dependencies for each service of the one or more services into the one or more serialized data structures. 15. The method of claim 14, wherein serializing each injection point and serializing the one or more dependencies comprises serializing each injection point and serializing the one or more dependencies using a domain-specific language. 16. The method of claim 13, wherein generating the one or more serialized data structures representing the state of each service of the one or more services comprises serializing each reference to each instance of each service of the one or more services into the one or more serialized data structures. 17. The method of claim 16, wherein serializing each reference to each instance of each service comprises serializing each reference to each instance of each service using a service externalizer. 18. The method of claim 11, wherein subsequently launching the hierarchically configured application within the second container of the container application platform based on the one or more serialized data structures comprises:
deserializing each instance of each service of the one or more services from the one or more serialized data structures; determining, based on the one or more serialized data structures, whether one or more dependencies exist for each instance of each service of the one or more services; responsive to determining that one or more dependencies exist for each instance of each service of the one or more services, setting the one or more dependencies; installing the hierarchically configured application into the second container of the container application platform; and executing the hierarchically configured application within the second container. 19. The method of claim 11, wherein:
the hierarchically configured application comprises a WildFly hierarchically configured application; and the container application platform comprises an OpenShift container application platform. 20. A computer program product stored on a non-transitory computer-readable storage medium and including instructions to cause a processor device to:
launch a hierarchically configured application within a first container of a container application platform according to one or more resource descriptions for the hierarchically configured application; determine a state of each service of one or more services initiated by the hierarchically configured application; generate one or more serialized data structures representing the state of each service of the one or more services; and subsequently launch the hierarchically configured application within a second container of the container application platform based on the one or more serialized data structures. | Reducing classloading of hierarchically configured applications via provisioning is disclosed. In one example, a hierarchically configured application is launched within a first container of a container application platform according to a set of resource descriptions that define a structure of the hierarchically configured application, valid operations may be performed by elements of the hierarchically configured application, and handlers for each operation. After the process of loading the classes representing the resource descriptions and operation handlers is performed, services to be used by the hierarchically configured application are installed. The state of each service is then determined, and one or more serialized data structures representing the state of the services is generated. Subsequently, the hierarchically configured application is launched within a second container, with the serialized data structures being used to install the services within the second container while incurring lower classloading overhead.1. A computing system, comprising:
a first computing device comprising a first memory and a first processor device communicatively coupled to the first memory; and a second computing device comprising a second memory and a second processor device communicatively coupled to the second memory; the first computing device to:
launch a hierarchically configured application within a first container of a container application platform according to one or more resource descriptions for the hierarchically configured application;
determine a state of each service of one or more services initiated by the hierarchically configured application; and
generate one or more serialized data structures representing the state of each service of the one or more services; and
the second computing device to subsequently launch the hierarchically configured application within a second container of the container application platform based on the one or more serialized data structures. 2. The computing system of claim 1, wherein:
the first container comprises a provisioning container of the container application platform; and the second container comprises an execution container of the container application platform. 3. The computing system of claim 1, wherein to determine the state of each service of the one or more services launched by the hierarchically configured application is to:
identify an injection point for each service of the one or more services; identify one or more dependencies for each service of the one or more services; and obtain a reference to each instance of each service of the one or more services. 4. The computing system of claim 3, wherein to generate the one or more serialized data structures representing the state of each service of the one or more services is to:
serialize each injection point for each service of the one or more services into the one or more serialized data structures; and serialize the one or more dependencies for each service of the one or more services into the one or more serialized data structures. 5. The computing system of claim 4, wherein to serialize each injection point and serialize the one or more dependencies is to serialize each injection point and serialize the one or more dependencies using a domain-specific language. 6. The computing system of claim 3, wherein to generate the one or more serialized data structures representing the state of each service of the one or more services is to serialize each reference to each instance of each service of the one or more services into the one or more serialized data structures. 7. The computing system of claim 6, wherein to serialize each reference to each instance of each service is to serialize each reference to each instance of each service using a service externalizer. 8. The computing system of claim 1, wherein to subsequently launch the hierarchically configured application within the second container of the container application platform based on the one or more serialized data structures is to:
deserialize each instance of each service of the one or more services from the one or more serialized data structures; determine, based on the one or more serialized data structures, whether one or more dependencies exist for each instance of each service of the one or more services; responsive to determining that one or more dependencies exist for each instance of each service of the one or more services, set the one or more dependencies; install the hierarchically configured application into the second container of the container application platform; and execute the hierarchically configured application within the second container. 9. The computing system of claim 1, wherein the hierarchically configured application comprises a WildFly hierarchically configured application. 10. The computing system of claim 1, wherein the container application platform comprises an OpenShift container application platform. 11. A method comprising:
launching a hierarchically configured application within a first container of a container application platform according to one or more resource descriptions for the hierarchically configured application; determining a state of each service of one or more services initiated by the hierarchically configured application; generating one or more serialized data structures representing the state of each service of the one or more services; and subsequently launching the hierarchically configured application within a second container of the container application platform based on the one or more serialized data structures. 12. The method of claim 11, wherein:
the first container comprises a provisioning container of the container application platform; and the second container comprises an execution container of the container application platform. 13. The method of claim 11, wherein determining the state of each service of the one or more services launched by the hierarchically configured application comprises:
identifying an injection point for each service of the one or more services; identifying one or more dependencies for each service of the one or more services; and obtaining a reference to each instance of each service of the one or more services. 14. The method of claim 13, wherein generating the one or more serialized data structures representing the state of each service of the one or more services comprises:
serializing each injection point for each service of the one or more services into the one or more serialized data structures; and serializing the one or more dependencies for each service of the one or more services into the one or more serialized data structures. 15. The method of claim 14, wherein serializing each injection point and serializing the one or more dependencies comprises serializing each injection point and serializing the one or more dependencies using a domain-specific language. 16. The method of claim 13, wherein generating the one or more serialized data structures representing the state of each service of the one or more services comprises serializing each reference to each instance of each service of the one or more services into the one or more serialized data structures. 17. The method of claim 16, wherein serializing each reference to each instance of each service comprises serializing each reference to each instance of each service using a service externalizer. 18. The method of claim 11, wherein subsequently launching the hierarchically configured application within the second container of the container application platform based on the one or more serialized data structures comprises:
deserializing each instance of each service of the one or more services from the one or more serialized data structures; determining, based on the one or more serialized data structures, whether one or more dependencies exist for each instance of each service of the one or more services; responsive to determining that one or more dependencies exist for each instance of each service of the one or more services, setting the one or more dependencies; installing the hierarchically configured application into the second container of the container application platform; and executing the hierarchically configured application within the second container. 19. The method of claim 11, wherein:
the hierarchically configured application comprises a WildFly hierarchically configured application; and the container application platform comprises an OpenShift container application platform. 20. A computer program product stored on a non-transitory computer-readable storage medium and including instructions to cause a processor device to:
launch a hierarchically configured application within a first container of a container application platform according to one or more resource descriptions for the hierarchically configured application; determine a state of each service of one or more services initiated by the hierarchically configured application; generate one or more serialized data structures representing the state of each service of the one or more services; and subsequently launch the hierarchically configured application within a second container of the container application platform based on the one or more serialized data structures. | 2,100 |
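The provisioning flow claimed above — capture each service's injection point and dependencies in a first (provisioning) container as a serialized structure, then restore the services in a second (execution) container and set any dependencies that exist — can be sketched as follows. The JSON encoding and all names here are illustrative assumptions, not the patent's actual serialization format or a WildFly/OpenShift API.

```python
# Illustrative sketch only: a toy stand-in for the claimed capture/restore
# of service state between two containers. JSON is an assumed encoding.
import json

def capture_state(services):
    """Serialize injection points and dependencies for each service."""
    return json.dumps([
        {"name": s["name"],
         "injection_point": s["injection_point"],
         "dependencies": s.get("dependencies", [])}
        for s in services
    ])

def restore_state(blob):
    """Deserialize each service; if dependencies exist, set them."""
    restored = []
    for record in json.loads(blob):
        svc = {"name": record["name"],
               "injection_point": record["injection_point"]}
        if record["dependencies"]:  # dependencies exist -> set them
            svc["dependencies"] = record["dependencies"]
        restored.append(svc)
    return restored

# "Provisioning container" side: capture one service's state.
state = capture_state([{"name": "datasource",
                        "injection_point": "env.DB_URL",
                        "dependencies": ["tx-manager"]}])
# "Execution container" side: rebuild the service without reloading classes.
print(restore_state(state)[0]["dependencies"])  # ['tx-manager']
```

The point of the pattern is that the second container consumes the serialized structure instead of repeating the classloading of resource descriptions and operation handlers.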
5,964 | 5,964 | 13,993,419 | 2,167 | A non-transitory computer-readable storage medium storing a set of instructions executable by a processor. The set of instructions is operable to receive a current patient set of data relating to a current patient; compare the current patient set of data to a plurality of previous patient sets of data, each of the previous patient sets of data corresponding to a previous patient; select one of the previous patient sets of data based on a level of similarity between the selected previous patient set of data and the current patient set of data; and provide the selected previous patient set of data to a user. | 1. A non-transitory computer-readable storage medium storing a set of instructions executable by a processor, the set of instructions being operable to:
receive a current patient set of data relating to a current patient; compare the current patient set of data to a plurality of previous patient sets of data, each of the previous patient sets of data corresponding to a previous patient; select a plurality of the previous patient sets of data based on a level of similarity between the selected plurality of previous patient sets of data and the current patient set of data; provide the plurality of selected previous patient sets of data to a user; generate a treatment plan based on corresponding treatment plans of the plurality of selected previous patient data sets; and weight each of the corresponding treatment plans based on a similarity of each of the plurality of selected previous patients to the current patient. 2. The non-transitory computer-readable storage medium of claim 1, wherein the current patient data set comprises one of a set of clinical information about the current patient, a set of calculated information about the patient, a set of quality of life preferences of the patient, and an initial treatment plan for the current patient. 3. The non-transitory computer-readable storage medium of claim 1, wherein the previous patient sets of data comprise one of sets of clinical information about the previous patients, sets of calculated information about the previous patients, treatment plans of the previous patients, and outcome information of the previous patients. 4. The non-transitory computer-readable storage medium of claim 1, wherein a plurality of previous patient sets of data are selected, and wherein the plurality of selected previous patient sets of data are ranked by a level of similarity. 5. (canceled) 6. (canceled) 7. (canceled) 8. (canceled) 9.
The non-transitory computer-readable storage medium of claim 1, wherein a first element of the treatment plan is copied from a first treatment plan of one of the plurality of selected previous patients, and wherein a second element of the treatment plan is copied from a second treatment plan of a further one of the plurality of selected previous patients, the second element being an element relating to an attribute of the current patient that differs from a corresponding attribute of the selected one of the previous patients, the second element further being an element relating to an attribute of the current patient that is similar to a corresponding attribute of the further one of the previous patients. 10. The non-transitory computer-readable storage medium of claim 1, wherein the level of similarity is based on a distance metric between the current patient and the selected one of the previous patients. 11. The non-transitory computer-readable storage medium of claim 10, wherein the distance metric is one of a Euclidean distance, a city block distance, and a Mahalanobis distance. 12. A system, comprising:
a user interface receiving a current patient set of data relating to a current patient; a database storing a plurality of previous patient sets of data, each of the previous patient sets of data corresponding to a previous patient; a similarity search mechanism searching the plurality of previous patient sets of data and selecting a plurality of the previous patient sets of data having a high degree of similarity to the current patient set of data, wherein the plurality of selected previous patient sets of data is provided to the user by the user interface; and a plan generation system generating a treatment plan for the current patient based on the plurality of selected previous patient data sets, wherein the treatment plans of each of the selected plurality of patients are weighted based on a similarity of each of the selected plurality of the previous patients to the current patient. 13. The system of claim 12, wherein the current patient data set is one of a set of clinical information about the current patient, a set of calculated information about the patient, a set of quality of life preferences of the patient, and an initial treatment plan for the current patient. 14. The system of claim 12, wherein the previous patient sets of data comprise one of sets of clinical information about the previous patients, sets of calculated information about the previous patients, treatment plans of the previous patients, and outcome information of the previous patients. 15. The system of claim 12, wherein a plurality of previous patient sets of data are selected, and wherein the plurality of selected previous patient sets of data are ranked by a level of similarity to the current patient set of data. 16. (canceled) 17. (canceled) 18. (canceled) 19. (canceled) 20.
The system of claim 16, wherein a first element of the treatment plan is copied from a first treatment plan of the plurality of selected previous patients, and wherein a second element of the treatment plan is copied from a second treatment plan of a further one of the plurality of previous patients, the second element being an element relating to an attribute of the current patient that differs from a corresponding attribute of the selected one of the previous patients, the second element further being an element relating to an attribute of the current patient that is similar to a corresponding attribute of the further one of the previous patients. 21. The system of claim 12, wherein the degree of similarity is based on a distance metric between the current patient and the selected one of the previous patients, and wherein the distance metric is one of a Euclidean distance, a city block distance, and a Mahalanobis distance. 22. (canceled) 23. The system of claim 12, wherein the user interface is a graphical user interface. 24. The system of claim 23, wherein the graphical user interface comprises a retrieval criteria selection element indicating a weighting of a plurality of retrieval criteria. | A non-transitory computer-readable storage medium storing a set of instructions executable by a processor. The set of instructions is operable to receive a current patient set of data relating to a current patient; compare the current patient set of data to a plurality of previous patient sets of data, each of the previous patient sets of data corresponding to a previous patient; select one of the previous patient sets of data based on a level of similarity between the selected previous patient set of data and the current patient set of data; and provide the selected previous patient set of data to a user.1. A non-transitory computer-readable storage medium storing a set of instructions executable by a processor, the set of instructions being operable to:
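Claims 10–11 and 21 above name three candidate distance metrics for scoring similarity between a current patient and previous patients: Euclidean, city block, and Mahalanobis. A minimal sketch of each is given below, assuming numeric feature vectors; the identity inverse-covariance matrix is only a placeholder (with it, Mahalanobis distance reduces to Euclidean distance).

```python
# Sketch of the three distance metrics named in the claims. Feature
# vectors are assumed numeric; in practice the inverse covariance for
# Mahalanobis would be estimated from the previous-patient population.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def city_block(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def mahalanobis(a, b, inv_cov):
    # d^2 = (a - b)^T * inv_cov * (a - b)
    d = [x - y for x, y in zip(a, b)]
    t = [sum(inv_cov[i][j] * d[j] for j in range(len(d)))
         for i in range(len(d))]
    return math.sqrt(sum(t[i] * d[i] for i in range(len(d))))

current, previous = [1.0, 2.0], [4.0, 6.0]
identity = [[1.0, 0.0], [0.0, 1.0]]  # placeholder inverse covariance
print(euclidean(current, previous))              # 5.0
print(city_block(current, previous))             # 7.0
print(mahalanobis(current, previous, identity))  # 5.0
```

The smaller the chosen distance, the higher the level of similarity used for ranking the retrieved previous-patient data sets.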
receive a current patient set of data relating to a current patient; compare the current patient set of data to a plurality of previous patient sets of data, each of the previous patient sets of data corresponding to a previous patient; select a plurality of the previous patient sets of data based on a level of similarity between the selected plurality of previous patient sets of data and the current patient set of data; provide the plurality of selected previous patient sets of data to a user; generate a treatment plan based on corresponding treatment plans of the plurality of selected previous patient data sets; and weighting each of the corresponding treatment plans based on a similarity of each of the plurality of selected previous patients to the current patient. 2. The non-transitory computer-readable storage medium of claim 1, wherein the current patient data set comprises one of a set of clinical information about the current patient, a set of calculated information about the patient, a set of quality of life preferences of the patient, and an initial treatment plan for the current patient. 3. The non-transitory computer-readable storage medium of claim 1, wherein the previous patient sets of data comprise one of sets of clinical information about the previous patients, sets of calculated information about the previous patients, treatment plans of the previous patients, and outcome information of the previous patients. 4. The non-transitory computer-readable storage medium of claim 1, wherein a plurality of previous patient sets of data are selected, and wherein the plurality of selected previous patient sets of data are ranked by a level of similarity. 5. (canceled) 6. (canceled) 7. (canceled) 8. (canceled) 9. 
The non-transitory computer-readable storage medium of claim 1, wherein a first element of the treatment plan is copied from a first treatment plan of one of the plurality of selected previous patients, and wherein a second element of the treatment plan is copied from a second treatment plan of a further one of the plurality of selected previous patients, the second element being an element relating to an attribute of the current patient that differs from a corresponding attribute of the selected one of the previous patients, the second element further being an element relating to an attribute of the current patient that is similar to a corresponding attribute of the further one of the previous patients. 10. The non-transitory computer-readable storage medium of claim 1, wherein the level of similarity is based on a distance metric between the current patient and the selected one of the previous patients. 11. The non-transitory computer-readable storage medium of claim 10, wherein the distance metric is one of a Euclidean distance, a city block distance, and a Mahalanobis distance. 12. A system, comprising:
a user interface receiving a current patient set of data relating to a current patient; a database storing a plurality of previous patient sets of data, each of the previous patient sets of data corresponding to a previous patient; a similarity search mechanism searching the plurality of previous patient sets of data and selecting a plurality of the previous patient sets of data having a high degree of similarity to the current patient set of data, wherein the plurality of selected previous patient sets of data is provided to the user by the user interface; and a plan generation system generating a treatment plan for the current patient based on the plurality of selected previous patient data sets, wherein the treatment plans of each of the selected plurality of patients is weighted based on a similarity of each of the selected plurality of the previous patients to the current patient. 13. The system of claim 12, wherein the current patient data set is one of a set of clinical information about the current patient, a set of calculated information about the patient, a set of quality of life preferences of the patient, and an initial treatment plan for the current patient. 14. The system of claim 12, wherein the previous patient sets of data comprise one of sets of clinical information about the previous patients, sets of calculated information about the previous patients, treatment plans of the previous patients, and outcome information of the previous patients. 15. The system of claim 12, wherein a plurality of previous patient sets of data are selected, and wherein the plurality of selected previous patient sets of data are ranked by a level of similarity to the current patient set of data. 16. (canceled) 17. (canceled) 18. (canceled) 19. (canceled) 20. 
The system of claim 16, wherein a first element of the treatment plan is copied from a first treatment plan of the plurality of selected previous patients, and wherein a second element of the treatment plan is copied from a second treatment plan of a further one of the plurality of previous patients, the second element being an element relating to an attribute of the current patient that differs from a corresponding attribute of the selected one of the previous patients, the second element further being an element relating to an attribute of the current patient that is similar to a corresponding attribute of the further one of the previous patients. 21. The system of claim 12, wherein the degree of similarity is based on a distance metric between the current patient and the selected one of the previous patients, and wherein the distance metric is one of a Euclidean distance, a city block distance, and a Mahalanobis distance. 22. (canceled) 23. The system of claim 12, wherein the user interface is a graphical user interface. 24. The system of claim 23, wherein the graphical user interface comprises a retrieval criteria selection element indicating a weighting of a plurality of retrieval criteria. | 2,100 |
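Claim 21 names three candidate distance metrics (Euclidean, city block, Mahalanobis) and claim 12 weights each retrieved treatment plan by the previous patient's similarity to the current patient. A minimal sketch of that ranking-and-weighting step, assuming patients are encoded as numeric feature vectors; the function names and the 1/(1+d) weighting scheme are illustrative choices, not taken from the patent (Mahalanobis distance would additionally require an inverse covariance matrix of the patient features):

```python
import math

def euclidean(x, y):
    # Straight-line distance between two patient feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def city_block(x, y):
    # L1 / Manhattan distance, the "city block" metric of claim 21.
    return sum(abs(a - b) for a, b in zip(x, y))

def similarity_weights(current, previous_patients, metric=euclidean):
    """Weight each previous patient by closeness to the current patient
    (claim 12). The 1/(1 + distance) mapping and the normalization to
    sum to 1 are assumptions; the claims do not fix a weighting scheme."""
    weights = [1.0 / (1.0 + metric(current, p)) for p in previous_patients]
    total = sum(weights)
    return [w / total for w in weights]
```

A plan generation system could then blend the retrieved treatment plans using these normalized weights, so that the most similar previous patients contribute most.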
5,965 | 5,965 | 13,901,343 | 2,176 | A system and process are disclosed for providing users with page previews during page loading events, such that the delay experienced before the display of page content is reduced. The previews may include screenshots of the pages or of portions thereof, and may be generated periodically and cached by the system for delivery to user devices. The process of generating and delivering the previews via the Internet or some other network may be implemented partly or wholly within an intermediary system that sits logically between the user devices and content servers. The process may be used with existing browsers without the need for any browser modifications, or may be used with a “preview-aware” browser that includes special program code for providing page previews. | 1. A computer system that acts as an intermediary between user devices and content sites, the computer system comprising one or more computing devices, and being programmed to at least:
receive, from a user device, a request for a content page hosted by a content site; deliver a preview version of the content page to the user device for temporary display on the user device, such that the user is presented with page content while a non-preview version of the content page is being loaded on the user device; retrieve a substantially current version of the content page from the content site; generate a representation of a difference between the preview version and the substantially current version of the content page based at least partly on an analysis of differences between the preview version and the substantially current version of the content page; and deliver the representation of the difference to the user device. 2. The computer system of claim 1, wherein the representation of the difference comprises one or more images, each image comprising a representation of a portion of the substantially current version that differs from a corresponding portion of the preview version. 3. The computer system of claim 1, wherein the representation of the difference comprises a portion of textual content that differs from a corresponding portion of the preview version. 4. The computer system of claim 1, wherein the preview version comprises a screenshot image of at least a portion of the content page. 5. A computer-implemented method for responding to content requests, the computer-implemented method comprising:
receiving, by a content preview system comprising one or more computing devices, a request for a content page hosted by a content site separate from the content preview system, the request received from a user device; in response to the request:
delivering a preview version of the content page to the user device;
generating a preview update based at least partly on an analysis of differences between the preview version and a substantially current version of the content page; and
delivering the preview update to the user device. 6. The computer-implemented method of claim 5, further comprising determining to generate the preview update based at least partly on a configuration setting associated with the user device. 7. The computer-implemented method of claim 5, further comprising enabling user devices to request only preview content. 8. The computer-implemented method of claim 5, further comprising retrieving the substantially current version of the content page from one of: the content site or a content cache. 9. The computer-implemented method of claim 5, further comprising retrieving the preview version from a content cache. 10. The computer-implemented method of claim 5, further comprising generating the preview version prior to receiving the request from the user device. 11. The computer-implemented method of claim 5, wherein the preview version comprises one or more preview images, each preview image comprising a representation of at least a portion of the content page. 12. The computer-implemented method of claim 11, wherein the preview update comprises a replacement image, the replacement image comprising a representation of a portion of the substantially current content page corresponding to a location of the portion represented by a preview image. 13. The computer-implemented method of claim 11, wherein generating the preview update comprises determining whether to include a replacement image of a particular portion of the content page based at least partly on whether a preview image corresponding to the particular location was generated based on a characteristic of the user device. 14. The computer-implemented method of claim 11, wherein generating the preview update comprises determining whether to include a replacement image of a particular location of the content page based at least partly on whether the particular location is associated with an advertisement. 15. 
The computer-implemented method of claim 5, wherein the preview version comprises preview textual content corresponding to a first text portion of the content page. 16. The computer-implemented method of claim 15, wherein the preview update comprises replacement textual content, the replacement textual content corresponding to the first text portion of the content page. 17. The computer-implemented method of claim 5, wherein the preview update is delivered without receiving a follow-up request from the user device. 18. The computer-implemented method of claim 5, further comprising delivering at least a portion of the preview version and at least a portion of the preview update in parallel. 19. The computer-implemented method of claim 5, further comprising determining whether to provide a preview update to the user device. 20. The computer-implemented method of claim 19, wherein determining whether to provide a preview update is based at least partly on one of: a characteristic of a network connection to the user device, available computing resources associated with the user device, or a user-selectable configuration option associated with the user device. 21. A non-transitory computer readable medium comprising a computer-executable browser module configured to at least:
receive, from an intermediary system in response to a request for a content page, a preview representation of the content page; cause display of the preview representation; receive a preview update without transmitting a second request, the preview update comprising an updated portion corresponding to a first portion of the preview representation; and automatically cause display of the preview update, wherein at least a portion of the preview representation remains displayed. 22. The non-transitory computer readable medium of claim 21, wherein the browser module is further configured to provide a user-selectable configuration option regarding whether to receive a preview update. 23. The non-transitory computer readable medium of claim 21, wherein automatically causing display of the preview update comprises replacing display of the first portion of the preview representation with the corresponding updated portion of the preview update. 24. The non-transitory computer readable medium of claim 21, wherein the preview update is based at least partly on a comparison of differences between the preview representation and a substantially current version of the content page. 25. The non-transitory computer readable medium of claim 21, wherein the preview representation comprises a second portion generated for a characteristic of the browser module, and wherein the second portion remains displayed after automatically causing display of preview update. 26. The non-transitory computer readable medium of claim 21, wherein the first portion of the preview representation comprises one of an image, text, or a dynamic view of content rendered at the intermediary system. | 2,100 |
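Claims 1 and 5 of this record have the intermediary generate a "representation of a difference" between the cached preview and the substantially current page, and claim 23 has the browser replace only the changed portions while the rest of the preview stays displayed. A minimal sketch of that diff-and-patch flow, assuming page content is addressed as keyed portions (the location-key scheme and function names are illustrative assumptions, not the patent's actual implementation):

```python
def generate_preview_update(preview_portions, current_portions):
    """Intermediary side (claims 1 and 5): keep only the portions of the
    substantially current version that differ from the preview version."""
    return {
        location: content
        for location, content in current_portions.items()
        if preview_portions.get(location) != content
    }

def apply_preview_update(displayed, update):
    """Browser side (claim 23): replace each changed portion in place;
    unchanged portions of the preview remain displayed."""
    merged = dict(displayed)
    merged.update(update)
    return merged
```

Because the update carries only changed portions, the user sees cached content immediately and pays transfer cost only for what actually differs.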
5,966 | 5,966 | 15,214,245 | 2,164 | Systems and methods for improving accuracy of web content classification by removing perceived noise are provided. The system receives a Uniform Resource Locator (URL) of a web page that needs to be classified, and parses the web page so as to construct a tree containing a list of tags. Unwanted tags are removed from the list of tags to yield a tree containing only desired tags that form part of the web page. Subsequently, a list of hyperlinks is generated based on processing of the tree having desired tags, wherein the list of hyperlinks can include unwanted/undesired/invalid hyperlinks and valid hyperlinks. Unwanted hyperlinks can accordingly be removed from the list of hyperlinks, each valid hyperlink can be categorized based on a list of categories, and a final category for the web page is determined based on a vector analysis of each category assigned to each valid hyperlink. | 1. A system for web page classification comprising:
a non-transitory storage device having embodied therein one or more routines operable to facilitate categorization of content of a web page; and one or more processors coupled to the non-transitory storage device and operable to execute the one or more routines, wherein the one or more routines include:
a Uniform Resource Locator (URL) receive module, which when executed by the one or more processors, receives a URL of a web page to be categorized;
a URL tree construction module, which when executed by the one or more processors, constructs a tree for the web page, wherein the tree represents a layout and a hierarchy of a plurality of tags that are used to represent the web page;
a tag based filtration module, which when executed by the one or more processors, filters out a first set of tags from the plurality of tags to obtain desired tags that are indicative of relevant and actual content displayed by or linked by the web page;
a hyperlink list retrieval module, which when executed by the one or more processors, retrieves a list of hyperlinks that form part of the web page based on processing of the desired tags;
a valid hyperlink list generation module, which when executed by the one or more processors, processes the list of hyperlinks to generate a valid hyperlink list based on rejection of any or a combination of irrelevant hyperlinks, stop hyperlinks, and hyperlinks having a distance from a valid hyperlink of greater than a defined threshold; and
a valid hyperlink list based categorization module, which when executed by the one or more processors, processes the valid hyperlink list to associate a final category from a plurality of categories with the web page. 2. The system of claim 1, wherein the first set of tags comprises tags indicative of display parameters of the web page, tags indicative of a template associated with the web page, tags indicative of layout parameters of the web page, tags indicative of advertisement information to be displayed concurrently with the content of the web page, leaf node tags, and tags indicative of formatting attributes of the web page. 3. The system of claim 1, wherein the valid hyperlink list based categorization module is further configured to, for each valid hyperlink in the valid hyperlink list, associate a category from the plurality of categories to the valid hyperlink, to generate a category vector containing information regarding a number of valid hyperlinks observed within the web page that are associated with the plurality of categories and to identify the final category based on the category from the plurality of categories whose number is greatest. 4. The system of claim 1, wherein the URL tree construction module is configured to preprocess the web page to construct the tree. 5. The system of claim 1, wherein the final category is associated with the URL. 6. The system of claim 1, wherein the final category is selected from any or a combination of News, Sports, Current affairs, Movies, Television, Entertainment, Business, Technology, Photos, Blogs, Country, World, City, Life & Style, Porn, Malicious URL, Phishing URL, Spamming URL, Malware URL, and a multi-type attack URL. 7. The system of claim 1, wherein the web page is assigned one or more sub-categories within the final category based on processing of the valid hyperlink list with respect to a list of available sub-categories within the final category. 8. 
The system of claim 1, wherein the web page is represented in a form of a HyperText Markup Language (HTML) or an extensible HyperText Markup Language (XHTML) document. 9. The system of claim 1, wherein one or more final categories are associated with the web page based on the processing of the valid hyperlink list. 10. A method comprising:
receiving, by a computer system, a Uniform Resource Locator (URL) of a web page to be categorized; constructing, by the computer system, a tree for the web page, wherein the tree represents a layout and a hierarchy of a plurality of tags that are used in the web page; filtering out, by the computer system, a first set of tags from the plurality of tags to obtain desired tags that are indicative of relevant and actual content displayed by or linked by the web page; retrieving, by the computer system, a list of hyperlinks that form part of the web page based on processing of the desired tags; processing, by the computer system, the list of hyperlinks to generate a valid hyperlink list based on rejection of any or a combination of irrelevant hyperlinks, stop hyperlinks, and hyperlinks having a distance from a valid hyperlink of greater than a defined threshold; and processing, by the computer system, the valid hyperlink list to associate a final category from a plurality of categories with the web page. 11. The method of claim 10, wherein the first set of tags comprise tags indicative of display parameters of the web page, tags indicative of a template of the web page, tags indicative of layout parameters of the web page, tags indicative of advertisement information to be displayed concurrently with content of the web page, leaf node tags, and tags indicative of formatting attributes of the web page. 12. The method of claim 10, further comprising for each valid hyperlink in the valid hyperlink list:
associating a category from the plurality of categories to the valid hyperlink, to generate a category vector containing information regarding a number of valid hyperlinks observed within the web page that are associated with the plurality of categories; and identifying the final category based on the category of the plurality of categories whose number is greatest. 13. The method of claim 10, further comprising pre-processing the web page to construct the tree. 14. The method of claim 10, further comprising associating the final category with the URL. 15. The method of claim 10, wherein the final category is selected from any or a combination of News, Sports, Current affairs, Movies, Television, Entertainment, Business, Technology, Photos, Blogs, Country, World, City, Life & Style, Porn, Malicious URL, Phishing URL, Spamming URL, Malware URL, and a multi-type attack URL. 16. The method of claim 10, further comprising assigning the web page to one or more sub-categories within the final category based on processing of the valid hyperlink list with respect to a list of available sub-categories within the final category. 17. The method of claim 10, wherein the web page comprises a HyperText Markup Language (HTML) or an extensible HyperText Markup Language (XHTML) document. 18. The method of claim 10, wherein one or more final categories are associated with the web page based on the processing of the valid hyperlink list. | Systems and methods for improving accuracy of web content classification by removing perceived noise are provided. The system receives a Uniform Resource Locator (URL) of a web page that needs to be classified, and parses the web page so as to construct a tree containing a list of tags. Unwanted tags are removed from the list of tags to yield a tree containing only desired tags that form part of the web page. 
Subsequently, a list of hyperlinks are based on processing of the tree having desired tags, wherein the list of hyperlinks can include unwanted/undesired/invalid hyperlinks and valid hyperlinks. Unwanted hyperlinks can accordingly be removed from the list of hyperlinks, and each valid hyperlink can be categorized based on a list of categories, and a final category for the web page is determined based on a vector analysis of each category assigned to each valid hyperlink.1. A system for web page classification comprising:
a non-transitory storage device having embodied therein one or more routines operable to facilitate categorization of content of a web page; and one or more processors coupled to the non-transitory storage device and operable to execute the one or more routines, wherein the one or more routines include:
a Uniform Resource Locator (URL) receive module, which when executed by the one or more processors, receives a URL of a web page to be categorized;
a URL tree construction module, which when executed by the one or more processors, constructs a tree for the web page, wherein the tree represents a layout and a hierarchy of a plurality of tags that are used to represent the web page;
a tag based filtration module, which when executed by the one or more processors, filters out a first set of tags from the plurality of tags to obtain desired tags that are indicative of relevant and actual content displayed by or linked by the web page;
a hyperlink list retrieval module, which when executed by the one or more processors, retrieves a list of hyperlinks that form part of the web page based on processing of the desired tags;
a valid hyperlink list generation module, which when executed by the one or more processors, processes the list of hyperlinks to generate a valid hyperlink list based on rejection of any or a combination of irrelevant hyperlinks, stop hyperlinks, and hyperlinks having a distance from a valid hyperlink of greater than a defined threshold; and
a valid hyperlink list based categorization module, which when executed by the one or more processors, processes the valid hyperlink list to associate a final category from a plurality of categories with the web page. 2. The system of claim 1, wherein the first set of tags comprises tags indicative of display parameters of the web page, tags indicative of a template associated with the web page, tags indicative of layout parameters of the web page, tags indicative of advertisement information to be displayed concurrently with the content of the web page, leaf node tags, and tags indicative of formatting attributes of the web page. 3. The system of claim 1, wherein the valid hyperlink list based categorization module is further configured to, for each valid hyperlink in the valid hyperlink list, associate a category from the plurality of categories to the valid hyperlink, to generate a category vector containing information regarding a number of valid hyperlinks observed within the web page that are associated with the plurality of categories and to identify the final category based on the category from the plurality of categories whose number is greatest. 4. The system of claim 1, wherein the URL tree construction module is configured to preprocess the web page to construct the tree. 5. The system of claim 1, wherein the final category is associated with the URL. 6. The system of claim 1, wherein the final category is selected from any or a combination of News, Sports, Current affairs, Movies, Television, Entertainment, Business, Technology, Photos, Blogs, Country, World, City, Life & Style, Porn, Malicious URL, Phishing URL, Spamming URL, Malware URL, and a multi-type attack URL. 7. The system of claim 1, wherein the web page is assigned one or more sub-categories within the final category based on processing of the valid hyperlink list with respect to a list of available sub-categories within the final category. 8. 
The system of claim 1, wherein the web page is represented in a form of a HyperText Markup Language (HTML) or an extensible HyperText Markup Language (XHTML) document. 9. The system of claim 1, wherein one or more final categories are associated with the web page based on the processing of the valid hyperlink list. 10. A method comprising:
receiving, by a computer system, a Uniform Resource Locator (URL) of a web page to be categorized; constructing, by the computer system, a tree for the web page, wherein the tree represents a layout and a hierarchy of a plurality of tags that are used in the web page; filtering out, by the computer system, a first set of tags from the plurality of tags to obtain desired tags that are indicative of relevant and actual content displayed by or linked by the web page; retrieving, by the computer system, a list of hyperlinks that form part of the web page based on processing of the desired tags; processing, by the computer system, the list of hyperlinks to generate a valid hyperlink list based on rejection of any or a combination of irrelevant hyperlinks, stop hyperlinks, and hyperlinks having a distance from a valid hyperlink of greater than a defined threshold; and processing, by the computer system, the valid hyperlink list to associate a final category from a plurality of categories with the web page. 11. The method of claim 10, wherein the first set of tags comprise tags indicative of display parameters of the web page, tags indicative of a template of the web page, tags indicative of layout parameters of the web page, tags indicative of advertisement information to be displayed concurrently with content of the web page, leaf node tags, and tags indicative of formatting attributes of the web page. 12. The method of claim 10, further comprising for each valid hyperlink in the valid hyperlink list:
associating a category from the plurality of categories to the valid hyperlink, to generate a category vector containing information regarding a number of valid hyperlinks observed within the web page that are associated with the plurality of categories; and identifying the final category based on the category of the plurality of categories whose number is greatest. 13. The method of claim 10, further comprising pre-processing the web page to construct the tree. 14. The method of claim 10, further comprising associating the final category with the URL. 15. The method of claim 10, wherein the final category is selected from any or a combination of News, Sports, Current affairs, Movies, Television, Entertainment, Business, Technology, Photos, Blogs, Country, World, City, Life & Style, Porn, Malicious URL, Phishing URL, Spamming URL, Malware URL, and a multi-type attack URL. 16. The method of claim 10, further comprising assigning the web page to one or more sub-categories within the final category based on processing of the valid hyperlink list with respect to a list of available sub-categories within the final category. 17. The method of claim 10, wherein the web page comprises a HyperText Markup Language (HTML) or an extensible HyperText Markup Language (XHTML) document. 18. The method of claim 10, wherein one or more final categories are associated with the web page based on the processing of the valid hyperlink list. | 2,100 |
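The category-vector voting recited in claims 3 and 12 above (associate a category with each valid hyperlink, count how many valid hyperlinks fall in each category, and pick the category with the greatest count as the final category) can be sketched as below. The category labels and the pre-assigned per-hyperlink categories are invented for illustration; the claims do not specify how an individual hyperlink is categorized.

```python
from collections import Counter

def categorize_page(valid_hyperlink_categories):
    """Build the category vector (counts of valid hyperlinks per category)
    and return the final category whose count is greatest."""
    category_vector = Counter(valid_hyperlink_categories)
    final_category, _count = category_vector.most_common(1)[0]
    return final_category, dict(category_vector)

# Hypothetical categories already assigned to the page's valid hyperlinks.
categories = ["Sports", "News", "Sports", "Movies", "Sports"]
final, vector = categorize_page(categories)
print(final)   # Sports
print(vector)  # {'Sports': 3, 'News': 1, 'Movies': 1}
```

Sub-category assignment (claims 7 and 16) would follow the same voting pattern, restricted to the list of available sub-categories within the chosen final category.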
5,967 | 5,967 | 15,581,038 | 2,176 | A system for detecting, analyzing and manipulating devices, wherein a virtual graphics component displays an operating state of the device and is integrated in a moving image of the device as a complete image, and wherein the complete image is updated at regular time intervals. | 1. A system for detection, analysis and manipulation of a device, the system comprising:
a mobile terminal having an optical detection unit, a display, and an operating unit; a marking pattern arranged on the device, wherein via the optical detection unit, the marking pattern on the device is detected and a representative image of the device is generated on the display; a cache or memory for storing device data received from the device; a graphics memory for storing device-specific, virtual graphics components; a processor that adapts a virtual graphics component based on device data received and combines or overlays it with the acquired image of the device to form a complete image; wherein the virtual graphics component displays an operating state of the device, and wherein the display outputs the complete image that is updated in predetermined intervals. 2. The system according to claim 1, wherein the complete image contains buttons, and wherein the buttons, which are adapted to be animated, are an interactive virtual user interface for access to states or a change in states of the device. 3. The system according to claim 1, wherein the marking pattern applied or arranged on the device comprises at least two mutually perpendicular lines or edges or a frame based on which the virtual graphics component is configured to be aligned to the image displayed on the device. 4. The system according to claim 1, wherein the marking pattern applied on the device includes dot or line-shaped encodings, based on which the device is identifiable. 5. The system according to claim 1, wherein the complete image includes graphical user interface buttons, which can be displayed redundant to switching functions on the device or replace switching functions provided on the device. 6. 
The system according to claim 4, wherein, in the graphics memory or in the database, a plurality of virtual graphics components is stored, wherein corresponding to the detected encoding, pre-stored virtual graphics components matching the device are loadable and measured values and/or setting values or buttons of the device are transferred to the graphic components and are updated. 7. The system according to claim 4, wherein in a memory and/or the database, device-specific operating instructions, setup menus, analysis tools, error analysis or data log functions are stored, which are called up in dependence on the detected encoding, are transmitted server-based or locally between the mobile terminal and the memory and/or the database and/or are executable via the mobile terminal and/or the device. 8. The system according to claim 1, wherein, between the device and the mobile terminal, a radio connection, an optical connection and/or a cable are or is provided for a bidirectional data exchange, and wherein the device and the mobile terminal are interconnected via a network, a data bus system and/or an internet connection with the optional aid of a data server. 9. The system according to claim 1, wherein user-specific access rights are provided, which include various access levels, different access extents and/or different access depths for different users to features of the device. 10. The system according to claim 1, wherein the virtual graphics components contain animated switches and graphic elements for integration into the complete image shown by the display of the mobile terminal for the user, which are optically modeled after at least one real switch of the device. 11. The system according to claim 1, wherein the display of the mobile terminal comprises a touch sensitive screen. 12. 
The system according to claim 11, wherein the touch screen is configured to provide graphic user interface buttons and/or keypads for manipulating functions of the device within the virtual graphics component. 13. The system according to claim 11, wherein the mobile terminal and the device are designed such that for activation of a data transmission between the mobile terminal and the device, the mobile terminal is placed on the device with the touch-sensitive screen. 14. The system according to claim 13, wherein, for supporting the mobile terminal, the device comprises a support surface having a contact edge, wherein the support surface comprises at least one detection unit for detecting the contact of the device. 15. The system according to claim 3, wherein an analysis unit is provided, wherein the analysis unit is designed to transform the marking pattern, which comprises an alphanumeric or numerical encoding, into a measured value with a unit or another value and to integrate it into the complete image. 16. The system according to claim 1, wherein the detected and optically analyzed image is a moving image. 17. The system according to claim 1, wherein, during remote activation and with relative movement of the mobile terminal to the device, the mobile terminal continues to display the acquired and visually analyzed image of the device as a single or still image and continuously updates the virtual graphics component. 18. The system according to claim 1, wherein the device comprises a measuring device, a unit of equipment and/or an equipment component. 19. The system according to claim 1, wherein the device is a pressure sensor, temperature sensor, force sensor, density meter, flow meter, or level meter connected to processing equipment, including transfer piping or liquid storage tanks or industrial process vessel. 20. 
A method for detecting, analyzing and manipulating a device using a mobile terminal comprising an optical detection unit, a display, and an operating unit, the method comprising:
detecting via the optical detection unit, marking patterns on the device; generating an image of the device on the display; deriving device data from the marking patterns; associating the device data with a virtual graphics component; loading a virtual graphics component based on the associated device data; processing the virtual graphics component into a complete image with a detected and optically analyzed image of the device; displaying, via the virtual graphics component, an operating state of the device; and updating the complete image at predetermined time intervals.
a mobile terminal having an optical detection unit, a display, and an operating unit; a marking pattern arranged on the device, wherein via the optical detection unit, the marking pattern on the device is detected and a representative image of the device is generated on the display; a cache or memory for storing device data received from the device; a graphics memory for storing device-specific, virtual graphics components; a processor that adapts a virtual graphics component based on device data received and combines or overlays it with the acquired image of the device to form a complete image; wherein the virtual graphics component displays an operating state of the device, and wherein the display outputs the complete image that is updated in predetermined intervals. 2. The system according to claim 1, wherein the complete image contains buttons, and wherein the buttons, which are adapted to be animated, are an interactive virtual user interface for access to states or a change in states of the device. 3. The system according to claim 1, wherein the marking pattern applied or arranged on the device comprises at least two mutually perpendicular lines or edges or a frame based on which the virtual graphics component is configured to be aligned to the image displayed on the device. 4. The system according to claim 1, wherein the marking pattern applied on the device includes dot or line-shaped encodings, based on which the device is identifiable. 5. The system according to claim 1, wherein the complete image includes graphical user interface buttons, which can be displayed redundant to switching functions on the device or replace switching functions provided on the device. 6. 
The system according to claim 4, wherein, in the graphics memory or in the database, a plurality of virtual graphics components is stored, wherein corresponding to the detected encoding, pre-stored virtual graphics components matching the device are loadable and measured values and/or setting values or buttons of the device are transferred to the graphic components and are updated. 7. The system according to claim 4, wherein in a memory and/or the database, device-specific operating instructions, setup menus, analysis tools, error analysis or data log functions are stored, which are called up in dependence on the detected encoding, are transmitted server-based or locally between the mobile terminal and the memory and/or the database and/or are executable via the mobile terminal and/or the device. 8. The system according to claim 1, wherein, between the device and the mobile terminal, a radio connection, an optical connection and/or a cable are or is provided for a bidirectional data exchange, and wherein the device and the mobile terminal are interconnected via a network, a data bus system and/or an internet connection with the optional aid of a data server. 9. The system according to claim 1, wherein user-specific access rights are provided, which include various access levels, different access extents and/or different access depths for different users to features of the device. 10. The system according to claim 1, wherein the virtual graphics components contain animated switches and graphic elements for integration into the complete image shown by the display of the mobile terminal for the user, which are optically modeled after at least one real switch of the device. 11. The system according to claim 1, wherein the display of the mobile terminal comprises a touch sensitive screen. 12. 
The system according to claim 11, wherein the touch screen is configured to provide graphic user interface buttons and/or keypads for manipulating functions of the device within the virtual graphics component. 13. The system according to claim 11, wherein the mobile terminal and the device are designed such that for activation of a data transmission between the mobile terminal and the device, the mobile terminal is placed on the device with the touch-sensitive screen. 14. The system according to claim 13, wherein, for supporting the mobile terminal, the device comprises a support surface having a contact edge, wherein the support surface comprises at least one detection unit for detecting the contact of the device. 15. The system according to claim 3, wherein an analysis unit is provided, wherein the analysis unit is designed to transform the marking pattern, which comprises an alphanumeric or numerical encoding, into a measured value with a unit or another value and to integrate it into the complete image. 16. The system according to claim 1, wherein the detected and optically analyzed image is a moving image. 17. The system according to claim 1, wherein, during remote activation and with relative movement of the mobile terminal to the device, the mobile terminal continues to display the acquired and visually analyzed image of the device as a single or still image and continuously updates the virtual graphics component. 18. The system according to claim 1, wherein the device comprises a measuring device, a unit of equipment and/or an equipment component. 19. The system according to claim 1, wherein the device is a pressure sensor, temperature sensor, force sensor, density meter, flow meter, or level meter connected to processing equipment, including transfer piping or liquid storage tanks or industrial process vessel. 20. 
A method for detecting, analyzing and manipulating a device using a mobile terminal comprising an optical detection unit, a display, and an operating unit, the method comprising:
detecting via the optical detection unit, marking patterns on the device; generating an image of the device on the display; deriving device data from the marking patterns; associating the device data with a virtual graphics component; loading a virtual graphics component based on the associated device data; processing the virtual graphics component into a complete image with a detected and optically analyzed image of the device; displaying, via the virtual graphics component, an operating state of the device; and updating the complete image at predetermined time intervals.
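Claim 6 of the system above describes a graphics memory in which pre-stored virtual graphics components are keyed by the dot or line-shaped encoding detected in the marking pattern, with live measured values transferred into the loaded component. A minimal sketch follows; the encoding strings, component names, and field layout are all invented for illustration and are not taken from the patent.

```python
# Hypothetical graphics memory: detected encoding -> pre-stored virtual
# graphics component template matching the device (claim 6).
GRAPHICS_MEMORY = {
    "enc-001": {"component": "pressure_gauge", "unit": "bar"},
    "enc-002": {"component": "level_indicator", "unit": "%"},
}

def load_graphics_component(detected_encoding, measured_value):
    """Load the pre-stored component for the detected encoding and transfer
    the device's current measured value into it for display in the
    complete image."""
    component = dict(GRAPHICS_MEMORY[detected_encoding])  # copy the template
    component["value"] = measured_value                   # update with live data
    return component

overlay = load_graphics_component("enc-001", 2.5)
print(overlay)  # {'component': 'pressure_gauge', 'unit': 'bar', 'value': 2.5}
```

In the claimed system this lookup would be repeated at the predetermined update interval so the overlaid component tracks the device's operating state.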
5,968 | 5,968 | 13,282,802 | 2,159 | A hybrid approach that combines a small “native” shell and a rich web-based user-interface (UI) is used to develop an application. The shell portion includes a database adapter developed for the targeted platform. The web-based UI can be shared between different devices (mobile/non-mobile) that can host a web-browser. Once the UI has been developed it can be used across mobile devices and desktop devices. A database proxy/adaptor intercepts database calls to be performed on a database defined according to a first specification. The database calls are intercepted before they can be executed against the database. The database adapter processes the database calls to be performed against a different database. The results are returned to the caller as if the database command was performed following the first specification against the database identified in the database command. | 1. A method for intercepting and processing database commands, comprising:
detecting a database command from a caller that requests an operation to be performed on a database that follows an Application Programming Interface (API) that is defined using a first specification; intercepting the database command; preventing the database command from being executed against the database; processing the intercepted database command comprising identifying a different database from the database on which to perform the database command; passing the processed intercepted database command to a database engine to perform the requested operation defined by the database command on the different database; receiving results from the database engine; and providing the results to the caller. 2. The method of claim 1, wherein the database command is issued by a web browser. 3. The method of claim 1, wherein intercepting the database command occurs before the database command reaches a language engine associated with a web page that issues the database command. 4. The method of claim 1, further comprising pre-processing the intercepted database command to emulate the processing of the database command as defined by the first specification. 5. The method of claim 1, wherein intercepting the database command comprises intercepting each received database command and automatically passing the database command to a database adapter that is coupled to the different database. 6. The method of claim 1, wherein each intercepted database call is performed on the different database. 7. The method of claim 1, further comprising processing a database command following the different specification before the database command is issued by a web browser. 8. The method of claim 1, further comprising loading the different database before the database associated with a database command is identified by a web browser. 9. 
The method of claim 1, further comprising pre-processing the results received by the database engine to emulate returned results following the first specification. 10. A computer-readable medium having computer-executable instructions for intercepting and processing database commands, comprising:
detecting a database command issued by a web browser that requests an operation to be performed on a database; intercepting the database command; preventing the database command from being executed against the database; passing the database command to a database engine to perform the requested operation defined by the database command on a different database; receiving results from the database engine; pre-processing the results received by the database engine to emulate returned results following the first specification; and providing the results to the caller. 11. The computer-readable medium of claim 10, wherein intercepting the database command occurs before the database command reaches a language engine associated with a web page that issues the database command. 12. The computer-readable medium of claim 10, wherein intercepting the database command comprises intercepting each received database command and automatically passing the database command to a database adapter that is coupled to the different database. 13. The computer-readable medium of claim 10, wherein each intercepted database call is performed on the different database. 14. The computer-readable medium of claim 10, further comprising processing a database command to perform an action on the different database before the database command is issued by a web browser. 15. The computer-readable medium of claim 10, further comprising loading the different database before the database is identified by a web browser. 16. A system for intercepting and processing database commands, comprising:
a database; a different database; a database engine configured to perform database commands; a web-browser; a processor and a computer-readable medium; an operating environment stored on the computer-readable medium and executing on the processor; and a database proxy/adaptor operating under the control of the operating environment and operative to:
detect a database command issued by the web browser that requests an operation to be performed on the database;
intercept the database command;
prevent the database command from being executed against the database;
pass the database command to the database engine to perform the requested operation defined by the database command on the different database;
receive results from the database engine; and
provide the results. 17. The system of claim 16, wherein intercepting the database command occurs before the database command reaches a language engine associated with the web page that issues the database command. 18. The system of claim 16, wherein each intercepted database call is performed on the different database. 19. The system of claim 16, further comprising processing a database command to perform an action on the different database before the database command is issued by the web browser. 20. The system of claim 16, further comprising loading the different database before the database is identified by the web browser.
detecting a database command from a caller that requests an operation to be performed on a database that follows an Application Programming Interface (API) that is defined using a first specification; intercepting the database command; preventing the database command from being executed against the database; processing the intercepted database command comprising identifying a different database from the database on which to perform the database command; passing the processed intercepted database command to a database engine to perform the requested operation defined by the database command on the different database; receiving results from the database engine; and providing the results to the caller. 2. The method of claim 1, wherein the database command is issued by a web browser. 3. The method of claim 1, wherein intercepting the database command occurs before the database command reaches a language engine associated with a web page that issues the database command. 4. The method of claim 1, further comprising pre-processing the intercepted database command to emulate the processing of the database command as defined by the first specification. 5. The method of claim 1, wherein intercepting the database command comprises intercepting each received database command and automatically passing the database command to a database adapter that is coupled to the different database. 6. The method of claim 1, wherein each intercepted database call is performed on the different database. 7. The method of claim 1, further comprising processing a database command following the different specification before the database command is issued by a web browser. 8. The method of claim 1, further comprising loading the different database before the database associated with a database command is identified by a web browser. 9. 
The method of claim 1, further comprising pre-processing the results received by the database engine to emulate returned results following the first specification. 10. A computer-readable medium having computer-executable instructions for intercepting and processing database commands, comprising:
detecting a database command issued by a web browser that requests an operation to be performed on a database; intercepting the database command; preventing the database command from being executed against the database; passing the database command to a database engine to perform the requested operation defined by the database command on a different database; receiving results from the database engine; pre-processing the results received by the database engine to emulate returned results following the first specification; and providing the results to the caller. 11. The computer-readable medium of claim 10, wherein intercepting the database command occurs before the database command reaches a language engine associated with a web page that issues the database command. 12. The computer-readable medium of claim 10, wherein intercepting the database command comprises intercepting each received database command and automatically passing the database command to a database adapter that is coupled to the different database. 13. The computer-readable medium of claim 10, wherein each intercepted database call is performed on the different database. 14. The computer-readable medium of claim 10, further comprising processing a database command to perform an action on the different database before the database command is issued by a web browser. 15. The computer-readable medium of claim 10, further comprising loading the different database before the database is identified by a web browser. 16. A system for intercepting and processing database commands, comprising:
a database; a different database; a database engine configured to perform database commands; a web-browser; a processor and a computer-readable medium; an operating environment stored on the computer-readable medium and executing on the processor; and a database proxy/adaptor operating under the control of the operating environment and operative to:
detect a database command issued by the web browser that requests an operation to be performed on the database;
intercept the database command;
prevent the database command from being executed against the database;
pass the database command to the database engine to perform the requested operation defined by the database command on the different database;
receive results from the database engine; and
provide the results. 17. The system of claim 16, wherein intercepting the database command occurs before the database command reaches a language engine associated with the web page that issues the database command. 18. The system of claim 16, wherein each intercepted database call is performed on the different database. 19. The system of claim 16, further comprising processing a database command to perform an action on the different database before the database command is issued by the web browser. 20. The system of claim 16, further comprising loading the different database before the database is identified by the web browser.
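The proxy/adaptor behavior claimed above (intercept each database command, prevent it from executing against the database the caller named, execute it against a different database, and hand the results back as if the original database had answered) can be sketched as below. SQLite stands in for the database engine purely for illustration; the patent's first specification and the caller-named database are not modeled.

```python
import sqlite3

class DatabaseProxy:
    """Illustrative stand-in for the claimed database proxy/adaptor."""

    def __init__(self):
        # The "different database" the adapter actually executes against.
        self._engine = sqlite3.connect(":memory:")

    def execute(self, command, params=()):
        """Intercept the caller's command, run it on the different database,
        and return results to the caller as if the command had been
        performed on the database it originally named."""
        cursor = self._engine.execute(command, params)
        self._engine.commit()
        return cursor.fetchall()

proxy = DatabaseProxy()
proxy.execute("CREATE TABLE items (name TEXT)")
proxy.execute("INSERT INTO items VALUES (?)", ("widget",))
print(proxy.execute("SELECT name FROM items"))  # [('widget',)]
```

Because every call goes through `execute`, the caller never touches the database it addressed, matching the "preventing the database command from being executed against the database" step of claims 1, 10, and 16.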
5,969 | 5,969 | 16,172,950 | 2,142 | A method and apparatus include a terminal device receiving a first message from a server that hosts a service available to the terminal device. The first message includes information about at least one attribute relating to a changeability of that at least one attribute having been changed. The terminal device sends a second message to the server in response to the first message that includes information identifying the information about the at least one attribute of the first message. The terminal device receives a third message from the server that includes information indicating a changeability setting for each of the at least one attribute identified in the first message. The terminal device updates how a user interface for the service is to be displayed so the user interface is displayed with attributes relating to the service being indicated as changeable or unchangeable in accordance with the third message. | 1-15. (canceled) 16. A method of communicating changeability attribute information comprising:
a terminal device sending a second message to a server in response to a first message that comprises information about a changeability of at least one attribute for one or more features of a service hosted by the server having been changed received from the server that hosts the service, the second message comprising information identifying the information about the at least one attribute of the first message, the terminal device being an electronic device comprising a processor connected to non-transitory memory configured to generate a user interface for use of the service including identifiers for features comprising a first feature having a first attribute of the at least one attribute; and the terminal device updating how the user interface for the service is to be displayed in a display device connected to the terminal device in response to a third message received from the server that comprises information indicating a changeability setting for the first attribute such that the user interface is displayable so that one of:
(i) the identifier of the first attribute is changed from indicating the first attribute is an unchangeable attribute to indicating that the first attribute is a changeable attribute that is adjustable by a user via input that is entered via the user interface in accordance with the information indicating the changeability setting of the third message, and
(ii) the identifier of the first attribute is changed from indicating the first attribute is a changeable attribute that is adjustable by a user via input that is entered via the user interface to indicating that the first attribute is an unchangeable attribute that the user cannot adjust in accordance with the information indicating the changeability setting of the third message. 17. The method of claim 16 wherein the updating of the user interface occurs such that the user interface is displayable so that only indicia for features having attributes that are changeable by the user are displayable. 18. The method of claim 16 wherein the updating of the user interface occurs such that the user interface is displayable so that indicia for attributes of the first feature that are changeable by the user are displayed adjacent to adjustability indicia that indicate at least one changeability option for the attributes. 19. The method of claim 18 wherein indicia corresponding to attributes the user is unauthorized to change are displayed to indicate those attributes are not changeable by the user. 20. The method of claim 18 wherein indicia corresponding to the attributes the user is unauthorized to change are not displayed on the user interface. 21. The method of claim 16 comprising:
the server sending the first message. 22. The method of claim 21, wherein the first message sent by the server is sent upon a determination that the attributes that are changed is more than a predetermined number of attributes. 23. The method of claim 21 wherein the first message is comprised of a field of private data, the field of private data having the information about changeability of attributes that have been changed. 24. The method of claim 16 wherein the terminal device sends the second message upon an application associated with the service being activated on the terminal device. 25. The method of claim 16 wherein the terminal device sends the second message in response to the terminal device being turned on after having been turned off. 26. The method of claim 16 wherein an application associated with the service is hidden from the user when the terminal device receives the first message and the terminal device sends the second message in response to the application associated with the service being activated so that the application is no longer hidden from the user. 27. The method of claim 16 wherein the first feature relates to call forwarding. 28. The method of claim 16 wherein the first feature is for a telephony service. 29. A communication system comprising:
at least one terminal device comprising a first terminal device, the first terminal device comprising a processor connected to non-transitory memory; a server that hosts a service, the server comprising a processor connected to a non-transitory computer readable medium, the server communicatively connectable to the first terminal device; the first terminal device configured to receive a first message from the server, the first message comprising information about a first attribute for a first feature of the service relating to a changeability of the first attribute having been changed; the first terminal device configured to send a second message to the server in response to the first message, the second message comprising information identifying the information about the first attribute; the first terminal device configured to receive a third message from the server, the third message comprising information indicating a changeability setting for the first attribute; the first terminal device configured to update how a user interface for the service is to be displayed in a display device connected to the first terminal device in response to the third message such that the user interface is displayable via the display device so that one of:
(i) an identifier for the first attribute is changed from indicating the first attribute is an unchangeable attribute to indicating that the first attribute is a changeable attribute by the user via input that is entered via the user interface in accordance with the information indicating the changeability setting of the third message, and
(ii) an identifier for the first attribute is changed from indicating the first attribute is a changeable attribute that is changeable by the user via input that is entered via the user interface to indicating that the first attribute is an unchangeable attribute that the user is not permitted to adjust in accordance with the information indicating the changeability setting of the third message. 30. The communication system of claim 29 wherein the at least one terminal device is comprised of a second terminal device, the second terminal device being an electronic device comprising a processor connected to non-transitory memory, and wherein:
the second terminal device configured to receive a fourth message from the server, the fourth message comprising information about a second attribute for a second feature of the service relating to a changeability of that second attribute having been changed;
the second terminal device configured to send a fifth message to the server in response to the fourth message, the fifth message comprising information identifying the information about the second attribute of the fourth message;
the second terminal device configured to receive a sixth message from the server, the sixth message comprising information indicating a changeability setting for the second attribute identified in the fourth message;
the second terminal device configured to update how a user interface of the second terminal device for the service is to be displayed in response to the sixth message such that the user interface of the second terminal device is displayable so that the second attribute is identified as being changeable or unchangeable in accordance with the information indicating the changeability setting of the sixth message. 31. The communication system of claim 30, wherein the second terminal device is configured to update how the user interface of the second terminal device for the service is to be displayed in response to the sixth message such that one of:
(i) an identifier for the second attribute is changed from indicating the second attribute is an unchangeable attribute to indicating that the second attribute is a changeable attribute by the user via input that is entered via the user interface in accordance with the information indicating the changeability setting of the sixth message, and (ii) an identifier for the second attribute is changed from indicating the second attribute is a changeable feature that is adjustable by the user via input that is entered via the user interface to indicating that the second attribute is an unchangeable attribute that the user is not permitted to adjust in accordance with the information indicating the changeability setting of the sixth message. 32. The communication system of claim 30 wherein the server is configured to generate information identifying changeability attributes associated with the service that the user of the first terminal device is authorized to change or adjust for the first feature for including within the third message. 33. The communication system of claim 29, wherein the first feature relates to call forwarding. 34. The communication system of claim 29, wherein the first feature is for a telephony service. 35. A terminal device, the terminal device comprising non-transitory memory communicatively connected to a processor unit, the memory having an application that defines a method performed by the terminal device when the application is run by the processor unit, the method comprising:
the terminal device sending a second message to a server that hosts a service in response to receiving a first message from the server, the second message comprising information identifying information about a first attribute for a first feature identified as having been changed in the first message; the terminal device updating how a user interface for a service is to be displayed in response to a third message indicating a changeability setting for the first attribute of the first feature of the service identified in the third message received from the server such that the user interface is displayable so that one of:
(i) an identifier for the first attribute is changed from indicating the first attribute is an unchangeable attribute to indicating that the first attribute is a changeable attribute by the user via input that is entered via the user interface in accordance with the information indicating the changeability setting of the third message, and
(ii) an identifier for the first attribute is changed from indicating the first attribute is a changeable attribute that is adjustable by the user via input that is entered via the user interface to indicating that the first attribute is an unchangeable attribute that the user is not permitted to adjust in accordance with the information indicating the changeability setting of the third message. | A method and apparatus include a terminal device receiving a first message from a server that hosts a service available to the terminal device. The first message includes information about at least one attribute relating to a changeability of that at least one attribute having been changed. The terminal device sends a second message to the server in response to the first message that includes information identifying the information about the at least one attribute of the first message. The terminal device receives a third message from the server that includes information indicating a changeability setting for each of the at least one attribute identified in the first message. The terminal device updates how a user interface for the service is to be displayed so the user interface is displayed with attributes relating to the service being indicated as changeable or unchangeable in accordance with the third message.1-15. (canceled) 16. A method of communicating changeability attribute information comprising:
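The claims above describe a three-message exchange for synchronizing attribute changeability between a server-hosted service and a terminal device's user interface: the server's first message announces that an attribute's changeability has changed, the terminal's second message identifies that attribute back to the server, and the server's third message carries the changeability setting that drives the UI update. The sketch below is a hypothetical Python illustration only, not part of the claims; the message classes, the attribute name, and the `Terminal` helper are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical message types mirroring the three-message exchange of claim 16.
@dataclass
class FirstMessage:           # server -> terminal: an attribute's changeability changed
    attribute: str

@dataclass
class SecondMessage:          # terminal -> server: identifies the attribute from the first message
    attribute: str

@dataclass
class ThirdMessage:           # server -> terminal: the changeability setting for that attribute
    attribute: str
    changeable: bool

class Terminal:
    def __init__(self):
        # attribute -> indicator shown in the user interface
        self.ui = {}

    def on_first_message(self, msg: FirstMessage) -> SecondMessage:
        # Respond with a second message identifying the information in the first message.
        return SecondMessage(attribute=msg.attribute)

    def on_third_message(self, msg: ThirdMessage) -> None:
        # Update how the identifier for the attribute is displayed: changeable attributes
        # are shown as adjustable by the user, unchangeable ones as not adjustable.
        self.ui[msg.attribute] = "changeable" if msg.changeable else "unchangeable"

t = Terminal()
reply = t.on_first_message(FirstMessage("call_forwarding_number"))
t.on_third_message(ThirdMessage(reply.attribute, changeable=True))
print(t.ui)  # {'call_forwarding_number': 'changeable'}
```

The same round trip covers both directions of claim 16's alternative (i)/(ii): a later third message with `changeable=False` would flip the indicator back to unchangeable.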
5,970 | 5,970 | 14,754,406 | 2,166 | A computing system is configured to be coupled to a remote storage system. The remote storage system comprises a key/value store. The computing system is configured to perform transactions on data stored at the remote storage system. The system includes a database client. The database client includes an interface configured to allow a user to request database operations using the database client. The system further includes a database engine coupled to the database client. The database engine is configured to receive requests for database operations from the database client. The database engine is further configured to obtain and operate on transaction state stored as one or more key/value pairs in the key/value store at the remote storage system from the remote storage system. The database engine is configured to transactionally perform data operations, using the transaction state, on one or more data key/value pairs in the key/value store. | 1. A computing system configured to be coupled to a remote storage system, where the remote storage system comprises a key/value store, the computing system being configured to perform transactions on data stored at the remote storage system, the system comprising:
a database client, wherein the database client comprises an interface configured to allow a user to request database operations using the database client; a database engine coupled to the database client, and configured to receive requests for database operations from the database client; wherein the database engine is configured to obtain and operate on transaction state stored as one or more key/value pairs in the key/value store at the remote storage system from the remote storage system; and wherein the database engine is configured to transactionally perform data operations, using the transaction state, on data stored as one or more data key/value pairs in the key/value store. 2. The computing system of claim 1, wherein the database engine is configured to update a write lease field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to indicate that the database engine has acquired a write lease on the one or more data key/value pairs. 3. The computing system of claim 1, wherein the database engine is configured to update a current transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to include an identifier identifying a specific transaction that has begun on data in the one or more data key/value pairs. 4. The computing system of claim 1, wherein the database engine is configured to update an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to add and remove identifiers identifying a specific transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 5. 
The computing system of claim 1, wherein the database engine is configured to commit a transaction on data in the one or more data key/value pairs by updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system from the remote storage system. 6. The computing system of claim 1, wherein the database engine is configured to commit a transaction on data in the one or more data key/value pairs by updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to remove an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 7. The computing system of claim 1, wherein the database engine is configured to determine that a transaction on data in the one or more data key/value pairs should be aborted and rolled back and as a result, prevent updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system, to prevent removal of an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 8. The computing system of claim 1, wherein the database engine is configured to receive a notification that a different computing system has begun a transaction on data in the one or more data key/value pairs, but that the transaction has not been committed or aborted, and as a result, the database engine is configured to use the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system to resume the transaction including performing data operations, using the transaction state, on the one or more data key/value pairs in the key/value store. 9. 
The computing system of claim 1, wherein the database engine is configured to receive a notification that a different computing system has begun a transaction on data in the one or more data key/value pairs, but that the transaction has not been committed or aborted, and as a result, the database engine is configured to use the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system to commit the transaction on data in the one or more data key/value pairs by updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to remove an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 10. The computing system of claim 1, wherein the database engine is configured to receive a notification that a different computing system has begun a transaction on data in the one or more data key/value pairs, but that the transaction has not been committed or aborted, and as a result, the database engine is configured to abort and roll back the transaction by preventing updating of an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system, to prevent removal of an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 11. In a computing environment comprising a plurality of local systems coupled to a remote storage system, where the remote storage system comprises a key/value store, a method of performing a transaction on data stored as one or more data key/value pairs at the remote storage system using a database engine at a local system, the method comprising:
at the local system, receiving a request to perform a transaction on data stored at the remote storage system; obtaining, from the remote storage system, transaction state stored as one or more key/value pairs in the key/value store at the remote storage system; and transactionally performing one or more data operations on the data stored as one or more data key/value pairs in the key/value store by updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system. 12. The method of claim 11, wherein updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system comprises updating a write lease field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to indicate that the database engine has acquired a write lease on the one or more data key/value pairs. 13. The method of claim 11, wherein updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system comprises updating a current transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to include an identifier identifying that the transaction has begun on data in the one or more data key/value pairs. 14. The method of claim 11, wherein updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system comprises updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to at least one of add or remove one or more identifiers identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 15. 
The method of claim 11, further comprising committing the transaction on data in the one or more data key/value pairs by updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system from the remote storage system. 16. The method of claim 11, further comprising committing the transaction on data in the one or more data key/value pairs by removing an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 17. The method of claim 11, further comprising:
determining that the transaction on data in the one or more data key/value pairs should be aborted and rolled back; and as a result, preventing updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system, to prevent removal of an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 18. The method of claim 11, further comprising:
receiving a notification that a different computing system has begun a transaction on data in the one or more data key/value pairs, but that the transaction has not been committed or aborted; and as a result, using the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system to resume the transaction including at least one of performing data operations on the data in the one or more data key/value pairs, aborting the transaction, or committing the transaction. 19. A remote storage system for implementing transactional data storage, the remote storage system comprising:
a data store, wherein the data store comprises a key/value store, wherein the key/value store comprises:
data to be operated on stored as key/value rows of the data store;
transaction state stored as one or more key/value rows of the data store; and
wherein the remote storage system is configured to provide the transaction state stored in the data store to, and receive updated transaction state from a plurality of different local systems that each have database engines that perform transactional database operations on the data store to read or write data to be operated on. 20. The remote storage system of claim 19, wherein the transaction state comprises:
a write lease field stored in one or more key/value rows in the key/value store, the write lease field including an indication that a database engine has acquired a write lease; a current transaction field stored in one or more key/value rows in the key/value store, wherein the current transaction field includes an identifier identifying a specific transaction that has begun; and an uncommitted transaction field in one or more key/value rows in the key/value store, wherein the uncommitted transaction field includes an identifier identifying the specific transaction as being an uncommitted transaction. | A computing system is configured to be coupled to a remote storage system. The remote storage system comprises a key/value store. The computing system is configured to perform transactions on data stored at the remote storage system. The system includes a database client. The database client includes an interface configured to allow a user to request database operations using the database client. The system further includes a database engine coupled to the database client. The database engine is configured to receive requests for database operations from the database client. The database engine is further configured to obtain and operate on transaction state stored as one or more key/value pairs in the key/value store at the remote storage system from the remote storage system. The database engine is configured to transactionally perform data operations, using the transaction state, on one or more data key/value pairs in the key/value store.1. A computing system configured to be coupled to a remote storage system, where the remote storage system comprises a key/value store, the computing system being configured to perform transactions on data stored at the remote storage system, the system comprising:
a database client, wherein the database client comprises an interface configured to allow a user to request database operations using the database client; a database engine coupled to the database client, and configured to receive requests for database operations from the database client; wherein the database engine is configured to obtain and operate on transaction state stored as one or more key/value pairs in the key/value store at the remote storage system from the remote storage system; and wherein the database engine is configured to transactionally perform data operations, using the transaction state, on data stored as one or more data key/value pairs in the key/value store. 2. The computing system of claim 1, wherein the database engine is configured to update a write lease field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to indicate that the database engine has acquired a write lease on the one or more data key/value pairs. 3. The computing system of claim 1, wherein the database engine is configured to update a current transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to include an identifier identifying a specific transaction that has begun on data in the one or more data key/value pairs. 4. The computing system of claim 1, wherein the database engine is configured to update an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to add and remove identifiers identifying a specific transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 5. 
The computing system of claim 1, wherein the database engine is configured to commit a transaction on data in the one or more data key/value pairs by updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system. 6. The computing system of claim 1, wherein the database engine is configured to commit a transaction on data in the one or more data key/value pairs by updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to remove an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 7. The computing system of claim 1, wherein the database engine is configured to determine that a transaction on data in the one or more data key/value pairs should be aborted and rolled back and as a result, prevent updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system, to prevent removal of an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 8. The computing system of claim 1, wherein the database engine is configured to receive a notification that a different computing system has begun a transaction on data in the one or more data key/value pairs, but that the transaction has not been committed or aborted, and as a result, the database engine is configured to use the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system to resume the transaction including performing data operations, using the transaction state, on the one or more data key/value pairs in the key/value store. 9. 
The computing system of claim 1, wherein the database engine is configured to receive a notification that a different computing system has begun a transaction on data in the one or more data key/value pairs, but that the transaction has not been committed or aborted, and as a result, the database engine is configured to use the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system to commit the transaction on data in the one or more data key/value pairs by updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to remove an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 10. The computing system of claim 1, wherein the database engine is configured to receive a notification that a different computing system has begun a transaction on data in the one or more data key/value pairs, but that the transaction has not been committed or aborted, and as a result, the database engine is configured to abort and roll back the transaction by preventing updating of an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system, to prevent removal of an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 11. In a computing environment comprising a plurality of local systems coupled to a remote storage system, where the remote storage system comprises a key/value store, a method of performing a transaction on data stored as one or more data key/value pairs at the remote storage system using a database engine at a local system, the method comprising:
at the local system, receiving a request to perform a transaction on data stored at the remote storage system; obtaining, from the remote storage system, transaction state stored as one or more key/value pairs in the key/value store at the remote storage system; and transactionally performing one or more data operations on the data stored as one or more data key/value pairs in the key/value store by updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system. 12. The method of claim 11, wherein updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system comprises updating a write lease field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to indicate that the database engine has acquired a write lease on the one or more data key/value pairs. 13. The method of claim 11, wherein updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system comprises updating a current transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to include an identifier identifying that the transaction has begun on data in the one or more data key/value pairs. 14. The method of claim 11, wherein updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system comprises updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system to at least one of add or remove one or more identifiers identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 15. 
The method of claim 11, further comprising committing the transaction on data in the one or more data key/value pairs by updating the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system. 16. The method of claim 11, further comprising committing the transaction on data in the one or more data key/value pairs by removing an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 17. The method of claim 11, further comprising:
determining that the transaction on data in the one or more data key/value pairs should be aborted and rolled back; and as a result, preventing updating an uncommitted transaction field in the transaction state stored as one or more key/value rows in the key/value store at the remote storage system, to prevent removal of an identifier identifying the transaction as being an uncommitted transaction on data in the one or more data key/value pairs. 18. The method of claim 11, further comprising:
receiving a notification that a different computing system has begun a transaction on data in the one or more data key/value pairs, but that the transaction has not been committed or aborted; and as a result, using the transaction state stored as one or more key/value pairs in the key/value store at the remote storage system to resume the transaction including at least one of performing data operations on the data in the one or more data key/value pairs, aborting the transaction, or committing the transaction. 19. A remote storage system for implementing transactional data storage, the remote storage system comprising:
a data store, wherein the data store comprises a key/value store, wherein the key/value store comprises:
data to be operated on stored as key/value rows of the data store;
transaction state stored as one or more key/value rows of the data store; and
wherein the remote storage system is configured to provide the transaction state stored in the data store to, and receive updated transaction state from, a plurality of different local systems that each have database engines that perform transactional database operations on the data store to read or write data to be operated on. 20. The remote storage system of claim 19, wherein the transaction state comprises:
a write lease field stored in one or more key/value rows in the key/value store, the write lease field including an indication that a database engine has acquired a write lease; a current transaction field stored in one or more key/value rows in the key/value store, wherein the current transaction field includes an identifier identifying a specific transaction that has begun; and an uncommitted transaction field in one or more key/value rows in the key/value store, wherein the uncommitted transaction field includes an identifier identifying the specific transaction as being an uncommitted transaction. | 2,100 |
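The transaction-state fields recited in claims 19-20 above (a write lease field, a current transaction field, and an uncommitted transaction field held as key/value rows) can be sketched in a few lines. This is a minimal, hypothetical illustration: a plain Python dict stands in for the remote key/value store, and the field and method names (`write_lease`, `current_txn`, `uncommitted`, `begin`, `commit`, `abort`) are illustrative, not drawn from the patent.

```python
# Hypothetical sketch: a dict stands in for the remote key/value store.
# Field names are illustrative only.

class KVTransactionState:
    def __init__(self, store):
        self.store = store                      # "remote" key/value store
        store.setdefault("write_lease", None)   # which engine holds the lease
        store.setdefault("current_txn", None)   # identifier of the txn that has begun
        store.setdefault("uncommitted", set())  # identifiers of uncommitted txns

    def begin(self, engine_id, txn_id):
        # Acquire the write lease and record the transaction as begun and
        # uncommitted, mirroring the field updates in claims 2-4.
        self.store["write_lease"] = engine_id
        self.store["current_txn"] = txn_id
        self.store["uncommitted"].add(txn_id)

    def commit(self, txn_id):
        # Committing removes the identifier from the uncommitted field
        # (claim 6); until then any engine can see the txn is in flight.
        self.store["uncommitted"].discard(txn_id)

    def abort(self, txn_id):
        # Abort/rollback deliberately does NOT remove the identifier
        # (claim 7), so the transaction remains visibly uncommitted and
        # recovery logic can discard its writes.
        pass

store = {}
state = KVTransactionState(store)
state.begin("engine-A", "txn-1")
state.commit("txn-1")
```

Because all state lives in the shared store rather than in the engine, a different local system can pick up `current_txn` and `uncommitted` and resume, commit, or abort an in-flight transaction, which is the behavior claims 8-10 describe.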
5,971 | 5,971 | 15,063,515 | 2,191 | The current document is directed to an integrated cloud-management facility, or subsystem, that incorporates an automated-application-deployment-facility integrator that incorporates one or more automated-application-deployment facilities into the cloud-management facility. The automated-application-deployment-facility integrator allows users of the cloud-management facility to access one or more automated-application-deployment facilities within the context of the cloud-management facility. The automated-application-deployment-facility integrator provides to system managers and administrators, through the cloud-management facility, a wider range of functionalities and capabilities than is provided by a cloud-management facility that includes only a single automated-application-deployment facility, or subsystem. | 1. A workflow-based cloud-management system incorporated within a cloud-computing facility having multiple servers, data-storage devices, and one or more internal networks, the workflow-based cloud-management system comprising:
an automated-application-release-management subsystem; an infrastructure-management-and-administration subsystem; an automated-application-deployment integrator that includes
an application programming interface,
a service-provider interface,
entrypoint-translation components, and
entrypoint-implementation components; and
one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 2. The workflow-based cloud-management system of claim 1 wherein the automated-application-release-management subsystem comprises:
a dashboard user interface;
an automated-application-release-management controller;
an interface to a workflow-execution engine; and
an artifact-storage-and-management subsystem. 3. The workflow-based cloud-management system of claim 1 wherein the infrastructure-management-and-administration subsystem comprises:
one or more user service catalogs;
one or more services and corresponding service implementations;
fabrics that abstract virtual cloud-computing-facility resources, including virtual processors, virtual networks, and virtual data-storage devices that, in turn, abstract physical cloud-computing-facility resources, including processors, networks, and data-storage devices; and
a graphical user interface through which users, including system administrators, distribute resources and services to users and groups of users, configure the cloud-computing-facility, and manage the cloud-computing-facility. 4. The workflow-based cloud-management system of claim 1 wherein the automated-application-release-management subsystem and the infrastructure-management-and-administration subsystem include control logic at least partially implemented as workflows that are executed by a workflow-execution-engine subsystem. 5. The workflow-based cloud-management system of claim 1 wherein the application-deployment subsystems each comprises an integrated application-deployment facility that:
accesses one or more artifact repositories that store and logically organize binary files and other artifacts used to build complex cloud-resident applications; and
accesses automated tools that are used, along with workflows, to develop specific automated application-deployment tools that deploy cloud-resident applications within the cloud-computing facility. 6. The workflow-based cloud-management system of claim 1 wherein the application programming interface of the automated-application-deployment integrator includes entrypoints that represent access points to the application-deployment functionality, or a subset of the application-deployment functionality, access to which is provided by the entrypoints in the application programming interfaces of the one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 7. The workflow-based cloud-management system of claim 6 wherein the application programming interface of the automated-application-deployment integrator includes entrypoints that represent access points to functionality implemented, at least in part, within the automated-application-deployment integrator. 8. The workflow-based cloud-management system of claim 7 wherein a portion of the functionality implemented, at least in part, within the automated-application-deployment integrator is not implemented in any of the one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 9. The workflow-based cloud-management system of claim 7 wherein a portion of the functionality implemented, at least in part, within the automated-application-deployment integrator is implemented in only a proper subset of two or more of the automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 10. 
The workflow-based cloud-management system of claim 1 wherein the service-provider interface of the automated-application-deployment integrator includes those entrypoints of the application programming interfaces of the one or more automated-application-deployment subsystems that are called by the entrypoint-translation components and entrypoint-implementation components of the automated-application-deployment integrator. 11. The workflow-based cloud-management system of claim 1 wherein the workflow-based cloud-management system provides an automated-application-deployment-subsystem selection feature using which a system administrator selects a particular automated-application-deployment subsystem for supporting operation of the automated-application-release-management subsystem and infrastructure-management-and-administration subsystem of the workflow-based cloud-management system. 12. A method that interfaces two or more automated-application-deployment subsystems to an automated-application-release-management-subsystem component and an infrastructure-management-and-administration-subsystem component of a workflow-based cloud-management system incorporated within a cloud-computing facility having multiple servers, data-storage devices, and one or more internal networks, the method comprising:
installing, within the cloud-computing facility, a workflow-based cloud-management system that includes the automated-application-release-management subsystem, the infrastructure-management-and-administration subsystem, and an automated-application-deployment integrator; and selecting one of two or more automated-application-deployment subsystems to support operation of the workflow-based cloud-management system. 13. The method of claim 12 wherein the two or more application-deployment subsystems each comprises an integrated application-deployment facility that:
accesses one or more artifact repositories that store and logically organize binary files and other artifacts used to build complex cloud-resident applications; and
accesses automated tools that are used, along with workflows, to develop specific automated application-deployment tools that deploy cloud-resident applications within the cloud-computing facility. 14. The method of claim 12 wherein the automated-application-deployment integrator comprises:
an application programming interface,
a service-provider interface,
entrypoint-translation components, and
entrypoint-implementation components. 15. The method of claim 14 wherein the application programming interface of the automated-application-deployment integrator includes entrypoints that represent access points to the application-deployment functionality, or a subset of the application-deployment functionality, access to which is provided by the entrypoints in the application programming interfaces of the one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 16. The method of claim 15 wherein the application programming interface of the automated-application-deployment integrator includes entrypoints that represent access points to functionality implemented, at least in part, within the automated-application-deployment integrator. 17. The method of claim 16 wherein a portion of the functionality implemented, at least in part, within the automated-application-deployment integrator is not implemented in any of the one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 18. The method of claim 16 wherein a portion of the functionality implemented, at least in part, within the automated-application-deployment integrator is implemented in only a proper subset of two or more of the automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 19. The method of claim 14 wherein the service-provider interface of the automated-application-deployment integrator includes those entrypoints of the application programming interfaces of the one or more automated-application-deployment subsystems that are called by the entrypoint-translation components and entrypoint-implementation components of the automated-application-deployment integrator. 20. 
Computer instructions, stored within one or more physical data-storage devices, that, when executed on one or more processors within a cloud-computing facility having multiple servers, data-storage devices, and one or more internal networks, control the cloud-computing facility to provide application-deployment functionality, through an automated-application-deployment-integrator component of a workflow-based cloud-management system incorporated within the workflow-based cloud-management system, the automated-application-deployment-integrator component comprising:
an application programming interface, a service-provider interface, entrypoint-translation components, and entrypoint-implementation components. | The current document is directed to an integrated cloud-management facility, or subsystem, that incorporates an automated-application-deployment-facility integrator that incorporates one or more automated-application-deployment facilities into the cloud-management facility. The automated-application-deployment-facility integrator allows users of the cloud-management facility to access one or more automated-application-deployment facilities within the context of the cloud-management facility. The automated-application-deployment-facility integrator provides to system managers and administrators, through the cloud-management facility, a wider range of functionalities and capabilities than is provided by a cloud-management facility that includes only a single automated-application-deployment facility, or subsystem.1. A workflow-based cloud-management system incorporated within a cloud-computing facility having multiple servers, data-storage devices, and one or more internal networks, the workflow-based cloud-management system comprising:
an automated-application-release-management subsystem; an infrastructure-management-and-administration subsystem; an automated-application-deployment integrator that includes
an application programming interface,
a service-provider interface,
entrypoint-translation components, and
entrypoint-implementation components; and
one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 2. The workflow-based cloud-management system of claim 1 wherein the automated-application-release-management subsystem comprises:
a dashboard user interface;
an automated-application-release-management controller;
an interface to a workflow-execution engine; and
an artifact-storage-and-management subsystem. 3. The workflow-based cloud-management system of claim 1 wherein the infrastructure-management-and-administration subsystem comprises:
one or more user service catalogs;
one or more services and corresponding service implementations;
fabrics that abstract virtual cloud-computing-facility resources, including virtual processors, virtual networks, and virtual data-storage devices that, in turn, abstract physical cloud-computing-facility resources, including processors, networks, and data-storage devices; and
a graphical user interface through which users, including system administrators, distribute resources and services to users and groups of users, configure the cloud-computing-facility, and manage the cloud-computing-facility. 4. The workflow-based cloud-management system of claim 1 wherein the automated-application-release-management subsystem and the infrastructure-management-and-administration subsystem include control logic at least partially implemented as workflows that are executed by a workflow-execution-engine subsystem. 5. The workflow-based cloud-management system of claim 1 wherein the application-deployment subsystems each comprises an integrated application-deployment facility that:
accesses one or more artifact repositories that store and logically organize binary files and other artifacts used to build complex cloud-resident applications; and
accesses automated tools that are used, along with workflows, to develop specific automated application-deployment tools that deploy cloud-resident applications within the cloud-computing facility. 6. The workflow-based cloud-management system of claim 1 wherein the application programming interface of the automated-application-deployment integrator includes entrypoints that represent access points to the application-deployment functionality, or a subset of the application-deployment functionality, access to which is provided by the entrypoints in the application programming interfaces of the one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 7. The workflow-based cloud-management system of claim 6 wherein the application programming interface of the automated-application-deployment integrator includes entrypoints that represent access points to functionality implemented, at least in part, within the automated-application-deployment integrator. 8. The workflow-based cloud-management system of claim 7 wherein a portion of the functionality implemented, at least in part, within the automated-application-deployment integrator is not implemented in any of the one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 9. The workflow-based cloud-management system of claim 7 wherein a portion of the functionality implemented, at least in part, within the automated-application-deployment integrator is implemented in only a proper subset of two or more of the automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 10. 
The workflow-based cloud-management system of claim 1 wherein the service-provider interface of the automated-application-deployment integrator includes those entrypoints of the application programming interfaces of the one or more automated-application-deployment subsystems that are called by the entrypoint-translation components and entrypoint-implementation components of the automated-application-deployment integrator. 11. The workflow-based cloud-management system of claim 1 wherein the workflow-based cloud-management system provides an automated-application-deployment-subsystem selection feature using which a system administrator selects a particular automated-application-deployment subsystem for supporting operation of the automated-application-release-management subsystem and infrastructure-management-and-administration subsystem of the workflow-based cloud-management system. 12. A method that interfaces two or more automated-application-deployment subsystems to an automated-application-release-management-subsystem component and an infrastructure-management-and-administration-subsystem component of a workflow-based cloud-management system incorporated within a cloud-computing facility having multiple servers, data-storage devices, and one or more internal networks, the method comprising:
installing, within the cloud-computing facility, a workflow-based cloud-management system that includes the automated-application-release-management subsystem, the infrastructure-management-and-administration subsystem, and an automated-application-deployment integrator; and selecting one of two or more automated-application-deployment subsystems to support operation of the workflow-based cloud-management system. 13. The method of claim 12 wherein the two or more application-deployment subsystems each comprises an integrated application-deployment facility that:
accesses one or more artifact repositories that store and logically organize binary files and other artifacts used to build complex cloud-resident applications; and
accesses automated tools that are used, along with workflows, to develop specific automated application-deployment tools that deploy cloud-resident applications within the cloud-computing facility. 14. The method of claim 12 wherein the automated-application-deployment integrator comprises:
an application programming interface,
a service-provider interface,
entrypoint-translation components, and
entrypoint-implementation components. 15. The method of claim 14 wherein the application programming interface of the automated-application-deployment integrator includes entrypoints that represent access points to the application-deployment functionality, or a subset of the application-deployment functionality, access to which is provided by the entrypoints in the application programming interfaces of the one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 16. The method of claim 15 wherein the application programming interface of the automated-application-deployment integrator includes entrypoints that represent access points to functionality implemented, at least in part, within the automated-application-deployment integrator. 17. The method of claim 16 wherein a portion of the functionality implemented, at least in part, within the automated-application-deployment integrator is not implemented in any of the one or more automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 18. The method of claim 16 wherein a portion of the functionality implemented, at least in part, within the automated-application-deployment integrator is implemented in only a proper subset of two or more of the automated-application-deployment subsystems that interface to the service-provider interface of the automated-application-deployment integrator. 19. The method of claim 14 wherein the service-provider interface of the automated-application-deployment integrator includes those entrypoints of the application programming interfaces of the one or more automated-application-deployment subsystems that are called by the entrypoint-translation components and entrypoint-implementation components of the automated-application-deployment integrator. 20. 
Computer instructions, stored within one or more physical data-storage devices, that, when executed on one or more processors within a cloud-computing facility having multiple servers, data-storage devices, and one or more internal networks, control the cloud-computing facility to provide application-deployment functionality, through an automated-application-deployment-integrator component of a workflow-based cloud-management system incorporated within the workflow-based cloud-management system, the automated-application-deployment-integrator component comprising:
an application programming interface, a service-provider interface, entrypoint-translation components, and entrypoint-implementation components. | 2,100 |
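The integrator pattern in the claims above — a uniform application programming interface whose entrypoints are either translated to a selected deployment subsystem's own entrypoints through a service-provider interface, or implemented inside the integrator itself — can be sketched briefly. This is a hypothetical illustration; the class and method names (`DeploymentSubsystem`, `select`, `deploy_application`, `deployment_status`) are invented for the sketch and do not come from the patent.

```python
# Hypothetical sketch of an automated-application-deployment integrator.

class DeploymentSubsystem:
    """Service-provider interface a concrete deployment subsystem implements."""
    def deploy(self, app): ...
    def undeploy(self, app): ...

class SubsystemA(DeploymentSubsystem):
    def deploy(self, app):
        return f"A deployed {app}"
    def undeploy(self, app):
        return f"A undeployed {app}"

class DeploymentIntegrator:
    def __init__(self, subsystems):
        self.subsystems = subsystems  # registered subsystems, keyed by name
        self.selected = None

    def select(self, name):
        # An administrator selects which subsystem backs the API (claim 11).
        self.selected = self.subsystems[name]

    # Entrypoint-translation component: forwards an API entrypoint to the
    # corresponding entrypoint of the selected subsystem (claims 6, 10).
    def deploy_application(self, app):
        return self.selected.deploy(app)

    # Entrypoint-implementation component: functionality implemented within
    # the integrator itself rather than in any subsystem (claims 7-8).
    def deployment_status(self, app):
        return {"application": app, "backend": type(self.selected).__name__}

integrator = DeploymentIntegrator({"A": SubsystemA()})
integrator.select("A")
result = integrator.deploy_application("web-app")
```

Because callers see only the integrator's API, swapping `SubsystemA` for another registered subsystem changes the deployment backend without touching the release-management or infrastructure-administration code, which is the point of the selection feature in claims 11-12.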
5,972 | 5,972 | 15,878,708 | 2,191 | The current document is directed to an automated-application-release-management system that organizes and manages the application-development and application-release processes to allow for continuous application development and release. The current document is particularly directed to implementations in which the automated application-release-management subsystem provides code-change ratings and developer ratings used throughout the code-change-submission-to-acceptance process. Code-change ratings and developer ratings are used to tailor tasks and control flow within the code-change-submission-to-acceptance process in order to respond to particular characteristics of code changes and developers. | 1. An automated-application-release-management subsystem within a cloud-computing facility having multiple servers, data-storage devices, and one or more internal networks, the automated-application-release-management subsystem comprising:
a dashboard user interface; an automated-application-release-management controller; an interface to a workflow-execution engine within the cloud-computing facility; an artifact-storage-and-management subsystem; and code-change ratings that are generated, stored, and shared during code-change processing by the automated-application-release-management subsystem. 2. The automated-application-release-management subsystem of claim 1 that is further incorporated in a workflow-based cloud-management system that additionally includes an infrastructure-management-and-administration subsystem and the workflow-execution engine. 3. The automated-application-release-management subsystem of claim 1 wherein the automated-application-release-management controller controls execution of application-release-management pipelines, each application-release-management pipeline representing a sequence of tasks carried out by the automated-application-release-management subsystem to generate a releasable version of an application. 4. The automated-application-release-management subsystem of claim 3 wherein each application-release-management pipeline comprises one or more stages. 5. The automated-application-release-management subsystem of claim 4 wherein each application-release-management-pipeline stage comprises:
a set of one or more tasks; and
a plug-in framework that maps entrypoints in the tasks to entrypoints within sets of routine and/or function entrypoints in descriptors within the set of sets of descriptors. 6. The automated-application-release-management subsystem of claim 5 wherein the tasks include tasks of task types selected from among:
initialization tasks;
deployment tasks;
run-tests tasks;
gating-rule tasks; and
finalize tasks. 7. The automated-application-release-management subsystem of claim 4 wherein the automated-application-release-management subsystem:
generates a code-change-rating data structure when a submitted code change is first received; and
stores the generated code-change-rating data structure in a data store. 8. The automated-application-release-management subsystem of claim 7 wherein, during processing of a code change, each stage of the application-release-management pipeline following an initial stage that generates the code-change-rating data structure for the code change:
retrieves the code-change-rating data structure from the data store;
uses information contained in the code-change-rating data structure to control execution of one or more tasks within the stage;
modifies the code-change-rating data structure; and
updates the code-change-rating data structure in the data store to contain the modified code-change-rating data structure. 9. The automated-application-release-management subsystem of claim 8 wherein the code-change-rating data structure includes:
a field storing an identifier for the code change; and
a field including a current score for the code change. 10. The automated-application-release-management subsystem of claim 9 wherein the change-rating data structure additionally includes information about the processing history carried out by each stage, including an indication of a number of times the stage has received and processed the code change and a stage rating for the code change. 11. The automated-application-release-management subsystem of claim 7 wherein the automated-application-release-management subsystem:
generates a developer-rating data structure when a developer is first made known to the automated-application-release-management subsystem; and
stores the developer-rating data structure in the data store. 12. The automated-application-release-management subsystem of claim 11 wherein, during processing of a code change, each stage of the application-release-management pipeline:
retrieves the developer-rating data structure for the developer who submitted the code change from the data store; and
uses information contained in the developer-rating data structure to control execution of one or more tasks within the stage. 13. The automated-application-release-management subsystem of claim 12 wherein, during processing of a code change, a final stage of the application-release-management pipeline, following acceptance of the code change:
modifies the developer-rating data structure; and
updates the developer-rating data structure in the data store to contain the modified developer-rating data structure. 14. The automated-application-release-management subsystem of claim 11 wherein the developer-rating data structure includes:
a field storing an identifier for the developer; and
a field including a current rating for the developer. 15. A method that maintains code-change and developer ratings within an automated-application-release-management subsystem that includes one or more application-release-management pipelines, each application-release-management pipeline representing a sequence of tasks carried out by the automated-application-release-management subsystem to generate a releasable version of an application and each application-release-management pipeline comprises one or more stages, the automated-application-release-management subsystem operating within a computing facility having multiple servers, data-storage devices, and one or more internal networks, the method comprising:
generating a code-change-rating data structure when a submitted code change is first received; storing the generated code-change-rating data structure in a data store; generating a developer-rating data structure when a developer is first made known to the automated-application-release-management subsystem; storing the developer-rating data structure in the data store; and during processing of a code change, each stage of the application-release-management pipeline following an initial stage that generates the code-change-rating data structure for the code change
retrieves the code-change-rating data structure from the data store,
retrieves the developer-rating data structure for the developer who submitted the code change from the data store,
uses information contained in one or both of the code-change-rating data structure and the developer-rating data structure to control execution of one or more tasks within the stage,
modifies the code-change-rating data structure; and
updates the code-change-rating data structure in the data store to contain the modified code-change-rating data structure. 16. The method of claim 15 wherein, during processing of a code change, a final stage of the application-release-management pipeline, following acceptance of the code change:
modifies the developer-rating data structure; and
updates the developer-rating data structure in the data store to contain the modified developer-rating data structure. 17. The method of claim 15 wherein the code-change-rating data structure includes:
a field storing an identifier for the code change; and
a field including a current score for the code change. 18. The method of claim 17 wherein the change-rating data structure additionally includes information about the processing history carried out by each stage, including an indication of a number of times the stage has received and processed the code change and a stage rating for the code change. 19. The method of claim 15 wherein the developer-rating data structure includes:
a field storing an identifier for the developer; and
a field including a current rating for the developer. 20. Computer instructions, stored within one or more physical data-storage devices, that, when executed on one or more processors within a computing facility having multiple servers, data-storage devices, and one or more internal networks, control the computing facility to maintain code-change and developer ratings within an automated-application-release-management subsystem, operating within the computing facility, that includes one or more application-release-management pipelines, each application-release-management pipeline representing a sequence of tasks carried out by the automated-application-release-management subsystem to generate a releasable version of an application and each application-release-management pipeline comprises one or more stages, by:
generating a code-change-rating data structure when a submitted code change is first received; storing the generated code-change-rating data structure in a data store; generating a developer-rating data structure when a developer is first made known to the automated-application-release-management subsystem; storing the developer-rating data structure in the data store; and during processing of a code change, each stage of the application-release-management pipeline following an initial stage that generates the code-change-rating data structure for the code change
retrieves the code-change-rating data structure from the data store,
retrieves the developer-rating data structure for the developer who submitted the code change from the data store,
uses information contained in one or both of the code-change-rating data structure and the developer-rating data structure to control execution of one or more tasks within the stage,
modifies the code-change-rating data structure; and
updates the code-change-rating data structure in the data store to contain the modified code-change-rating data structure. | The current document is directed to an automated-application-release-management system that organizes and manages the application-development and application-release processes to allow for continuous application development and release. The current document is particularly directed to implementations in which the automated application-release-management subsystem provides code-change ratings and developer ratings used throughout the code-change-submission-to-acceptance process. Code-change ratings and developer ratings are used to tailor tasks and control flow within the code-change-submission-to-acceptance process in order to respond to particular characteristics of code changes and developers.1. An automated-application-release-management subsystem within a cloud-computing facility having multiple servers, data-storage devices, and one or more internal networks, the automated-application-release-management subsystem comprising:
a dashboard user interface; an automated-application-release-management controller; an interface to a workflow-execution engine within the cloud-computing facility; an artifact-storage-and-management subsystem; and code-change ratings that are generated, stored, and shared during code-change processing by the automated-application-release-management subsystem. 2. The automated-application-release-management subsystem of claim 1 that is further incorporated in a workflow-based cloud-management system that additionally includes an infrastructure-management-and-administration subsystem and the workflow-execution engine. 3. The automated-application-release-management subsystem of claim 1 wherein the automated-application-release-management controller controls execution of application-release-management pipelines, each application-release-management pipeline representing a sequence of tasks carried out by the automated-application-release-management subsystem to generate a releasable version of an application. 4. The automated-application-release-management subsystem of claim 3 wherein each application-release-management pipeline comprises one or more stages. 5. The automated-application-release-management subsystem of claim 4 wherein each application-release-management-pipeline stage comprises:
a set of one or more tasks; and
a plug-in framework that maps entrypoints in the tasks to entrypoints within sets of routine and/or function entrypoints in descriptors within the set of sets of descriptors. 6. The automated-application-release-management subsystem of claim 5 wherein the tasks include tasks of task types selected from among:
initialization tasks;
deployment tasks;
run-tests tasks;
gating-rule tasks; and
finalize tasks. 7. The automated-application-release-management subsystem of claim 4 wherein the automated-application-release-management subsystem:
generates a code-change-rating data structure when a submitted code change is first received; and
stores the generated code-change-rating data structure in a data store. 8. The automated-application-release-management subsystem of claim 7 wherein, during processing of a code change, each stage of the application-release-management pipeline following an initial stage that generates the code-change-rating data structure for the code change:
retrieves the code-change-rating data structure from the data store;
uses information contained in the code-change-rating data structure to control execution of one or more tasks within the stage;
modifies the code-change-rating data structure; and
updates the code-change-rating data structure in the data store to contain the modified code-change-rating data structure. 9. The automated-application-release-management subsystem of claim 8 wherein the code-change-rating data structure includes:
a field storing an identifier for the code change; and
a field including a current score for the code change. 10. The automated-application-release-management subsystem of claim 9 wherein the change-rating data structure additionally includes information about the processing history carried out by each stage, including an indication of a number of times the stage has received and processed the code change and a stage rating for the code change. 11. The automated-application-release-management subsystem of claim 7 wherein the automated-application-release-management subsystem:
generates a developer-rating data structure when a developer is first made known to the automated-application-release-management subsystem; and
stores the developer-rating data structure in the data store. 12. The automated-application-release-management subsystem of claim 11 wherein, during processing of a code change, each stage of the application-release-management pipeline:
retrieves the developer-rating data structure for the developer who submitted the code change from the data store; and
uses information contained in the developer-rating data structure to control execution of one or more tasks within the stage. 13. The automated-application-release-management subsystem of claim 12 wherein, during processing of a code change, a final stage of the application-release-management pipeline, following acceptance of the code change:
modifies the developer-rating data structure; and
updates the developer-rating data structure in the data store to contain the modified developer-rating data structure. 14. The automated-application-release-management subsystem of claim 11 wherein the developer-rating data structure includes:
a field storing an identifier for the developer; and
a field including a current rating for the developer. 15. A method that maintains code-change and developer ratings within an automated-application-release-management subsystem that includes one or more application-release-management pipelines, each application-release-management pipeline representing a sequence of tasks carried out by the automated-application-release-management subsystem to generate a releasable version of an application and each application-release-management pipeline comprises one or more stages, the automated-application-release-management subsystem operating within a computing facility having multiple servers, data-storage devices, and one or more internal networks, the method comprising:
generating a code-change-rating data structure when a submitted code change is first received; storing the generated code-change-rating data structure in a data store; generating a developer-rating data structure when a developer is first made known to the automated-application-release-management subsystem; storing the developer-rating data structure in the data store; and during processing of a code change, each stage of the application-release-management pipeline following an initial stage that generates the code-change-rating data structure for the code change
retrieves the code-change-rating data structure from the data store,
retrieves the developer-rating data structure for the developer who submitted the code change from the data store,
uses information contained in one or both of the code-change-rating data structure and the developer-rating data structure to control execution of one or more tasks within the stage,
modifies the code-change-rating data structure; and
updates the code-change-rating data structure in the data store to contain the modified code-change-rating data structure. 16. The method of claim 15 wherein, during processing of a code change, a final stage of the application-release-management pipeline, following acceptance of the code change:
modifies the developer-rating data structure; and
updates the developer-rating data structure in the data store to contain the modified developer-rating data structure. 17. The method of claim 15 wherein the code-change-rating data structure includes:
a field storing an identifier for the code change; and
a field including a current score for the code change. 18. The method of claim 17 wherein the change-rating data structure additionally includes information about the processing history carried out by each stage, including an indication of a number of times the stage has received and processed the code change and a stage rating for the code change. 19. The method of claim 15 wherein the developer-rating data structure includes:
a field storing an identifier for the developer; and
a field including a current rating for the developer. 20. Computer instructions, stored within one or more physical data-storage devices, that, when executed on one or more processors within a computing facility having multiple servers, data-storage devices, and one or more internal networks, control the computing facility to maintain code-change and developer ratings within an automated-application-release-management subsystem, operating within the computing facility, that includes one or more application-release-management pipelines, each application-release-management pipeline representing a sequence of tasks carried out by the automated-application-release-management subsystem to generate a releasable version of an application and each application-release-management pipeline comprises one or more stages, by:
generating a code-change-rating data structure when a submitted code change is first received; storing the generated code-change-rating data structure in a data store; generating a developer-rating data structure when a developer is first made known to the automated-application-release-management subsystem; storing the developer-rating data structure in the data store; and during processing of a code change, each stage of the application-release-management pipeline following an initial stage that generates the code-change-rating data structure for the code change
retrieves the code-change-rating data structure from the data store,
retrieves the developer-rating data structure for the developer who submitted the code change from the data store,
uses information contained in one or both of the code-change-rating data structure and the developer-rating data structure to control execution of one or more tasks within the stage,
modifies the code-change-rating data structure; and
updates the code-change-rating data structure in the data store to contain the modified code-change-rating data structure. | 2,100 |
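Claims 7-10 of the record above (application 15,878,708) describe a code-change-rating data structure holding an identifier, a current score, and per-stage processing history, which each pipeline stage retrieves from a data store, uses, modifies, and writes back. The sketch below is not from the patent: all class and function names are hypothetical, and an in-memory dict stands in for the claimed data store; it only illustrates that retrieve-use-modify-update cycle.

```python
from dataclasses import dataclass, field

@dataclass
class CodeChangeRating:
    """Per-code-change record: an identifier and a current score (claim 9),
    plus per-stage history of (times processed, last stage rating) (claim 10)."""
    change_id: str
    score: float = 0.0
    stage_history: dict = field(default_factory=dict)

class RatingStore:
    """Toy in-memory stand-in for the patent's 'data store'."""
    def __init__(self):
        self._store = {}

    def put(self, rating: CodeChangeRating) -> None:
        self._store[rating.change_id] = rating

    def get(self, change_id: str) -> CodeChangeRating:
        return self._store[change_id]

def run_stage(store: RatingStore, change_id: str, stage: str, delta: float) -> None:
    """Claim 8 flow: retrieve the rating, use/modify it, and update the store."""
    rating = store.get(change_id)                    # retrieve from the data store
    rating.score += delta                            # modify based on stage outcome
    seen, _ = rating.stage_history.get(stage, (0, 0.0))
    rating.stage_history[stage] = (seen + 1, delta)  # track times seen + stage rating
    store.put(rating)                                # write the modified record back
```

A real subsystem would persist the record and gate pipeline tasks on the score; this sketch only shows the data-structure lifecycle the claims enumerate.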
5,973 | 5,973 | 15,650,230 | 2,135 | A method for managing a variable caching structure for managing storage for a processor. The method includes using a multi-way tag array to store a plurality of pointers for a corresponding plurality of different size groups of physical storage of a storage stack, wherein the pointers indicate guest addresses that have corresponding converted native addresses stored within the storage stack, and allocating a group of storage blocks of the storage stack, wherein the size of the allocation is in accordance with a corresponding size of one of the plurality of different size groups. Upon a hit on the tag, a corresponding entry is accessed to retrieve a pointer that indicates where in the storage stack a corresponding group of storage blocks of converted native instructions reside. The converted native instructions are then fetched from the storage stack for execution. | 1. A method for caching storage management to support guest instruction to native instruction conversion, the method comprising:
accessing a set of guest instructions from guest code; translating the set of guest instructions into a set of native instructions; storing the set of native instructions in a native instruction cache; storing mappings between an identifier based on a guest address for the set of guest instructions and an identifier based on a native address for the set of native instructions, the mappings being stored in a conversion look aside buffer; and storing the set of native instructions in a hierarchical cache where the set of native instructions are removed from the hierarchical cache based on a least recently used replacement scheme. 2. The method of claim 1, wherein the hierarchical cache is implemented as a physical stack with blocks of native instructions added having variable sizes. 3. The method of claim 1, further comprising:
indexing the conversion look aside buffer using a portion of the guest address to identify a location in the hierarchical cache with the set of native instructions. 4. The method of claim 1, wherein each mapping in the conversion look aside buffer includes a translated block range. 5. The method of claim 1, wherein each mapping in the conversion look aside buffer includes management bits to track frequency of hits. 6. The method of claim 1 wherein each mapping in the conversion look aside buffer includes dynamic branch bias bits to track branch prediction accuracy for branch instructions in the set of native instructions. 7. The method of claim 1, wherein the guest address is sub-divided into tags and the conversion look aside buffer is structured as a multi-way tag array. 8. A device to convert guest instructions to native instructions, the device implementing a method for caching storage management to support guest instruction to native instruction conversion, the device comprising:
fetching logic to access a set of guest instructions from guest code; conversion logic to translate the set of guest instructions into a set of native instructions, to store the set of native instructions in a native instruction cache, and to store mappings between an identifier based on a guest address for the set of guest instructions and an identifier based on a native address for the set of native instructions, the mappings being stored in a conversion look aside buffer; the conversion look aside buffer to store the mappings; and a hierarchical cache to store the set of native instructions where the set of native instructions are removed from the hierarchical cache based on a least recently used replacement scheme. 9. The device of claim 8, wherein the hierarchical cache is implemented as a physical stack with blocks of native instructions added having variable sizes. 10. The device of claim 8, wherein the conversion look aside buffer is indexed using a portion of the guest address to identify a location in the hierarchical cache with the set of native instructions. 11. The device of claim 8, wherein each mapping in the conversion look aside buffer includes a translated block range. 12. The device of claim 8, wherein each mapping in the conversion look aside buffer includes management bits to track frequency of hits. 13. The device of claim 8 wherein each mapping in the conversion look aside buffer includes dynamic branch bias bits to track branch prediction accuracy for branch instructions in the set of native instructions. 14. The device of claim 8, wherein the guest address is sub-divided into tags and the conversion look aside buffer is structured as a multi-way tag array. 15. A system supporting guest instruction to native instruction conversion, the system including caching storage management to support the guest instruction to native instruction conversion, the system comprising:
a processor to execute native instructions; a pipeline coupled to the processor to implement the guest instruction to native instruction conversion, the pipeline to access a set of guest instructions from guest code, to translate the set of guest instructions into a set of native instructions, to store the set of native instructions in a native instruction cache, and to store mappings between an identifier based on a guest address for the set of guest instructions and an identifier based on a native address for the set of native instructions, the mappings being stored in a conversion look aside buffer, the conversion look aside buffer to store the mappings; and a hierarchical cache to store the set of native instructions where the set of native instructions are removed from the hierarchical cache based on a least recently used replacement scheme. 16. The system of claim 15, wherein the hierarchical cache is implemented as a physical stack with blocks of native instructions added having variable sizes. 17. The system of claim 15, wherein the conversion look aside buffer is indexed using a portion of the guest address to identify a location in the hierarchical cache with the set of native instructions. 18. The system of claim 15, wherein each mapping in the conversion look aside buffer includes a translated block range. 19. The system of claim 15, wherein each mapping in the conversion look aside buffer includes management bits to track frequency of hits. 20. The system of claim 15, wherein each mapping in the conversion look aside buffer includes dynamic branch bias bits to track branch prediction accuracy for branch instructions in the set of native instructions. | A method for managing a variable caching structure for managing storage for a processor.
The method includes using a multi-way tag array to store a plurality of pointers for a corresponding plurality of different size groups of physical storage of a storage stack, wherein the pointers indicate guest addresses that have corresponding converted native addresses stored within the storage stack, and allocating a group of storage blocks of the storage stack, wherein the size of the allocation is in accordance with a corresponding size of one of the plurality of different size groups. Upon a hit on the tag, a corresponding entry is accessed to retrieve a pointer that indicates where in the storage stack a corresponding group of storage blocks of converted native instructions reside. The converted native instructions are then fetched from the storage stack for execution.1. A method for caching storage management to support guest instruction to native instruction conversion, the method comprising:
accessing a set of guest instructions from guest code; translating the set of guest instructions into a set of native instructions; storing the set of native instructions in a native instruction cache; storing mappings between an identifier based on a guest address for the set of guest instructions and an identifier based on a native address for the set of native instructions, the mappings being stored in a conversion look aside buffer; and storing the set of native instructions in a hierarchical cache where the set of native instructions are removed from the hierarchical cache based on a least recently used replacement scheme. 2. The method of claim 1, wherein the hierarchical cache is implemented as a physical stack with blocks of native instructions added having variable sizes. 3. The method of claim 1, further comprising:
indexing the conversion look aside buffer using a portion of the guest address to identify a location in the hierarchical cache with the set of native instructions. 4. The method of claim 1, wherein each mapping in the conversion look aside buffer includes a translated block range. 5. The method of claim 1, wherein each mapping in the conversion look aside buffer includes management bits to track frequency of hits. 6. The method of claim 1 wherein each mapping in the conversion look aside buffer includes dynamic branch bias bits to track branch prediction accuracy for branch instructions in the set of native instructions. 7. The method of claim 1, wherein the guest address is sub-divided into tags and the conversion look aside buffer is structured as a multi-way tag array. 8. A device to convert guest instructions to native instructions, the device implementing a method for caching storage management to support guest instruction to native instruction conversion, the device comprising:
fetching logic to access a set of guest instructions from guest code; conversion logic to translate the set of guest instructions into a set of native instructions, to store the set of native instructions in a native instruction cache, and to store mappings between an identifier based on a guest address for the set of guest instructions and an identifier based on a native address for the set of native instructions, the mappings being stored in a conversion look aside buffer; the conversion look aside buffer to store the mappings; and a hierarchical cache to store the set of native instructions where the set of native instructions are removed from the hierarchical cache based on a least recently used replacement scheme. 9. The device of claim 8, wherein the hierarchical cache is implemented as a physical stack with blocks of native instructions added having variable sizes. 10. The device of claim 8, wherein the conversion look aside buffer is indexed using a portion of the guest address to identify a location in the hierarchical cache with the set of native instructions. 11. The device of claim 8, wherein each mapping in the conversion look aside buffer includes a translated block range. 12. The device of claim 8, wherein each mapping in the conversion look aside buffer includes management bits to track frequency of hits. 13. The device of claim 8 wherein each mapping in the conversion look aside buffer includes dynamic branch bias bits to track branch prediction accuracy for branch instructions in the set of native instructions. 14. The device of claim 8, wherein the guest address is sub-divided into tags and the conversion look aside buffer is structured as a multi-way tag array. 15. A system supporting guest instruction to native instruction conversion, the system including caching storage management to support the guest instruction to native instruction conversion, the system comprising:
a processor to execute native instructions; a pipeline coupled to the processor to implement the guest instruction to native instruction conversion, the pipeline to access a set of guest instructions from guest code, to translate the set of guest instructions into a set of native instructions, to store the set of native instructions in a native instruction cache, and to store mappings between an identifier based on a guest address for the set of guest instructions and an identifier based on a native address for the set of native instructions, the mappings being stored in a conversion look aside buffer, the conversion look aside buffer to store the mappings; and a hierarchical cache to store the set of native instructions where the set of native instructions are removed from the hierarchical cache based on a least recently used replacement scheme. 16. The system of claim 15, wherein the hierarchical cache is implemented as a physical stack with blocks of native instructions added having variable sizes. 17. The system of claim 15, wherein the conversion look aside buffer is indexed using a portion of the guest address to identify a location in the hierarchical cache with the set of native instructions. 18. The system of claim 15, wherein each mapping in the conversion look aside buffer includes a translated block range. 19. The system of claim 15, wherein each mapping in the conversion look aside buffer includes management bits to track frequency of hits. 20. The system of claim 15, wherein each mapping in the conversion look aside buffer includes dynamic branch bias bits to track branch prediction accuracy for branch instructions in the set of native instructions. | 2,100 |
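The claims of the record above (application 15,650,230) pair a conversion look aside buffer, which maps guest-address identifiers to native-address identifiers, with a cache from which entries are removed under a least-recently-used replacement scheme. The sketch below is purely illustrative and not the patent's implementation: names are hypothetical, and a flat `OrderedDict` stands in for the multi-way tag array and hierarchical cache the claims actually describe; it shows only the mapping-with-LRU-eviction idea.

```python
from collections import OrderedDict

class ConversionLookasideBuffer:
    """Guest-address id -> native-address id mapping with LRU eviction."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._map = OrderedDict()          # insertion/recency-ordered mapping

    def insert(self, guest_id: int, native_id: int) -> None:
        if guest_id in self._map:
            self._map.move_to_end(guest_id)
        self._map[guest_id] = native_id
        if len(self._map) > self.capacity:
            self._map.popitem(last=False)  # evict the least recently used entry

    def lookup(self, guest_id: int):
        if guest_id not in self._map:
            return None                    # miss: a conversion would be performed
        self._map.move_to_end(guest_id)    # a hit refreshes recency
        return self._map[guest_id]
```

On a hit, the returned native-address identifier would locate the already-converted native instruction block; on a miss, the guest block would be translated and a new mapping inserted.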
5,974 | 5,974 | 15,890,159 | 2,176 | A system and method for determining a layout of an electronic document containing bidirectional Hebrew text is disclosed. The system and method have a layout filter configured to determine if the electronic document is a candidate for layout detection based upon features of the electronic document; an encoding detector configured to determine the encoding employed to encode characters in the electronic document; and an ordering detector configured to determine, based on the determined encoding, an ordering scheme employed in the electronic document. Additionally, the system and method have a base direction detector configured to determine, based on the determined ordering scheme, a base direction of the electronic document based at least on non-Hebrew characters present in the electronic document, and a mirroring detector configured to determine a character mirroring state for the electronic document based upon the ordering scheme and a presence of at least one bracket pair in the electronic document. | 1. A system for determining a layout of an electronic document containing bidirectional Hebrew text comprising:
a layout filter configured to determine if the electronic document is a candidate for layout detection based upon features of the electronic document; an encoding detector configured to determine the encoding employed to encode characters in the electronic document; an ordering detector configured to determine, based on the determined encoding, an ordering scheme employed in the electronic document; a base direction detector configured to determine, based on the determined ordering scheme, a base direction of the electronic document based at least on non-Hebrew characters present in the electronic document; and a mirroring detector configured to determine a character mirroring state for the electronic document based upon the determined ordering scheme and a presence of at least one bracket pair in the electronic document. 2. The system of claim 1 further comprising:
a rendering component configured to render the electronic document based upon the determined ordering scheme and the determined base direction. 3. The system of claim 1 wherein the ordering scheme is determined to be logical. 4. The system of claim 3 wherein the base direction detector is configured to determine the base direction by determining a number of strong left-to-right characters and a number of strong right-to-left characters in the electronic document. 5. The system of claim 4 wherein the base direction is determined to be right-to-left when the number of strong right-to-left characters is determined to be greater than the number of strong left-to-right characters. 6. The system of claim 4 wherein the base direction is determined to be left-to-right when the number of strong left-to-right characters is determined to be greater than the number of strong right-to-left characters. 7. The system of claim 3 wherein the ordering scheme is determined to be logical when meaningful Hebrew text is found in an original ordering of characters forming the electronic document. 8. The system of claim 7 wherein the meaningful Hebrew text is determined by the encoding detector. 9. The system of claim 8 wherein the encoding detector employs natural language processing to determine if the electronic document includes meaningful Hebrew text. 10. The system of claim 1 wherein the ordering scheme is determined to be visual. 11. The system of claim 10 wherein the ordering scheme is determined to be visual when meaningful Hebrew text is found in a reversed ordering of characters forming the electronic document. 12. The system of claim 11 wherein the reversed order of characters is provided to the encoding detector to identify meaningful Hebrew text. 13. The system of claim 10 wherein the base direction detector only processes non-Hebrew portions of the electronic document to determine the base direction. 14. 
The system of claim 13 wherein the base direction detector determines that the base direction is left-to-right when left-to-right text is identified in the electronic document, and meaningful words in the left-to-right text are identified. 15. The system of claim 13 wherein when the electronic document contains left-to-right text, and the left-to-right text does not contain meaningful words, the base direction detector is configured to reverse an order associated with the non-Hebrew portions of the electronic document. 16. The system of claim 15 wherein the base direction detector determines that the base direction is right-to-left when the reversed order of the non-Hebrew portions is determined to contain meaningful words. 17. The system of claim 15 wherein the base direction detector determines that the base direction is left-to-right when the reversed order of the non-Hebrew portions is determined not to contain meaningful words. 18. The system of claim 1 wherein the encoding detector determines that the electronic document is encoded with characters encoded in an encoding selected from the group consisting of Unicode, single byte Hebrew, non-Unicode non-Hebrew, and non-Unicode but undefined characters. 19. The system of claim 1 wherein the layout filter is configured to determine that the electronic document is in plain text, and is configured to convert the electronic document to plain text when the layout filter determines that the electronic document is not in plain text. 20. A computer program product having computer executable instructions that, when executed by at least one computing device, determine a layout direction for at least one electronic document, comprising instructions to:
receive an electronic document, the electronic document including bidirectional text wherein at least a portion of the bidirectional text includes Hebrew text; determine if the electronic document is in plain text; when the electronic document is not in plain text, convert the electronic document to plain text such that the electronic document is in plain text; when the electronic document is in plain text, determine if the electronic document has uniform layout characteristics;
when the electronic document does not have uniform layout characteristics, terminate the layout direction process;
when the electronic document has uniform layout characteristics, determine if the electronic document includes actual Hebrew words in the Hebrew text;
when the electronic document does not contain actual Hebrew words, terminate the layout detection process; when the electronic document contains actual Hebrew words, determine if all of the actual Hebrew words in the Hebrew text are palindromes;
when all of the actual Hebrew words are palindromes, terminate the layout detection process;
when the Hebrew words are not all palindromes, detect a character encoding for the electronic document; when the detected character encoding is non-Unicode non-Hebrew encoding, terminate the layout detection process; when the detected encoding is non-Unicode Hebrew, determine the ordering scheme is a logical ordering scheme; when the detected character encoding is Unicode,
extract all Hebrew characters from the electronic document;
convert the extracted Hebrew characters to single byte Hebrew;
determine if the extracted Hebrew characters form actual Hebrew words;
when the extracted Hebrew characters form actual Hebrew words, determine the ordering scheme is logical;
when the extracted Hebrew characters do not form actual Hebrew words, reverse the extracted Hebrew characters to create a reversed version of the extracted Hebrew characters;
determine if the reversed version contains actual Hebrew words;
when the reversed version is determined to contain actual Hebrew words, determine the ordering scheme is visual;
when the reversed version does not contain actual Hebrew words, terminate the layout detection process;
when the detected character encoding is determined to be non-Unicode and undefined characters,
reverse the Hebrew characters to create a second reversed version of the Hebrew characters;
detect the character encoding for the second reversed version;
when the detected character encoding for the second reversed version is non-Hebrew or undefined, terminate the layout detection process;
when the detected character encoding for the second reversed version is single byte Hebrew, determine if the second reversed version contains actual Hebrew words;
when the second reversed version is determined to contain actual Hebrew words, determine the ordering scheme is visual;
when the second reversed version does not contain actual Hebrew words, terminate the layout detection process;
determine if the electronic document is Unicode Transformation Format 32 (UTF-32),
when the electronic document is not in UTF-32, convert the electronic document to UTF-32;
when the ordering scheme is logical, determine a base direction by:
determine a first number of characters in the electronic document that are strong left-to-right characters;
determine a second number of characters in the electronic document that are strong right-to-left characters;
compare the first number of characters to the second number of characters;
when the first number of characters is greater than the second number of characters, determine the base direction of left-to-right;
when the first number of characters is less than the second number of characters, determine the base direction of right-to-left;
when the ordering scheme is visual, determine the base direction by:
determine if a left-to-right script is present in the electronic document;
when left-to-right script is not present in the electronic document, determine the base direction of left-to-right;
when left-to-right script is present, determine if a left-to-right language is present in non-Hebrew portions of the electronic document;
when a left-to-right language is present in the electronic document, determine the base direction of left-to-right;
when a left-to-right language is not present in the electronic document, reverse the left-to-right script to create a third reversed version of the electronic document;
determine a language associated with the third reversed version;
when the language associated with the third reversed version is determined to be a left-to-right language, determine the base direction of right-to-left;
when the language associated with the third reversed version is determined not to be a left-to-right language, determine the base direction of left-to-right;
determine if the electronic document includes at least one bracket pair;
when the electronic document does not include at least one bracket pair and the ordering scheme is logical, determine character mirroring is off;
when the electronic document does not include at least one bracket pair and the ordering scheme is visual, determine character mirroring is on;
when the electronic document includes at least one bracket pair, determine if a bracket having an open property precedes a bracket having a close property;
when the bracket having the open property precedes the bracket having the close property and the ordering scheme is logical, determine character mirroring is off;
when the bracket having the open property does not precede the bracket having the close property and the ordering scheme is logical, determine character mirroring is on;
when the bracket having the open property precedes the bracket having the close property and the ordering scheme is visual, determine character mirroring is on;
when the bracket having the open property does not precede the bracket having the close property and the ordering scheme is visual, determine character mirroring is on; and
render the electronic document based upon the determined ordering scheme and base direction. | A system and method for determining a layout of an electronic document containing bidirectional Hebrew text is disclosed. The system and method have a layout filter configured to determine if the electronic document is a candidate for layout detection based upon features of the electronic document, and an encoding detector configured to determine the encoding employed to encode characters in the electronic document; and an ordering detector configured to determine, based on the determined encoding, an ordering scheme employed in the electronic document. Additionally, a base direction detector configured to determine, based on the determined ordering scheme, a base direction of the electronic document based at least on non-Hebrew characters present in the electronic document; and a mirroring detector configured to determine a character mirroring state for the electronic document based upon the ordering scheme and a presence of at least one bracket pair in the electronic document. 1. A system for determining a layout of an electronic document containing bidirectional Hebrew text comprising:
a layout filter configured to determine if the electronic document is a candidate for layout detection based upon features of the electronic document; an encoding detector configured to determine the encoding employed to encode characters in the electronic document; an ordering detector configured to determine, based on the determined encoding, an ordering scheme employed in the electronic document; a base direction detector configured to determine, based on the determined ordering scheme, a base direction of the electronic document based at least on non-Hebrew characters present in the electronic document; and a mirroring detector configured to determine a character mirroring state for the electronic document based upon the determined ordering scheme and a presence of at least one bracket pair in the electronic document. 2. The system of claim 1 further comprising:
a rendering component configured to render the electronic document based upon the determined ordering scheme and the determined base direction. 3. The system of claim 1 wherein the ordering scheme is determined to be logical. 4. The system of claim 3 wherein the base direction detector is configured to determine the base direction by determining a number of strong left-to-right characters and a number of strong right-to-left characters in the electronic document. 5. The system of claim 4 wherein the base direction is determined to be right-to-left when the number of strong right-to-left characters is determined to be greater than the number of strong left-to-right characters. 6. The system of claim 4 wherein the base direction is determined to be left-to-right when the number of strong left-to-right characters is determined to be greater than the number of strong right-to-left characters. 7. The system of claim 3 wherein the ordering scheme is determined to be logical when meaningful Hebrew text is found in an original ordering of characters forming the electronic document. 8. The system of claim 7 wherein the meaningful Hebrew text is determined by the encoding detector. 9. The system of claim 8 wherein the encoding detector employs natural language processing to determine if the electronic document includes meaningful Hebrew text. 10. The system of claim 1 wherein the ordering scheme is determined to be visual. 11. The system of claim 10 wherein the ordering scheme is determined to be visual when meaningful Hebrew text is found in a reversed ordering of characters forming the electronic document. 12. The system of claim 11 wherein the reversed order of characters is provided to the encoding detector to identify meaningful Hebrew text. 13. The system of claim 10 wherein the base direction detector only processes non-Hebrew portions of the electronic document to determine the base direction. 14. 
The system of claim 13 wherein the base direction detector determines that the base direction is left-to-right when left-to-right text is identified in the electronic document, and meaningful words in the left-to-right text are identified. 15. The system of claim 13 wherein when the electronic document contains left-to-right text, and the left-to-right text does not contain meaningful words, the base direction detector is configured to reverse an order associated with the non-Hebrew portions of the electronic document. 16. The system of claim 15 wherein the base direction detector determines that the base direction is right-to-left when the reversed order of the non-Hebrew portions is determined to contain meaningful words. 17. The system of claim 15 wherein the base direction detector determines that the base direction is left-to-right when the reversed order of the non-Hebrew portions is determined not to contain meaningful words. 18. The system of claim 1 wherein the encoding detector determines that the electronic document is encoded with characters encoded in an encoding selected from the group consisting of Unicode, single byte Hebrew, non-Unicode non-Hebrew, and non-Unicode but undefined characters. 19. The system of claim 1 wherein the layout filter is configured to determine that the electronic document is in plain text, and is configured to convert the electronic document to plain text when the layout filter determines that the electronic document is not in plain text. 20. A computer program product having computer executable instructions that, when executed by at least one computing device, determine a layout direction for at least one electronic document, comprising instructions to:
receive an electronic document, the electronic document including bidirectional text wherein at least a portion of the bidirectional text includes Hebrew text; determine if the electronic document is in plain text; when the electronic document is not in plain text, convert the electronic document to plain text such that the electronic document is in plain text; when the electronic document is in plain text, determine if the electronic document has uniform layout characteristics;
when the electronic document does not have uniform layout characteristics, terminate the layout direction process;
when the electronic document has uniform layout characteristics, determine if the electronic document includes actual Hebrew words in the Hebrew text;
when the electronic document does not contain actual Hebrew words, terminate the layout detection process; when the electronic document contains actual Hebrew words, determine if all of the actual Hebrew words in the Hebrew text are palindromes;
when all of the actual Hebrew words are palindromes, terminate the layout detection process;
when the Hebrew words are not all palindromes, detect a character encoding for the electronic document; when the detected character encoding is non-Unicode non-Hebrew encoding, terminate the layout detection process; when the detected encoding is non-Unicode Hebrew, determine the ordering scheme is a logical ordering scheme; when the detected character encoding is Unicode,
extract all Hebrew characters from the electronic document;
convert the extracted Hebrew characters to single byte Hebrew;
determine if the extracted Hebrew characters form actual Hebrew words;
when the extracted Hebrew characters form actual Hebrew words, determine the ordering scheme is logical;
when the extracted Hebrew characters do not form actual Hebrew words, reverse the extracted Hebrew characters to create a reversed version of the extracted Hebrew characters;
determine if the reversed version contains actual Hebrew words;
when the reversed version is determined to contain actual Hebrew words, determine the ordering scheme is visual;
when the reversed version does not contain actual Hebrew words, terminate the layout detection process;
when the detected character encoding is determined to be non-Unicode and undefined characters,
reverse the Hebrew characters to create a second reversed version of the Hebrew characters;
detect the character encoding for the second reversed version;
when the detected character encoding for the second reversed version is non-Hebrew or undefined, terminate the layout detection process;
when the detected character encoding for the second reversed version is single byte Hebrew, determine if the second reversed version contains actual Hebrew words;
when the second reversed version is determined to contain actual Hebrew words, determine the ordering scheme is visual;
when the second reversed version does not contain actual Hebrew words, terminate the layout detection process;
determine if the electronic document is Unicode Transformation Format 32 (UTF-32),
when the electronic document is not in UTF-32, convert the electronic document to UTF-32;
when the ordering scheme is logical, determine a base direction by:
determine a first number of characters in the electronic document that are strong left-to-right characters;
determine a second number of characters in the electronic document that are strong right-to-left characters;
compare the first number of characters to the second number of characters;
when the first number of characters is greater than the second number of characters, determine the base direction of left-to-right;
when the first number of characters is less than the second number of characters, determine the base direction of right-to-left;
when the ordering scheme is visual, determine the base direction by:
determine if a left-to-right script is present in the electronic document;
when left-to-right script is not present in the electronic document, determine the base direction of left-to-right;
when left-to-right script is present, determine if a left-to-right language is present in non-Hebrew portions of the electronic document;
when a left-to-right language is present in the electronic document, determine the base direction of left-to-right;
when a left-to-right language is not present in the electronic document, reverse the left-to-right script to create a third reversed version of the electronic document;
determine a language associated with the third reversed version;
when the language associated with the third reversed version is determined to be a left-to-right language, determine the base direction of right-to-left;
when the language associated with the third reversed version is determined not to be a left-to-right language, determine the base direction of left-to-right;
determine if the electronic document includes at least one bracket pair;
when the electronic document does not include at least one bracket pair and the ordering scheme is logical, determine character mirroring is off;
when the electronic document does not include at least one bracket pair and the ordering scheme is visual, determine character mirroring is on;
when the electronic document includes at least one bracket pair, determine if a bracket having an open property precedes a bracket having a close property;
when the bracket having the open property precedes the bracket having the close property and the ordering scheme is logical, determine character mirroring is off;
when the bracket having the open property does not precede the bracket having the close property and the ordering scheme is logical, determine character mirroring is on;
when the bracket having the open property precedes the bracket having the close property and the ordering scheme is visual, determine character mirroring is on;
when the bracket having the open property does not precede the bracket having the close property and the ordering scheme is visual, determine character mirroring is on; and
render the electronic document based upon the determined ordering scheme and base direction. | 2,100 |
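The base-direction and mirroring rules in the record above (count strong left-to-right versus strong right-to-left characters; set mirroring from the ordering scheme and whether an opening bracket precedes a closing one) can be sketched as follows. This is an illustration, assuming the Unicode bidirectional classes 'L', 'R', and 'AL' as the strong classes and '('/')' as the bracket pair; the function names are invented, not taken from the patent:

```python
import unicodedata

def detect_base_direction(text):
    # Compare counts of strong LTR and strong RTL characters: when strong
    # RTL characters outnumber strong LTR ones, the base direction is RTL.
    ltr = sum(1 for ch in text if unicodedata.bidirectional(ch) == "L")
    rtl = sum(1 for ch in text if unicodedata.bidirectional(ch) in ("R", "AL"))
    return "rtl" if rtl > ltr else "ltr"

def detect_mirroring(ordering_scheme, text):
    # Visual ordering: mirroring is on whether or not a bracket pair exists.
    # Logical ordering: mirroring is off when no bracket pair exists, or when
    # the opening bracket precedes the closing one; on otherwise.
    open_idx, close_idx = text.find("("), text.find(")")
    if open_idx == -1 or close_idx == -1:
        return ordering_scheme == "visual"
    if ordering_scheme == "visual":
        return True
    return close_idx < open_idx
```

For example, a logical-order document containing "(שלום)" keeps mirroring off because the open bracket comes first, while the same bytes in visual order would have mirroring turned on.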
5,975 | 5,975 | 15,991,549 | 2,177 | Scheduling events with multiple invitees includes: identifying a plurality of invitees for an event in a calendar system having at least one processor; searching an availability associated with each of the invitees to determine a number of periods of availability in the calendar system, at least some of the invitees being available during each period of availability; creating a separate instance of the event in the calendar system for each identified period of availability; and for each invitee available during at least one of the periods of availability, assigning the invitee to one said instance of the event for which that invitee is available with the calendar system. | 1. A method of scheduling events with multiple invitees, said method comprising:
identifying a plurality of invitees for an event in a calendar system, the calendar system comprising at least one processor; searching an availability associated with each of the invitees to determine a number of periods of availability in the calendar system, at least one of the invitees being available during each period of availability; creating a separate instance of the event in the calendar system for each identified period of availability including scheduling a plurality of instances of the same event to occur at different times in the calendar system; and for each invitee available during at least one of the periods of availability, with the calendar system, assigning the invitee to one said instance of the event for which that invitee is available. 2. The method of claim 1, in which the number of periods of availability in the calendar system is set by a user. 3. The method of claim 1, in which the number of periods of availability in the calendar system is a minimum number of periods of availability such that each of the invitees is available during at least one of the periods of availability. 4. The method of claim 1, in which the plurality of instances of the same event are scheduled within a period of time set by a user. 5. The method of claim 1, further comprising determining the number of periods of availability in the calendar system automatically in response to the amount of invitees being greater than a specified threshold. 6. The method of claim 1, further comprising associating all the instances of the event in the calendar system such that a change made to any one of the instances results in an automatic update to the other instances of the event to reflect that change. 7. The method of claim 1, further comprising, for a said invitee available during more than one of the periods of availability, displaying each of the periods of availability for which the invitee is available to a user. 8. 
The method of claim 1, further comprising allowing the user to change an assignment of a said invitee from one instance of the event to another instance of the event. 9. The method of claim 1, further comprising sending an electronic invitation to each of the invitees assigned to one of the instances of the event. 10. The method of claim 1, further comprising, in response to determining that no single available meeting period exists during which each of the invitees is available:
displaying a button for splitting a meeting into multiple meetings; and in response to activation of the button, determining the number of periods of availability in the calendar system, at least one of the invitees being available during each period of availability. 11. A computerized calendar system, comprising:
a processor communicatively coupled to a memory, the memory comprising executable code stored thereon that, when executed by the processor, causes the processor to:
identify a plurality of invitees for an event;
search an availability associated with each of the invitees to determine a number of periods of availability, at least one of the invitees being available during each period of availability;
schedule a separate instance of the event for each identified period of availability such that multiple instances of the same event are scheduled to occur at different times to accommodate availability of different invitees; and
for each invitee, assign the invitee to one said instance of the event for which that invitee is available. 12. The computerized calendar system of claim 11, the executable code further causing the processor to determine the number of periods of availability in the calendar system automatically in response to the amount of invitees being greater than a specified threshold. 13. The computerized calendar system of claim 11, the executable code further causing the processor to allow the user to change an assignment of a said invitee from one instance of the event to another instance of the event. 14. A method of scheduling events with multiple invitees, said method comprising:
identifying a plurality of invitees for an event in a calendar system comprising at least one processor; determining in the calendar system whether an available meeting period exists during which each of the invitees is available; in response to determining that no single available meeting period exists during which each of the invitees is available, scheduling a plurality of instances of the same event to occur at different times in the calendar system; and for each invitee, assigning the invitee to one said instance of the event for which that invitee is available, such that a different group of invitees is assigned to each of the multiple instances of the event based on availability of individual invitees. 15. The method of claim 14, further comprising:
in response to determining that no single available meeting period exists during which each of the invitees is available, displaying a button for splitting a meeting into multiple meetings; and in response to activation of the button, scheduling the plurality of instances of the same event to occur at different times in the calendar system. 16. The method of claim 14, in which a number of the plurality of instances of the same event is set by a user. 17. The method of claim 14, in which a number of the plurality of instances of the same event is a minimum number of periods of availability such that each of the invitees is available during at least one of the periods of availability. 18. The method of claim 14, further comprising scheduling the plurality of instances of the same event automatically in response to the amount of invitees being greater than a specified threshold. 19. The method of claim 14, further comprising displaying the invitees assigned to each instance of the event to a user. 20. The method of claim 14, further comprising, allowing the user to change an assignment of a said invitee from one instance of the event to another instance of the event. | Scheduling events with multiple invitees includes: identifying a plurality of invitees for an event in a calendar system having at least one processor; searching an availability associated with each of the invitees to determine a number of periods of availability in the calendar system, at least some of the invitees being available during each period of availability; creating a separate instance of the event in the calendar system for each identified period of availability; and for each invitee available during at least one of the periods of availability, assigning the invitee to one said instance of the event for which that invitee is available with the calendar system. 1. A method of scheduling events with multiple invitees, said method comprising:
identifying a plurality of invitees for an event in a calendar system, the calendar system comprising at least one processor; searching an availability associated with each of the invitees to determine a number of periods of availability in the calendar system, at least one of the invitees being available during each period of availability; creating a separate instance of the event in the calendar system for each identified period of availability including scheduling a plurality of instances of the same event to occur at different times in the calendar system; and for each invitee available during at least one of the periods of availability, with the calendar system, assigning the invitee to one said instance of the event for which that invitee is available. 2. The method of claim 1, in which the number of periods of availability in the calendar system is set by a user. 3. The method of claim 1, in which the number of periods of availability in the calendar system is a minimum number of periods of availability such that each of the invitees is available during at least one of the periods of availability. 4. The method of claim 1, in which the plurality of instances of the same event are scheduled within a period of time set by a user. 5. The method of claim 1, further comprising determining the number of periods of availability in the calendar system automatically in response to the amount of invitees being greater than a specified threshold. 6. The method of claim 1, further comprising associating all the instances of the event in the calendar system such that a change made to any one of the instances results in an automatic update to the other instances of the event to reflect that change. 7. The method of claim 1, further comprising, for a said invitee available during more than one of the periods of availability, displaying each of the periods of availability for which the invitee is available to a user. 8. 
The method of claim 1, further comprising allowing the user to change an assignment of a said invitee from one instance of the event to another instance of the event. 9. The method of claim 1, further comprising sending an electronic invitation to each of the invitees assigned to one of the instances of the event. 10. The method of claim 1, further comprising, in response to determining that no single available meeting period exists during which each of the invitees is available:
displaying a button for splitting a meeting into multiple meetings; in response to activation of the button, determining the number of periods of availability in the calendar system, at least one of the invitees being available during each period of availability. 11. A computerized calendar system, comprising:
a processor communicatively coupled to a memory, the memory comprising executable code stored thereon that, when executed by the processor, causes the processor to:
identify a plurality of invitees for an event;
search an availability associated with each of the invitees to determine a number of periods of availability, at least one of the invitees being available during each period of availability;
schedule a separate instance of the event for each identified period of availability such that multiple instances of the same event are scheduled to occur at different times to accommodate availability of different invitees; and
for each invitee, assign the invitee to one said instance of the event for which that invitee is available. 12. The computerized calendar system of claim 11, the executable code further causing the processor to determine the number of periods of availability in the calendar system automatically in response to the amount of invitees being greater than a specified threshold. 13. The computerized calendar system of claim 11, the executable code further causing the processor to allow the user to change an assignment of a said invitee from one instance of the event to another instance of the event. 14. A method of scheduling events with multiple invitees, said method comprising:
identifying a plurality of invitees for an event in a calendar system comprising at least one processor; determining in the calendar system whether an available meeting period exists during which each of the invitees is available; in response to determining that no single available meeting period exists during which each of the invitees is available, scheduling a plurality of instances of the same event to occur at different times in the calendar system; and for each invitee, assigning the invitee to one said instance of the event for which that invitee is available, such that a different group of invitees is assigned to each of the multiple instances of the event based on availability of individual invitees. 15. The method of claim 14, further comprising:
in response to determining that no single available meeting period exists during which each of the invitees is available, displaying a button for splitting a meeting into multiple meetings; and in response to activation of the button, scheduling the plurality of instances of the same event to occur at different times in the calendar system. 16. The method of claim 14, in which a number of plurality of instances of the same event is set by a user. 17. The method of claim 14, in which a number of plurality of instances of the same event is a minimum number of periods of availability such that each of the invitees is available during at least one of the periods of availability. 18. The method of claim 14, further comprising scheduling the plurality of instances of the same event automatically in response to the amount of invitees being greater than a specified threshold. 19. The method of claim 14, further comprising displaying the invitees assigned to each instance of the event to a user. 20. The method of claim 14, further comprising, allowing the user to change an assignment of a said invitee from one instance of the event to another instance of the event. | 2,100 |
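The calendar claims above describe splitting one event into multiple instances when no single period suits every invitee, using a minimum number of availability periods (claim 3) and assigning each invitee to one instance they can attend. A minimal sketch of that scheduling step, assuming a hypothetical `split_event` helper and using greedy set cover (which approximates, but does not guarantee, the minimum number of periods):

```python
def split_event(availability):
    """Greedy sketch: pick periods until every invitee is covered,
    then assign each invitee to one event instance they can attend.

    availability: dict mapping period -> set of available invitees.
    Returns (chosen_periods, assignment dict invitee -> period).
    """
    uncovered = set().union(*availability.values())
    chosen = []
    while uncovered:
        # pick the period covering the most still-uncovered invitees
        period = max(availability, key=lambda p: len(availability[p] & uncovered))
        gained = availability[period] & uncovered
        if not gained:
            raise ValueError("some invitees have no availability")
        chosen.append(period)
        uncovered -= gained
    assignment = {}
    for period in chosen:
        for invitee in availability[period]:
            assignment.setdefault(invitee, period)  # one instance per invitee
    return chosen, assignment
```

Greedy set cover is a standard stand-in here; the claims themselves do not name an algorithm, only the outcome (each invitee assigned to an instance for which that invitee is available).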
5,976 | 5,976 | 14,098,947 | 2,176 | A user-centric management application system and method for a security system that conceptualizes the security devices and the tasks to perform on the security devices as separate objects with common properties and behavior. Operators of the system create abstract containers, called dockviews, which the operator adds objects to in order to perform specific tasks. Because the operator assigns user access to dockviews and objects, the system tailors system access for both application users and tenants of the security system based on policy objectives. Dockviews have the ability to open in separate application windows to support priority display and isolation of critical management tasks. The system provides integrated user administration, event management and reports capability, a consistent “look and feel,” and system-wide automated event notification via a watchlist window. | 1. A method for organizing and presenting object information for objects in a security system, comprising:
displaying graphical user interfaces on user devices; and displaying the object information for the objects in a window of the graphical user interfaces, wherein the window makes a graphical conversion to enable the display of additional object information. 2. The method of claim 1, further comprising displaying the additional object information by providing the graphical conversion as a flip window that rotates about a flip axis. 3. The method of claim 1, wherein the displaying of the additional object information in the window is enabled when the object information does not fit within one or more working areas of the window. 4. The method of claim 1, wherein the displaying of the additional object information in the window occurs in response to an operator selecting a flip button of the window. 5. The method of claim 1, wherein in response to an operator selecting a flip button of the window, the window rotates about the flip axis in a flip direction. 6. The method of claim 1, further comprising displaying the object information in the window using object icons for the objects in the security system. 7. The method of claim 1, wherein the object information displayed in the window includes user information for access cards and video streams from security cameras. 8. The method of claim 7, wherein the user information includes a user picture. 9. The method of claim 1, wherein the object information displayed in the window is organized by an object type for the objects in the security system. 10. A security system, comprising:
graphical user interfaces on user devices for organizing and displaying object information from objects of the security system which communicate over a security network; and one or more windows of the graphical user interfaces for enabling the display of additional object information from the objects of the security system. 11. The security system of claim 10, wherein the windows include a flip button that the operator selects for rotating the windows for displaying the additional object information. 12. The security system of claim 10, wherein the additional object information is displayed by rotating the windows about a flip axis. 13. The security system of claim 10, wherein the windows include a flip button that the operator selects for rotating the windows for displaying the additional object information. 14. The security system of claim 10, wherein the windows display the additional object information when the object information does not fit within one or more working areas of the windows. 15. The security system of claim 10, wherein the object information displayed in the windows include user information for access cards and video streams from security cameras. 16. The security system of claim 10, wherein the user information includes a user picture. 17. The security system of claim 10, wherein the object information displayed in the windows is organized by an object type for the objects in the security system. 18. A method for organizing and presenting object information for objects in a security system, comprising:
displaying graphical user interfaces on user devices; and displaying the object information for the objects in a window of the graphical user interfaces by rendering the window in a graphics plane that suggests a curved image surface to enable the display of additional object information. | A user-centric management application system and method for a security system that conceptualizes the security devices and the tasks to perform on the security devices as separate objects with common properties and behavior. Operators of the system create abstract containers, called dockviews, which the operator adds objects to in order to perform specific tasks. Because the operator assigns user access to dockviews and objects, the system tailors system access for both application users and tenants of the security system based on policy objectives. Dockviews have the ability to open in separate application windows to support priority display and isolation of critical management tasks. The system provides integrated user administration, event management and reports capability, a consistent “look and feel,” and system-wide automated event notification via a watchlist window.1. A method for organizing and presenting object information for objects in a security system, comprising:
displaying graphical user interfaces on user devices; and displaying the object information for the objects in a window of the graphical user interfaces, wherein the window makes a graphical conversion to enable the display of additional object information. 2. The method of claim 1, further comprising displaying the additional object information by providing the graphical conversion as a flip window that rotates about a flip axis. 3. The method of claim 1, wherein the displaying of the additional object information in the window is enabled when the object information does not fit within one or more working areas of the window. 4. The method of claim 1, wherein the displaying of the additional object information in the window occurs in response to an operator selecting a flip button of the window. 5. The method of claim 1, wherein in response to an operator selecting a flip button of the window, the window rotates about the flip axis in a flip direction. 6. The method of claim 1, further comprising displaying the object information in the window using object icons for the objects in the security system. 7. The method of claim 1, wherein the object information displayed in the window includes user information for access cards and video streams from security cameras. 8. The method of claim 7, wherein the user information includes a user picture. 9. The method of claim 1, wherein the object information displayed in the window is organized by an object type for the objects in the security system. 10. A security system, comprising:
graphical user interfaces on user devices for organizing and displaying object information from objects of the security system which communicate over a security network; and one or more windows of the graphical user interfaces for enabling the display of additional object information from the objects of the security system. 11. The security system of claim 10, wherein the windows include a flip button that the operator selects for rotating the windows for displaying the additional object information. 12. The security system of claim 10, wherein the additional object information is displayed by rotating the windows about a flip axis. 13. The security system of claim 10, wherein the windows include a flip button that the operator selects for rotating the windows for displaying the additional object information. 14. The security system of claim 10, wherein the windows display the additional object information when the object information does not fit within one or more working areas of the windows. 15. The security system of claim 10, wherein the object information displayed in the windows include user information for access cards and video streams from security cameras. 16. The security system of claim 10, wherein the user information includes a user picture. 17. The security system of claim 10, wherein the object information displayed in the windows is organized by an object type for the objects in the security system. 18. A method for organizing and presenting object information for objects in a security system, comprising:
displaying graphical user interfaces on user devices; and displaying the object information for the objects in a window of the graphical user interfaces by rendering the window in a graphics plane that suggests a curved image surface to enable the display of additional object information. | 2,100 |
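The flip-window claims above hinge on one condition: additional object information is shown (via the flip) only when the information does not fit the window's working area (claim 3), and only after the operator presses a flip button (claim 4). A minimal behavioral sketch, with hypothetical names and the graphics (rotation about a flip axis) omitted:

```python
class FlipWindow:
    """Sketch of the claimed flip-window behavior: the front face shows
    what fits the working area; the back face shows the overflow."""

    def __init__(self, capacity):
        self.capacity = capacity   # items the working area can display
        self.items = []
        self.flipped = False

    def add(self, item):
        self.items.append(item)

    def flip_enabled(self):
        # the flip button is enabled only when information overflows
        return len(self.items) > self.capacity

    def flip(self):
        # operator selects the flip button; no-op when nothing overflows
        if self.flip_enabled():
            self.flipped = not self.flipped

    def visible(self):
        # front face: first `capacity` items; back face: the overflow
        return self.items[self.capacity:] if self.flipped else self.items[:self.capacity]
```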
5,977 | 5,977 | 14,716,834 | 2,161 | Examples perform transactions across a distributed system of elements, such as nodes, computing devices, objects, and virtual machines. The elements of the distributed system maintain data (e.g, tables) which include information on transactions previously received and the source of the transactions. A first element of the distributed system transmits a transaction, the identifier (ID) of the first element, and a transaction ID to a plurality of second elements. The second elements compare the transaction ID to the maximum transaction ID associated with the first element and stored in the tables to determine whether the transaction is the most recent and should be performed, or whether the transaction has already been performed and should not be re-performed. In this manner, undo logs are not needed. | 1. A method for processing transactions among elements of a distributed system using a redo-only write-ahead log, said method comprising:
receiving a transaction, a transaction identifier (ID) and a first element ID by a second element; comparing the received transaction ID to a set of transaction IDs maintained by the second element, the set of transaction IDs representing transactions previously performed by the second element; writing the received transaction to a redo-log if the transaction is more recent than the previously performed transactions; performing or not performing the transaction based on the comparison; and updating the set of transaction IDs only upon performing the transaction. 2. The method of claim 1, wherein comparing the transaction ID to the set of transaction IDs further comprises searching the set for a latest transaction ID associated with the first element. 3. The method of claim 2, further comprising performing the transaction upon determining that the transaction ID is more recent than the latest transaction ID associated with the first element in the set of transaction IDs. 4. The method of claim 1, further comprising reporting to the first element that the transaction was successfully performed after performing or not performing the transaction. 5. The method of claim 1, wherein said comparing and said performing result in the transactions being idempotent. 6. The method of claim 1, wherein the latest transaction ID includes a maximum transaction ID. 7. The method of claim 1, wherein the second element maintains only a redo log. 8. The method of claim 1, wherein the second element does not maintain an undo log. 9. One or more computer-readable storage media including computer-executable instructions that, when executed, cause at least one processor to process transactions among elements of a distributed system using a redo-only write-ahead log, by:
receiving a transaction, a transaction identifier (ID) and a first element ID by a second element; comparing the received transaction ID to a set of transaction IDs, the set of transaction IDs representing transactions previously performed by the second element; performing the transaction or not performing the transaction based on the comparison; and updating the set of transaction IDs only upon performing the transaction. 10. The computer storage media of claim 9, wherein the computer-executable instructions further cause the processor to update the set of transaction IDs with the new transaction ID upon performing the transaction. 11. The computer storage media of claim 9, wherein the computer-executable instructions further cause the processor to report, to the first element, that the transaction was successful. 12. The computer storage media of claim 9, wherein the computer-executable instructions further cause the processor to replay all transactions in an idempotent manner. 13. The computer storage media of claim 9, wherein the computer-executable instructions cause the processor to execute the transaction only if the transaction ID is greater than a maximum transaction ID, found in the table of transaction IDs, associated with the first element. 14. A system for coordinating one or more transactions for a plurality of elements of a distributed system, said system comprising:
a distributed system associated with the plurality of elements, said elements sharing access to a storage area; and a processor of a first element of the plurality of elements, the first element programmed to:
assign a transaction identifier (ID) to a transaction;
transmit the transaction including a first element ID and the transaction ID to a second element; and
a processor of the second element of the plurality of elements, the second element programmed to:
receive the transaction from the first element;
compare the received transaction ID to a set of stored transaction IDs;
perform or do not perform the transaction based on the comparison;
report success to the first element; and
update the set of transaction IDs only upon performing the transaction. 15. The system of claim 14, wherein the elements are capable of processing distributed transactions, and wherein the elements comprises at least one of nodes, objects, virtual machines (VMs), or computing devices. 16. The system of claim 14, wherein the set of stored transaction IDs further includes the IDs of the plurality of elements, and the transactions transmitted by the elements. 17. The system of claim 14, wherein updating the set of transaction IDs further comprises incrementing the stored transaction ID associated with the first element ID. 18. The system of claim 14, wherein the elements are tightly-coupled to a shared storage system. 19. The system of claim 14, wherein comparing the transaction ID to the set of transaction IDs further comprises accessing a latest transaction ID associated with the ID of the first element and performing the transaction only if the transaction ID is more recent than the accessed latest transaction ID. 20. The system of claim 19, further comprising updating the table of transaction IDs with the transaction ID of the executed transaction. | Examples perform transactions across a distributed system of elements, such as nodes, computing devices, objects, and virtual machines. The elements of the distributed system maintain data (e.g, tables) which include information on transactions previously received and the source of the transactions. A first element of the distributed system transmits a transaction, the identifier (ID) of the first element, and a transaction ID to a plurality of second elements. The second elements compare the transaction ID to the maximum transaction ID associated with the first element and stored in the tables to determine whether the transaction is the most recent and should be performed, or whether the transaction has already been performed and should not be re-performed. In this manner, undo logs are not needed.1. 
A method for processing transactions among elements of a distributed system using a redo-only write-ahead log, said method comprising:
receiving a transaction, a transaction identifier (ID) and a first element ID by a second element; comparing the received transaction ID to a set of transaction IDs maintained by the second element, the set of transaction IDs representing transactions previously performed by the second element; writing the received transaction to a redo-log if the transaction is more recent than the previously performed transactions; performing or not performing the transaction based on the comparison; and updating the set of transaction IDs only upon performing the transaction. 2. The method of claim 1, wherein comparing the transaction ID to the set of transaction IDs further comprises searching the set for a latest transaction ID associated with the first element. 3. The method of claim 2, further comprising performing the transaction upon determining that the transaction ID is more recent than the latest transaction ID associated with the first element in the set of transaction IDs. 4. The method of claim 1, further comprising reporting to the first element that the transaction was successfully performed after performing or not performing the transaction. 5. The method of claim 1, wherein said comparing and said performing result in the transactions being idempotent. 6. The method of claim 1, wherein the latest transaction ID includes a maximum transaction ID. 7. The method of claim 1, wherein the second element maintains only a redo log. 8. The method of claim 1, wherein the second element does not maintain an undo log. 9. One or more computer-readable storage media including computer-executable instructions that, when executed, cause at least one processor to process transactions among elements of a distributed system using a redo-only write-ahead log, by:
receiving a transaction, a transaction identifier (ID) and a first element ID by a second element; comparing the received transaction ID to a set of transaction IDs, the set of transaction IDs representing transactions previously performed by the second element; performing the transaction or not performing the transaction based on the comparison; and updating the set of transaction IDs only upon performing the transaction. 10. The computer storage media of claim 9, wherein the computer-executable instructions further cause the processor to update the set of transaction IDs with the new transaction ID upon performing the transaction. 11. The computer storage media of claim 9, wherein the computer-executable instructions further cause the processor to report, to the first element, that the transaction was successful. 12. The computer storage media of claim 9, wherein the computer-executable instructions further cause the processor to replay all transactions in an idempotent manner. 13. The computer storage media of claim 9, wherein the computer-executable instructions cause the processor to execute the transaction only if the transaction ID is greater than a maximum transaction ID, found in the table of transaction IDs, associated with the first element. 14. A system for coordinating one or more transactions for a plurality of elements of a distributed system, said system comprising:
a distributed system associated with the plurality of elements, said elements sharing access to a storage area; and a processor of a first element of the plurality of elements, the first element programmed to:
assign a transaction identifier (ID) to a transaction;
transmit the transaction including a first element ID and the transaction ID to a second element; and
a processor of the second element of the plurality of elements, the second element programmed to:
receive the transaction from the first element;
compare the received transaction ID to a set of stored transaction IDs;
perform or do not perform the transaction based on the comparison;
report success to the first element; and
update the set of transaction IDs only upon performing the transaction. 15. The system of claim 14, wherein the elements are capable of processing distributed transactions, and wherein the elements comprises at least one of nodes, objects, virtual machines (VMs), or computing devices. 16. The system of claim 14, wherein the set of stored transaction IDs further includes the IDs of the plurality of elements, and the transactions transmitted by the elements. 17. The system of claim 14, wherein updating the set of transaction IDs further comprises incrementing the stored transaction ID associated with the first element ID. 18. The system of claim 14, wherein the elements are tightly-coupled to a shared storage system. 19. The system of claim 14, wherein comparing the transaction ID to the set of transaction IDs further comprises accessing a latest transaction ID associated with the ID of the first element and performing the transaction only if the transaction ID is more recent than the accessed latest transaction ID. 20. The system of claim 19, further comprising updating the table of transaction IDs with the transaction ID of the executed transaction. | 2,100 |
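The transaction claims above describe a redo-only write-ahead scheme: each receiving element tracks, per sender, the highest transaction ID it has applied, performs a transaction only if its ID is newer, and updates that record only after performing, so replays are idempotent and no undo log is needed. A minimal sketch under those assumptions (class and method names are hypothetical):

```python
class Element:
    """Sketch of the claimed redo-only scheme: per-sender max transaction
    IDs make replayed transactions no-ops instead of requiring undo."""

    def __init__(self):
        self.max_txn = {}     # sender element ID -> latest applied txn ID
        self.redo_log = []    # redo-only write-ahead log
        self.state = []       # effect of applied operations

    def receive(self, sender_id, txn_id, operation):
        # compare against the latest transaction ID seen from this sender
        if txn_id <= self.max_txn.get(sender_id, 0):
            return "already-applied"          # do not re-perform
        self.redo_log.append((sender_id, txn_id, operation))  # write ahead
        self.state.append(operation)          # perform the transaction
        self.max_txn[sender_id] = txn_id      # update only after performing
        return "already-applied" if False else "applied"
```

Replaying the same `(sender_id, txn_id)` pair any number of times leaves `state` unchanged, which is the idempotency the claims rely on.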
5,978 | 5,978 | 14,681,307 | 2,116 | A manufacturing plant with an MES system is controlled through the execution of a given workflow. a) A plant designer application models a representation of the manufacturing plant through a set of equipment objects and through a workflow, b) a complex entity (plugin) is provided for expanding the characteristics of an equipment object; the plugin exposing an interface with a configuration, a set of property elements, a set of functionality elements; c) at engineering time, designing a set of plugins usable by the set of equipment objects; d) at engineering time, for at least one equipment object, associating at least one plugin; e) defining, through the plant designer, a given workflow according to given customer requirements, the workflow including an interaction with an element of a plugin associated with an equipment object; f) at runtime, executing the given workflow and performing the interaction with the element of the plugin. | 1. A process for controlling a manufacturing plant with an MES (manufacturing execution system) system, through the execution of a given workflow meeting given customer requirements, the process comprising:
providing an application for modeling a representation of the manufacturing plant through a set of equipment objects and through at least one workflow, the application being a plant designer and an equipment object being a collector of information; providing a complex entity for expanding the characteristics of an equipment object, the complex entity being a plugin and the plugin exposing an interface including a configuration, a set of property elements, and a set of functionality elements; the following steps to be performed at engineering time:
designing a set of plugins usable by the set of equipment objects;
for at least one equipment object, associating at least one plugin;
through the plant designer, defining a given workflow according to the given customer requirements, the given workflow including at least one interaction with at least one element of at least one plugin associated with an equipment object; and
the following step to be performed at runtime:
executing the given workflow and performing the at least one interaction with the at least one element of the at least one plugin of the associated equipment object. 2. The process according to claim 1, wherein an equipment object is a record of a database table. 3. The process according to claim 1, wherein a plugin interface comprises an event element. 4. The process according to claim 1, wherein, when a plugin element is a functionality, the interaction is a call to the functionality. 5. The process according to claim 1, wherein, when a plugin element is a property, the interaction is an action of assessing the property. 6. The process according to claim 1, wherein, when a plugin element is an event, the interaction is an action of firing the event. 7. A system for controlling a manufacturing plant with an MES system, through the execution of a given workflow meeting given customer requirements, the system comprising:
means for providing an application for modeling a representation of the manufacturing plant through a set of equipment objects and through at least one workflow, said application being a plant designer;
an equipment object being a collector of information;
means for providing a complex entity for expanding a characteristic of an equipment object, the complex entity being a plugin;
a plugin exposing an interface including a configuration, a set of property elements, and a set of functionality elements;
the system further comprising the following means to be used at engineering time:
means for designing a set of plugins usable by the set of equipment objects;
for at least one equipment object, means for associating at least one plugin;
through the plant designer, means for defining a given workflow according to given customer requirements, the given workflow including at least one interaction with at least one element of at least one plugin associated with an equipment object; and
the system further comprising the following means to be used at runtime:
means for executing the given workflow and performing the at least one interaction with the at least one element of the at least one plugin of the associated equipment object. | A manufacturing plant with an MES system is controlled through the execution of a given workflow. a) A plant designer application models a representation of the manufacturing plant through a set of equipment objects and through a workflow, b) a complex entity (plugin) is provided for expanding the characteristics of an equipment object; the plugin exposing an interface with a configuration, a set of property elements, a set of functionality elements; c) at engineering time, designing a set of plugins usable by the set of equipment objects; d) at engineering time, for at least one equipment object, associating at least one plugin; e) defining, through the plant designer, a given workflow according to given customer requirements, the workflow including an interaction with an element of a plugin associated with an equipment object; f) at runtime, executing the given workflow and performing the interaction with the element of the plugin.1. A process for controlling a manufacturing plant with an MES (manufacturing execution system) system, through the execution of a given workflow meeting given customer requirements, the process comprising:
providing an application for modeling a representation of the manufacturing plant through a set of equipment objects and through at least one workflow, the application being a plant designer and an equipment object being a collector of information; providing a complex entity for expanding the characteristics of an equipment object, the complex entity being a plugin and the plugin exposing an interface including a configuration, a set of property elements, and a set of functionality elements; the following steps to be performed at engineering time:
designing a set of plugins usable by the set of equipment objects;
for at least one equipment object, associating at least one plugin;
through the plant designer, defining a given workflow according to the given customer requirements, the given workflow including at least one interaction with at least one element of at least one plugin associated with an equipment object; and
the following step to be performed at runtime:
executing the given workflow and performing the at least one interaction with the at least one element of the at least one plugin of the associated equipment object. 2. The process according to claim 1, wherein an equipment object is a record of a database table. 3. The process according to claim 1, wherein a plugin interface comprises an event element. 4. The process according to claim 1, wherein, when a plugin element is a functionality, the interaction is a call to the functionality. 5. The process according to claim 1, wherein, when a plugin element is a property, the interaction is an action of assessing the property. 6. The process according to claim 1, wherein, when a plugin element is an event, the interaction is an action of firing the event. 7. A system for controlling a manufacturing plant with an MES system, through the execution of a given workflow meeting given customer requirements, the system comprising:
means for providing an application for modeling a representation of the manufacturing plant through a set of equipment objects and through at least one workflow, said application being a plant designer;
an equipment object being a collector of information;
means for providing a complex entity for expanding a characteristic of an equipment object, the complex entity being a plugin;
a plugin exposing an interface including a configuration, a set of property elements, and a set of functionality elements;
the system further comprising the following means to be used at engineering time:
means for designing a set of plugins usable by the set of equipment objects;
for at least one equipment object, means for associating at least one plugin;
through the plant designer, means for defining a given workflow according to given customer requirements, the given workflow including at least one interaction with at least one element of at least one plugin associated with an equipment object; and
the system further comprising the following means to be used at runtime:
means for executing the given workflow and performing the at least one interaction with the at least one element of the at least one plugin of the associated equipment object. | 2,100 |
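The claims above describe a plugin model for equipment objects: a plugin exposes a configuration, property elements, and functionality elements; workflows defined at engineering time interact with those elements at runtime. A minimal sketch of that structure is below — all class and function names are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch (names invented for this example) of the claimed model:
# a plugin exposes a configuration, property elements, and functionality
# elements; an equipment object collects information and aggregates plugins;
# a workflow defined at engineering time interacts with plugin elements at
# runtime.

class Plugin:
    def __init__(self, configuration, properties, functionalities):
        self.configuration = configuration      # static settings
        self.properties = properties            # name -> value (property elements)
        self.functionalities = functionalities  # name -> callable (functionality elements)

    def get_property(self, name):               # interaction: assess a property
        return self.properties[name]

    def call(self, name, *args):                # interaction: call a functionality
        return self.functionalities[name](*args)

class EquipmentObject:
    """Collector of information; its characteristics are expanded by plugins."""
    def __init__(self, name):
        self.name = name
        self.plugins = {}

    def associate(self, plugin_name, plugin):   # performed at engineering time
        self.plugins[plugin_name] = plugin

def run_workflow(workflow, equipment):          # performed at runtime
    """Execute interactions given as (equipment, plugin, kind, element) tuples."""
    results = []
    for eq_name, plug_name, kind, element in workflow:
        plug = equipment[eq_name].plugins[plug_name]
        if kind == "property":
            results.append(plug.get_property(element))
        elif kind == "functionality":
            results.append(plug.call(element))
    return results

oven = EquipmentObject("oven-1")
oven.associate("thermal", Plugin(
    configuration={"units": "C"},
    properties={"setpoint": 180},
    functionalities={"start": lambda: "heating"},
))
plant = {"oven-1": oven}
workflow = [("oven-1", "thermal", "property", "setpoint"),
            ("oven-1", "thermal", "functionality", "start")]
print(run_workflow(workflow, plant))  # [180, 'heating']
```

The separation mirrors the claims: plugin design and association happen before the workflow runs, and the runtime step only executes pre-defined interactions against associated plugin elements.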
5,979 | 5,979 | 14,729,554 | 2,126 | A system comprising: a processor; a data bus coupled to the processor; and a non-transitory, computer-readable storage medium embodying computer program code, the non-transitory, computer-readable storage medium being coupled to the data bus. The computer program code interacting with a plurality of computer operations and comprising instructions executable by the processor and configured for: receiving streams of data from a plurality of data sources; processing the streams of data from the plurality of data sources, the processing the streams of data from the plurality of data sources performing data enriching to provide data enriched data streams; and, storing the data streams and the data enriched data streams within the universal knowledge repository as a collection of knowledge elements. | 1. A system comprising:
a processor; a data bus coupled to the processor; and a non-transitory, computer-readable storage medium embodying computer program code, the non-transitory, computer-readable storage medium being coupled to the data bus, the computer program code interacting with a plurality of computer operations and comprising instructions executable by the processor and configured for:
receiving streams of data from a plurality of data sources;
processing the streams of data from the plurality of data sources, the processing the streams of data from the plurality of data sources performing data enriching to provide data enriched data streams; and,
storing the data streams and the data enriched data streams within the universal knowledge repository as a collection of knowledge elements. 2. The system of claim 1, wherein the instructions executable by the processor further comprise instructions for:
generating a cognitive insight based upon the collection of knowledge elements stored within the universal knowledge repository. 3. The system of claim 1, wherein:
the data enriching comprises identifying at least some knowledge elements within the collection of knowledge elements as at least one of facts, opinions, descriptions, and skills. 4. The system of claim 1, wherein:
the data enriching comprises identifying at least some knowledge elements within the collection of knowledge elements as at least one of statements, assertions, beliefs, perceptions, preferences, sentiments, attitudes and opinions and associating identified at least some knowledge elements with an entity responsible for generating the at least one of statements, assertions, beliefs, perceptions, preferences, sentiments, attitudes and opinions. 5. The system of claim 1, wherein:
the universal knowledge repository comprises a private knowledge repository; and, at least some of the plurality of data streams are private data streams. 6. The system of claim 1, wherein:
the universal knowledge repository comprises a universal cognitive graph. 7. A non-transitory, computer-readable storage medium embodying computer program code, the computer program code comprising computer executable instructions configured for:
receiving streams of data from a plurality of data sources; processing the streams of data from the plurality of data sources, the processing the streams of data from the plurality of data sources performing data enriching to provide data enriched data streams; and, storing the data streams and the data enriched data streams within the universal knowledge repository as a collection of knowledge elements. 8. The non-transitory, computer-readable storage medium of claim 7, wherein the instructions executable by the processor further comprise instructions for: generating a cognitive insight based upon the collection of knowledge elements stored within the universal knowledge repository. 9. The non-transitory, computer-readable storage medium of claim 7, wherein:
the data enriching comprises identifying at least some knowledge elements within the collection of knowledge elements as at least one of facts, opinions, descriptions, and skills. 10. The non-transitory, computer-readable storage medium of claim 7, wherein:
the data enriching comprises identifying at least some knowledge elements within the collection of knowledge elements as at least one of statements, assertions, beliefs, perceptions, preferences, sentiments, attitudes and opinions and associating identified at least some knowledge elements with an entity responsible for generating the at least one of statements, assertions, beliefs, perceptions, preferences, sentiments, attitudes and opinions. 11. The non-transitory, computer-readable storage medium of claim 7, wherein:
the universal knowledge repository comprises a private knowledge repository; and, at least some of the plurality of data streams are private data streams. 12. The non-transitory, computer-readable storage medium of claim 7, wherein the universal knowledge repository comprises a universal cognitive graph. 13. The non-transitory, computer-readable storage medium of claim 7, wherein the computer executable instructions are deployable to a client system from a server system at a remote location. 14. The non-transitory, computer-readable storage medium of claim 7, wherein the computer executable instructions are provided by a service provider to a user on an on-demand basis. | 2,100 |
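The enrichment step claimed above — tagging incoming knowledge elements as facts, opinions, and so on, and associating them with the entity that produced them — can be sketched as follows. The keyword-based classifier and all identifiers are illustrative stand-ins; the patent does not specify how enrichment is implemented.

```python
# Hypothetical sketch of the claimed enrichment step: incoming stream items
# are tagged as "fact" or "opinion" (here by a toy keyword rule), associated
# with the entity responsible for generating them, and stored in a single
# repository of knowledge elements. The classification rule is illustrative.

OPINION_MARKERS = ("i think", "i believe", "in my opinion")

def enrich(item):
    text = item["text"].lower()
    kind = "opinion" if any(m in text for m in OPINION_MARKERS) else "fact"
    return {**item, "kind": kind, "entity": item.get("source", "unknown")}

def ingest(streams, repository):
    for stream in streams:            # streams from a plurality of data sources
        for item in stream:
            repository.append(enrich(item))  # store enriched knowledge element
    return repository

repo = ingest([[{"text": "Water boils at 100 C", "source": "sensor-7"}],
               [{"text": "I think it is too warm", "source": "user-3"}]], [])
print([(e["kind"], e["entity"]) for e in repo])
# [('fact', 'sensor-7'), ('opinion', 'user-3')]
```

Both the raw item and its enrichment are kept in each stored element, matching the claim that the data streams and the enriched streams are stored together as knowledge elements.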
5,980 | 5,980 | 14,729,559 | 2,126 | A method for managing a universal knowledge repository comprising: receiving streams of data from a plurality of data sources; processing the streams of data from the plurality of data sources, the processing the streams of data from the plurality of data sources performing data enriching to provide data enriched data streams; and, storing the data streams and the data enriched data streams within the universal knowledge repository as a collection of knowledge elements. | 1. A computer-implementable method for managing a universal knowledge repository comprising:
receiving streams of data from a plurality of data sources; processing the streams of data from the plurality of data sources, the processing the streams of data from the plurality of data sources performing data enriching to provide data enriched data streams; and, storing the data streams and the data enriched data streams within the universal knowledge repository as a collection of knowledge elements. 2. The method of claim 1, further comprising:
generating a cognitive insight based upon the collection of knowledge elements stored within the universal knowledge repository. 3. The method of claim 1, wherein:
the data enriching comprises identifying at least some knowledge elements within the collection of knowledge elements as at least one of facts, opinions, descriptions, and skills. 4. The method of claim 1, wherein:
the data enriching comprises identifying at least some knowledge elements within the collection of knowledge elements as at least one of statements, assertions, beliefs, perceptions, preferences, sentiments, attitudes and opinions and associating identified at least some knowledge elements with an entity responsible for generating the at least one of statements, assertions, beliefs, perceptions, preferences, sentiments, attitudes and opinions. 5. The method of claim 1, wherein:
the universal knowledge repository comprises a private knowledge repository; and, at least some of the plurality of data streams are private data streams. 6. The method of claim 1, wherein:
the universal knowledge repository comprises a universal cognitive graph. | 2,100 |
5,981 | 5,981 | 15,279,336 | 2,117 | A building management system includes connected equipment and a predictive diagnostics system. The connected equipment is configured to measure a plurality of monitored variables. The predictive diagnostics system includes a communications interface, a principal component analysis (PCA) modeler, a controller. The communications interface is configured to receive samples of the monitored variables from the connected equipment. The PCA modeler is configured to automatically assign each of the samples of the monitored variables to one of a plurality of operating states of the connected equipment and to construct a PCA model for each operating state using the samples assigned to the operating state. The controller is configured to use the PCA models to adjust an operation of the connected equipment. | 1. A building management system comprising:
connected equipment configured to measure a plurality of monitored variables; and a predictive diagnostics system comprising:
a communications interface configured to receive samples of the monitored variables from the connected equipment;
a principal component analysis (PCA) modeler configured to automatically assign each of the samples of the monitored variables to one of a plurality of operating states of the connected equipment and to construct a PCA model for each operating state using the samples assigned to the operating state; and
a controller configured to use the PCA models to adjust an operation of the connected equipment. 2. The building management system of claim 1, wherein the predictive diagnostics system further comprises a sample indexer configured to generate a fault detection index for each of the samples;
wherein the PCA modeler is configured to compare the fault detection index to a control limit and determine that the connected equipment is switching between the operating states in response to the fault detection index exceeding the control limit. 3. The building management system of claim 2, wherein the PCA modeler is configured to:
determine whether multiple consecutive values of the fault detection index exceed the control limit; and determine that the connected equipment is switching between the operating states in response to a determination that the multiple consecutive values of the fault detection index exceed the control limit. 4. The building management system of claim 1, wherein the PCA modeler is configured to:
recursively update a variance of the samples each time a new sample is received; and determine whether the connected equipment is switching between the operating states based on the variance of the samples. 5. The building management system of claim 4, wherein the PCA modeler is configured to:
identify a new value of the variance and one or more previous values of the variance; calculate a filtered variance using the new value of the variance and the one or more previous values of the variance; and determine whether the connected equipment is switching between the operating states based on the filtered variance. 6. The building management system of claim 5, wherein the PCA modeler is configured to:
calculate the filtered variance by averaging the new value of the variance with the one or more previous values of the variance; recursively update the filtered variance each time a new sample is received. 7. The building management system of claim 4, wherein the PCA modeler is configured to:
calculate a variance slope based on multiple consecutive values of the variance; determine whether the variance slope exceeds a threshold value; and determine that the connected equipment is switching between the operating states in response to a determination that the variance slope exceeds the threshold value. 8. The building management system of claim 7, wherein the PCA modeler is configured to:
recursively update the variance slope each time a new sample is received; determine whether multiple consecutive values of the variance slope are less than the threshold value; and determine that the connected equipment has reached a new operating state in response to a determination that the multiple consecutive values of the variance slope are less than the threshold value. 9. The building management system of claim 4, wherein the PCA modeler is configured to:
determine whether the connected equipment has reached a new operating state based on the variance of the samples; generate a new PCA model for the new operating state in response to a determination that the connected equipment has reached the new operating state; and store the new PCA model in a state library. 10. The building management system of claim 9, wherein the PCA modeler is configured to:
determine whether the new PCA model overlaps with an existing PCA model stored in the state library; and in response to a determination that the new PCA model overlaps the existing PCA model:
create a merged PCA model by merging the new PCA model with the existing PCA model; and
replace the existing PCA model with the merged PCA model in the state library. 11. A method for monitoring and controlling connected equipment in a building management system, the method comprising:
measuring a plurality of monitored variables at the connected equipment; receiving samples of the monitored variables at a predictive diagnostics system; automatically assigning each of the samples of the monitored variables to one of a plurality of operating states of the connected equipment; constructing a PCA model for each operating state using the samples assigned to the operating state; and using the PCA models to adjust an operation of the connected equipment. 12. The method of claim 11, further comprising:
generating a fault detection index for each of the samples; comparing the fault detection index to a control limit; and determining that the connected equipment is switching between the operating states in response to the fault detection index exceeding the control limit. 13. The method of claim 12, further comprising:
determining whether multiple consecutive values of the fault detection index exceed the control limit; and determining that the connected equipment is switching between the operating states in response to a determination that the multiple consecutive values of the fault detection index exceed the control limit. 14. The method of claim 11, further comprising:
recursively updating a variance of the samples each time a new sample is received; and determining whether the connected equipment is switching between the operating states based on the variance of the samples. 15. The method of claim 14, further comprising:
identifying a new value of the variance and one or more previous values of the variance; calculating a filtered variance using the new value of the variance and the one or more previous values of the variance; and determining whether the connected equipment is switching between the operating states based on the filtered variance. 16. The method of claim 15, further comprising:
calculating the filtered variance by averaging the new value of the variance with the one or more previous values of the variance; recursively updating the filtered variance each time a new sample is received. 17. The method of claim 14, further comprising:
calculating a variance slope based on multiple consecutive values of the variance; determining whether the variance slope exceeds a threshold value; and determining that the connected equipment is switching between the operating states in response to a determination that the variance slope exceeds the threshold value. 18. The method of claim 17, further comprising:
recursively updating the variance slope each time a new sample is received; determining whether multiple consecutive values of the variance slope are less than the threshold value; and determining that the connected equipment has reached a new operating state in response to a determination that the multiple consecutive values of the variance slope are less than the threshold value. 19. A heating, ventilation, or air conditioning (HVAC) device comprising:
sensors configured to measure a plurality of monitored variables; and a predictive diagnostics system configured to receive samples of the monitored variables from the sensors, the predictive diagnostics system comprising a principal component analysis (PCA) modeler configured to automatically assign each of the samples of the monitored variables to one of a plurality of operating states of the HVAC device and to construct a PCA model for each operating state using the samples assigned to the operating state; and a controller configured to use the PCA models to adjust an operation of the HVAC device. 20. The HVAC device of claim 19, wherein the PCA modeler is configured to:
recursively update a variance of the samples each time a new sample is received; and determine whether the HVAC device is switching between the operating states based on the variance of the samples. | A building management system includes connected equipment and a predictive diagnostics system. The connected equipment is configured to measure a plurality of monitored variables. The predictive diagnostics system includes a communications interface, a principal component analysis (PCA) modeler, a controller. The communications interface is configured to receive samples of the monitored variables from the connected equipment. The PCA modeler is configured to automatically assign each of the samples of the monitored variables to one of a plurality of operating states of the connected equipment and to construct a PCA model for each operating state using the samples assigned to the operating state. The controller is configured to use the PCA models to adjust an operation of the connected equipment.1. A building management system comprising:
connected equipment configured to measure a plurality of monitored variables; and a predictive diagnostics system comprising:
a communications interface configured to receive samples of the monitored variables from the connected equipment;
a principal component analysis (PCA) modeler configured to automatically assign each of the samples of the monitored variables to one of a plurality of operating states of the connected equipment and to construct a PCA model for each operating state using the samples assigned to the operating state; and
a controller configured to use the PCA models to adjust an operation of the connected equipment. 2. The building management system of claim 1, wherein the predictive diagnostics system further comprises a sample indexer configured to generate a fault detection index for each of the samples;
wherein the PCA modeler is configured to compare the fault detection index to a control limit and determine that the connected equipment is switching between the operating states in response to the fault detection index exceeding the control limit. 3. The building management system of claim 2, wherein the PCA modeler is configured to:
determine whether multiple consecutive values of the fault detection index exceed the control limit; and determine that the connected equipment is switching between the operating states in response to a determination that the multiple consecutive values of the fault detection index exceed the control limit. 4. The building management system of claim 1, wherein the PCA modeler is configured to:
recursively update a variance of the samples each time a new sample is received; and determine whether the connected equipment is switching between the operating states based on the variance of the samples. 5. The building management system of claim 4, wherein the PCA modeler is configured to:
identify a new value of the variance and one or more previous values of the variance; calculate a filtered variance using the new value of the variance and the one or more previous values of the variance; and determine whether the connected equipment is switching between the operating states based on the filtered variance. 6. The building management system of claim 5, wherein the PCA modeler is configured to:
calculate the filtered variance by averaging the new value of the variance with the one or more previous values of the variance; recursively update the filtered variance each time a new sample is received. 7. The building management system of claim 4, wherein the PCA modeler is configured to:
calculate a variance slope based on multiple consecutive values of the variance; determine whether the variance slope exceeds a threshold value; and determine that the connected equipment is switching between the operating states in response to a determination that the variance slope exceeds the threshold value. 8. The building management system of claim 7, wherein the PCA modeler is configured to:
recursively update the variance slope each time a new sample is received; determine whether multiple consecutive values of the variance slope are less than the threshold value; and determine that the connected equipment has reached a new operating state in response to a determination that the multiple consecutive values of the variance slope are less than the threshold value. 9. The building management system of claim 4, wherein the PCA modeler is configured to:
determine whether the connected equipment has reached a new operating state based on the variance of the samples; generate a new PCA model for the new operating state in response to a determination that the connected equipment has reached the new operating state; and store the new PCA model in a state library. 10. The building management system of claim 9, wherein the PCA modeler is configured to:
determine whether the new PCA model overlaps with an existing PCA model stored in the state library; and in response to a determination that the new PCA model overlaps the existing PCA model:
create a merged PCA model by merging the new PCA model with the existing PCA model; and
replace the existing PCA model with the merged PCA model in the state library. 11. A method for monitoring and controlling connected equipment in a building management system, the method comprising:
measuring a plurality of monitored variables at the connected equipment; receiving samples of the monitored variables at a predictive diagnostics system; automatically assigning each of the samples of the monitored variables to one of a plurality of operating states of the connected equipment; constructing a PCA model for each operating state using the samples assigned to the operating state; and using the PCA models to adjust an operation of the connected equipment. 12. The method of claim 11, further comprising:
generating a fault detection index for each of the samples; comparing the fault detection index to a control limit; and determining that the connected equipment is switching between the operating states in response to the fault detection index exceeding the control limit. 13. The method of claim 12, further comprising:
determining whether multiple consecutive values of the fault detection index exceed the control limit; and determining that the connected equipment is switching between the operating states in response to a determination that the multiple consecutive values of the fault detection index exceed the control limit. 14. The method of claim 11, further comprising:
recursively updating a variance of the samples each time a new sample is received; and determining whether the connected equipment is switching between the operating states based on the variance of the samples. 15. The method of claim 14, further comprising:
identifying a new value of the variance and one or more previous values of the variance; calculating a filtered variance using the new value of the variance and the one or more previous values of the variance; and determining whether the connected equipment is switching between the operating states based on the filtered variance. 16. The method of claim 15, further comprising:
calculating the filtered variance by averaging the new value of the variance with the one or more previous values of the variance; recursively updating the filtered variance each time a new sample is received. 17. The method of claim 14, further comprising:
calculating a variance slope based on multiple consecutive values of the variance; determining whether the variance slope exceeds a threshold value; and determining that the connected equipment is switching between the operating states in response to a determination that the variance slope exceeds the threshold value. 18. The method of claim 17, further comprising:
recursively updating the variance slope each time a new sample is received; determining whether multiple consecutive values of the variance slope are less than the threshold value; and determining that the connected equipment has reached a new operating state in response to a determination that the multiple consecutive values of the variance slope are less than the threshold value. 19. A heating, ventilation, or air conditioning (HVAC) device comprising:
sensors configured to measure a plurality of monitored variables; and a predictive diagnostics system configured to receive samples of the monitored variables from the sensors, the predictive diagnostics system comprising a principal component analysis (PCA) modeler configured to automatically assign each of the samples of the monitored variables to one of a plurality of operating states of the HVAC device and to construct a PCA model for each operating state using the samples assigned to the operating state; and a controller configured to use the PCA models to adjust an operation of the HVAC device. 20. The HVAC device of claim 19, wherein the PCA modeler is configured to:
recursively update a variance of the samples each time a new sample is received; and determine whether the HVAC device is switching between the operating states based on the variance of the samples. | 2,100 |
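The recursive variance update, filtered variance, and variance-slope test recited in claims 14-18 of this record can be sketched as follows. This is an illustrative sketch only: the class name, the Welford update, the window sizes, and the thresholds are assumptions for demonstration, not details taken from the patent.

```python
from collections import deque

class StateSwitchDetector:
    """Illustrative sketch of claims 14-18: recursively track sample variance,
    filter it by averaging recent values, and flag operating-state switching
    when several consecutive variance slopes exceed a threshold.
    All names and defaults are hypothetical, not from the patent."""

    def __init__(self, window=5, slope_threshold=0.5, consecutive=3):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's method)
        self.recent_vars = deque(maxlen=window)
        self.recent_slopes = deque(maxlen=consecutive)
        self.slope_threshold = slope_threshold
        self.consecutive = consecutive

    def update(self, sample):
        # Recursively update the variance each time a new sample is received.
        self.n += 1
        delta = sample - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (sample - self.mean)
        variance = self.m2 / (self.n - 1) if self.n > 1 else 0.0

        # Filtered variance: average the new value with previous values.
        self.recent_vars.append(variance)
        filtered = sum(self.recent_vars) / len(self.recent_vars)

        # Variance slope from consecutive variance values.
        slope = 0.0
        if len(self.recent_vars) >= 2:
            slope = self.recent_vars[-1] - self.recent_vars[-2]
        self.recent_slopes.append(abs(slope))

        # Switching is flagged when multiple consecutive slope values
        # all exceed the threshold.
        switching = (len(self.recent_slopes) == self.consecutive and
                     all(s > self.slope_threshold for s in self.recent_slopes))
        return filtered, slope, switching
```

A detector like this would be fed one monitored-variable sample at a time; a symmetric check (consecutive slopes below the threshold) could mark that a new operating state has been reached, as in claim 18.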
5,982 | 5,982 | 15,114,337 | 2,169 | Example embodiments relate to providing real-time monitoring and analysis of query execution. In example embodiments, a query plan is obtained for a database query that is scheduled for execution in a database. A query tree is then generated based on the query plan, where the query tree includes operator nodes that are associated with corresponding operators in the query plan. At this stage, performance metrics are collected from the database during the execution of the database query. Next, the query tree is displayed with the performance metrics, where a related portion of the performance metrics is displayed in each of the operator nodes. | 1. A system for providing real-time monitoring and analysis of query execution, the system comprising:
a processor to:
obtain a query plan for a database query that is scheduled for execution in a database;
generate a query tree based on the query plan, wherein the query tree comprises a plurality of operator nodes each of which is associated with a corresponding operator in the query plan;
collect performance metrics from the database during the execution of the database query;
display the query tree with the performance metrics, wherein a related portion of the performance metrics are displayed in each of the plurality of operator nodes; and
in response to receiving updated performance metrics from the database, update the related portion displayed in each of the plurality of operator nodes based on the updated performance metrics. 2. The system of claim 1, wherein the processor is further to:
display an overview tree of the query plan, wherein the overview tree is used to navigate to a different extent in the display of the query tree. 3. The system of claim 1, wherein the processor is further to:
display a progress indicator that shows the progress of the execution of the database query, wherein each operator node of the plurality of operator nodes is color-coded based on an execution status of the operator associated with the operator node. 4. The system of claim 1, wherein the database is a parallel database comprising a plurality of database computing devices, and wherein the query plan is executed in parallel on the plurality of database computing devices. 5. The system of claim 1, wherein the processor is further to:
identify a plurality of critical paths in the query tree based on the performance metrics; and determine a plurality of tree subsets based on the plurality of critical paths, wherein each of the plurality of tree subsets includes a critical path of the plurality of critical paths and descendent nodes of the critical path in the plurality of operator nodes. 6. The system of claim 5, wherein the processor is further to:
in response to a user selecting a target subset of the plurality of tree subsets, generate a new subquery for the target subset comprising additional syntax that allows operators in the target subset to be executed in the database; and display an isolated query tree for the new subquery. 7. The system of claim 1, wherein the display of the query tree suppresses a low priority node of the plurality of operator nodes based on the related portion of performance metrics. 8. A method for providing real-time monitoring and analysis of query execution, the method comprising:
obtaining a query plan for a database query that is scheduled for execution in a database; generating a query tree based on the query plan, wherein the query tree comprises a plurality of operator nodes each of which is associated with a corresponding operator in the query plan; collecting performance metrics from the database during the execution of the database query; displaying the query tree with the performance metrics, wherein a related portion of the performance metrics are displayed in each of the plurality of operator nodes; and identifying a plurality of critical paths in the query tree based on the performance metrics. 9. The method of claim 8, further comprising:
displaying a progress indicator that shows the progress of the execution of the database query, wherein each operator node of the plurality of operator nodes is color-coded based on an execution status of the operator associated with the operator node. 10. The method of claim 8, further comprising:
determining a plurality of tree subsets based on the plurality of critical paths, wherein each of the plurality of tree subsets includes a critical path of the plurality of critical paths and descendent nodes of the critical path in the plurality of operator nodes. 11. The method of claim 10, further comprising:
in response to a user selecting a target subset of the plurality of tree subsets, generating a new subquery for the target subset comprising additional syntax that allows operators in the target subset to be executed in the database; and displaying an isolated query tree for the new subquery. 12. The method of claim 8, wherein the display of the query tree suppresses a low priority node of the plurality of operator nodes based on the related portion of performance metrics. 13. A non-transitory machine-readable storage medium encoded with instructions executable by a processor for providing real-time monitoring and analysis of query execution, the machine-readable storage medium comprising instructions to:
obtain a query plan for a database query that is scheduled for execution in a database, wherein the database is a parallel database comprising a plurality of database computing devices, and wherein the query plan is executed in parallel on the plurality of database computing devices; generate a query tree based on the query plan, wherein the query tree comprises a plurality of operator nodes each of which is associated with a corresponding operator in the query plan; collect performance metrics from the plurality of database devices during the execution of the database query; display the query tree with the performance metrics, wherein a related portion of the performance metrics are displayed in each of the plurality of operator nodes, and wherein each of the plurality of operator nodes is shown to be associated with at least one of the plurality of database devices; and in response to receiving updated performance metrics from the database, update the related portion displayed in each of the plurality of operator nodes based on the updated performance metrics. 14. The machine-readable storage medium of claim 13, wherein the machine-readable storage medium further comprises instructions to:
identify a plurality of critical paths in the query tree based on the performance metrics; and determine a plurality of tree subsets based on the plurality of critical paths, wherein each of the plurality of tree subsets includes a critical path of the plurality of critical paths and descendent nodes of the critical path in the plurality of operator nodes. 15. The machine-readable storage medium of claim 14, wherein the machine-readable storage medium further comprises instructions to:
in response to a user selecting a target subset of the plurality of tree subsets, generate a new subquery for the target subset comprising additional syntax that allows operators in the target subset to be executed in the database; and display an isolated query tree for the new subquery. | 2,100 |
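The critical-path identification that claims 5, 8, and 14 of this record describe — picking out the most expensive root-to-leaf path in the operator tree from per-node performance metrics — can be sketched as below. The dictionary shapes for `tree` and `metrics` are illustrative assumptions; the patent does not specify data structures.

```python
def critical_path(tree, metrics, root):
    """Illustrative sketch: return the root-to-leaf path in a query operator
    tree with the largest cumulative elapsed time, plus that total time.
    `tree` maps an operator node to its child nodes; `metrics` maps a node
    to its elapsed seconds (hypothetical structures, not from the patent)."""
    children = tree.get(root, [])
    if not children:
        # Leaf operator: the path is just this node.
        return [root], metrics[root]
    # Recurse into each child and keep the costliest subtree path.
    best_path, best_cost = max(
        (critical_path(tree, metrics, child) for child in children),
        key=lambda path_cost: path_cost[1],
    )
    return [root] + best_path, metrics[root] + best_cost
```

Running this over the displayed query tree would yield one critical path per invocation; repeating it after excluding found paths could produce the plurality of tree subsets the claims describe.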
5,983 | 5,983 | 15,195,816 | 2,129 | A method of integrated modeling using multiple subsurface models includes receiving multiple sets of input values associated with a hydrocarbon formation of the Earth. The method also includes receiving a network model that includes one or more assets configured to distribute a flow of hydrocarbons from the hydrocarbon formation to a processing facility. The method further includes generating the multiple subsurface models based on the multiple sets of input values, wherein each subsurface model comprises a set of input values of the multiple sets of input values, and wherein each subsurface model represents a production of the flow of hydrocarbons from the hydrocarbon formation. The method also includes applying the multiple subsurface models to the network model to generate an integrated model comprising multiple production rates of hydrocarbons via the one or more assets over time. The method further includes identifying at least one asset to adjust based on the integrated model. | 1. A method of integrated modeling using a plurality of subsurface models, comprising:
receiving, via a processor, a plurality of sets of input values associated with a hydrocarbon formation of the Earth; receiving, via the processor, a network model comprising one or more assets configured to distribute a flow of hydrocarbons from the hydrocarbon formation to a processing facility; generating, via the processor, the plurality of subsurface models based on the plurality of sets of input values, wherein each subsurface model comprises a set of input values of the plurality of sets of input values, and wherein each subsurface model represents a production of the flow of hydrocarbons from the hydrocarbon formation; applying, via the processor, the plurality of subsurface models to the network model to generate an integrated model comprising a plurality of production rates of hydrocarbons via the one or more assets over time; and identifying, via the processor, at least one of the one or more assets to adjust based on the integrated model. 2. The method of claim 1, wherein the input values of the plurality of sets of input values comprise two or more of gas content, oil content, water content, permeability, porosity, oil-water contact, gas-oil contact, facies heterogeneity, and fault transmissivity. 3. The method of claim 1, wherein the network model corresponds to a single well, a network of wells, a production facility, production equipment, or any combination thereof, that is configured to transfer production of the flow of hydrocarbons from the hydrocarbon formation to a processing facility. 4. The method of claim 1, wherein each subsurface model is associated with:
the hydrocarbon formation comprising a hydrocarbon reservoir; and a hydrocarbon production forecast as a function of time. 5. The method of claim 1, wherein the integrated model is associated with a plurality of reactions of the hydrocarbon formation with respect to the network model. 6. The method of claim 5, wherein each reaction of the plurality of reactions corresponds to a respective subsurface model of the plurality of subsurface models. 7. The method of claim 1, comprising:
analyzing, via the processor, the integrated model; and feeding, via the processor, information based at least in part on analyzing the integrated model back to generate another integrated model. 8. A system, comprising:
a display; one or more sensors configured to provide real-time data regarding production and distribution of extracted hydrocarbons; and a computing system communicatively coupled to the one or more sensors and communicatively coupled to the display, wherein the computing system comprises a processor, wherein the processor is configured to:
receive one or more network models, wherein each network model comprises one or more assets configured to distribute a flow of hydrocarbons from a hydrocarbon formation of the Earth to a processing facility;
receive one or more economic models, wherein each economic model comprises a budget, deadline, revenue projection, cost projection, or any combination thereof;
receive a plurality of sets of input values associated with the hydrocarbon formation;
generate a plurality of subsurface models based on the plurality of sets of input values, wherein each subsurface model corresponds to a set of input values of the plurality of sets of input values, and wherein each subsurface model represents a production of the flow of hydrocarbons from the hydrocarbon formation;
apply the plurality of subsurface models to the one or more network models to generate an integrated model, wherein the integrated model comprises a plurality of scenarios, wherein each scenario comprises a production rate of hydrocarbons via the one or more assets over time;
apply the economic model to the integrated model, wherein the economic model comprises one or more economic parameters, wherein applying the economic model to the integrated model comprises removing at least one scenario of the plurality of scenarios of the integrated model when the at least one scenario does not correspond to the one or more economic parameters; and
output the integrated model to the display, wherein outputting the integrated model comprises displaying the plurality of scenarios of the integrated model, such that the one or more assets and effects of operating the one or more assets are displayed. 9. The system of claim 8, wherein the processor is configured to:
receive information associated with the hydrocarbon formation from the one or more sensors; update at least one set of input values of the plurality of sets of input values based at least in part on receiving the information; and generate the plurality of subsurface models based on the at least one set of input values. 10. The system of claim 8, wherein the processor is configured to:
receive information associated with the economic model; update the economic model based at least in part on the information; and apply the updated economic model to the integrated model. 11. The system of claim 8, wherein the processor is configured to:
receive information associated with the one or more network models; update the one or more network models based at least in part on the information; and apply the plurality of subsurface models to the one or more updated network models to generate the integrated model. 12. The system of claim 8, wherein the input values comprise two or more of gas content, oil content, water content, permeability, porosity, oil-water contacts, gas-oil contacts, facies heterogeneity, and any combination thereof. 13. The system of claim 8, wherein the one or more sensors comprise a pressure sensor, a temperature sensor, a flow sensor, or any combination thereof. 14. A tangible, non-transitory, machine-readable medium, comprising machine-readable instructions to cause a processor to:
receive a plurality of sets of input values associated with a hydrocarbon formation of the Earth; receive a plurality of network models comprising one or more assets configured to distribute a flow of hydrocarbons from the hydrocarbon formation to a processing facility; generate a plurality of subsurface models based on the plurality of sets of input values, wherein each subsurface model corresponds to a set of input values of the plurality of sets of input values, and wherein each subsurface model represents a production of the flow of hydrocarbons from the hydrocarbon formation; apply the plurality of subsurface models to the plurality of network models to generate an integrated model comprising a plurality of scenarios, wherein each scenario comprises a production rate of hydrocarbons via the one or more assets over time; and output the integrated model to a display, wherein outputting the integrated model comprises displaying the plurality of scenarios of the integrated model, such that the one or more assets and effects of operating the one or more assets are displayed. 15. The machine-readable medium of claim 14, comprising machine-readable instructions to cause the processor to generate a second integrated model based on updating the plurality of sets of input values or the plurality of network models. 16. The machine-readable medium of claim 15, wherein generating the second integrated model comprises receiving information from a database related to the one or more assets of the plurality of network models. 17. The machine-readable medium of claim 15, wherein generating the second integrated model comprises adjusting the one or more assets included in the plurality of network models. 18. 
The machine-readable medium of claim 15, wherein generating the second integrated model comprises adjusting at least one set of input values of the plurality of sets of input values based on proximity of the at least one set of input values to a threshold related to the one or more assets included in the plurality of network models. 19. The machine-readable medium of claim 14, comprising machine-readable instructions to cause the processor to receive an economic model comprising a budget, deadline, revenue projection, cost projection, or any combination thereof associated with a production of the hydrocarbons. 20. The machine-readable medium of claim 19, comprising machine-readable instructions to:
receive information related to a budget, deadline, revenue projection, cost projection, or any combination thereof; and adjust the economic model based at least in part on the information.
receive information related to a budget, deadline, revenue projection, cost projection, or any combination thereof; and adjust the economic model based at least in part on the information. | 2,100 |
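The economic-filtering step recited above (removing scenarios of the integrated model that do not correspond to the economic parameters) can be illustrated with a short sketch. This is one possible reading, not the patented implementation: the `Scenario`, `cost`, and `budget` names are hypothetical stand-ins, and the single budget parameter stands in for the claimed "budget, deadline, revenue projection, cost projection, or any combination thereof".

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    # Hypothetical fields standing in for "a production rate of
    # hydrocarbons via the one or more assets over time".
    name: str
    production_rates: List[float]  # e.g. barrels/day per period
    cost: float

@dataclass
class EconomicModel:
    # One illustrative economic parameter: a total budget.
    budget: float

def apply_economic_model(scenarios, economic_model):
    """Remove scenarios that do not correspond to the economic
    parameters (here: scenarios whose cost exceeds the budget)."""
    return [s for s in scenarios if s.cost <= economic_model.budget]

scenarios = [
    Scenario("base", [100.0, 95.0, 90.0], cost=50.0),
    Scenario("infill-drilling", [140.0, 130.0, 120.0], cost=120.0),
]
kept = apply_economic_model(scenarios, EconomicModel(budget=100.0))
```

Under this reading, "applying the economic model" is a pure filter over the scenario set; the surviving scenarios are then what gets output to the display.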
5,984 | 5,984 | 15,602,715 | 2,196 | A computing server includes a hardware platform with hardware resources, with at least a portion of the hardware resources to be allocated as virtualized resources. A hypervisor platform is provided based on execution of code instructions by the hardware platform. A virtual machine operates as an independent guest computing device, with at least a portion of the virtualized resources being allocated by the hypervisor platform to the virtual machine. The hypervisor platform includes a snapshot function to save the state of the virtual machine, a virtual machine activity monitor to monitor activity of the virtual machine, and an activity-based snapshot policy engine to activate the snapshot function based on the monitored activity of the virtual machine exceeding an activity threshold metric. | 1. A computing server comprising:
a hardware platform comprising hardware resources, with at least a portion of said hardware resources to be allocated as virtualized resources; a hypervisor platform being provided based on execution of code instructions by said hardware platform; and at least one virtual machine operating as an independent guest computing device, with at least a portion of the virtualized resources being allocated by said hypervisor platform to said at least one virtual machine; said hypervisor platform comprising:
a snapshot function configured to save a state of said at least one virtual machine,
a virtual machine activity monitor configured to monitor activity of said at least one virtual machine, and
an activity-based snapshot policy engine configured to activate said snapshot function based on the monitored activity of said at least one virtual machine exceeding at least one activity threshold metric. 2. The computing server system according to claim 1 wherein said virtual machine activity monitor is configured to monitor different activities of said at least one virtual machine, and wherein said snapshot function is based on activity of said at least one virtual machine exceeding a combination of different activity threshold metrics corresponding to the different activities being monitored by said virtual machine activity monitor. 3. The computing server system according to claim 2 wherein said hardware platform comprises a user interface coupled to said activity-based snapshot policy engine, and wherein the combination of different activity threshold metrics is selected via said user interface. 4. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtual disk, wherein said virtual machine activity monitor is configured to monitor data being written to the virtual disk, and wherein the at least one activity threshold metric is based on a threshold amount of data written to the virtual disk. 5. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtual disk divided into a plurality of sectors, wherein said virtual machine activity monitor is configured to monitor data being written to the plurality of sectors, and wherein the at least one activity threshold metric is based on a threshold percentage of the plurality of sectors receiving data. 6. 
The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtual memory, wherein said virtual machine activity monitor is configured to monitor data being written to memory pages in the virtual memory, and wherein the at least one activity threshold metric is based on a threshold amount of memory pages being overwritten. 7. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises at least one virtual processor, wherein said virtual machine activity monitor is configured to monitor the at least one virtual processor, and wherein the at least one activity threshold metric is based on a threshold number of instructions being executed by the at least one virtual processor. 8. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtualized network interface, wherein said virtual machine activity monitor is configured to monitor the virtualized network interface, and wherein the at least one activity threshold metric is based on a threshold amount of network traffic sent or received by said at least one virtual machine via the virtualized network interface. 9. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtualized network interface, wherein said virtual machine activity monitor is configured to monitor the virtualized network interface, and wherein the at least one activity threshold metric is based on a threshold number of peers communicated with. 10. A method for operating a computing server comprising a hardware platform comprising hardware resources, with at least a portion of the hardware resources to be allocated as virtualized resources, the method comprising:
providing a hypervisor platform based on execution of code instructions by the hardware platform; providing at least one virtual machine operating as an independent guest computing device, with at least a portion of the virtualized resources being allocated by the hypervisor platform to the at least one virtual machine; providing a snapshot function within the hypervisor platform to save a state of the at least one virtual machine; monitoring activity of the at least one virtual machine; and activating the snapshot function based on the monitored activity of the at least one virtual machine exceeding at least one activity threshold metric. 11. The method according to claim 10 wherein monitoring activity of the at least one virtual machine comprises monitoring different activities of the at least one virtual machine, and activating the snapshot function is based on activity of the at least one virtual machine exceeding a combination of different activity threshold metrics corresponding to the different activities being monitored by the virtual machine activity monitor. 12. The method according to claim 11 wherein the hardware platform comprises a user interface coupled to the activity-based snapshot policy engine, and further comprising selecting the combination of different activity threshold metrics via the user interface. 13. The method according to claim 10 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual disk, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to the virtual disk, and wherein the at least one activity threshold metric is based on a threshold amount of data written to the virtual disk. 14. 
The method according to claim 10 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual disk divided into a plurality of sectors, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to the plurality of sectors, and wherein the at least one activity threshold metric is based on a threshold percentage of the plurality of sectors receiving data. 15. The method according to claim 10 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual memory, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to memory pages in the virtual memory, and wherein the at least one activity threshold metric is based on a threshold amount of memory pages being overwritten. 16. The method according to claim 10 wherein the virtualized resources allocated to the at least one virtual machine comprises at least one virtual processor, wherein monitoring activity of the at least one virtual machine comprises monitoring the at least one virtual processor, and wherein the at least one activity threshold metric is based on a threshold number of instructions being executed by the at least one virtual processor. 17. The method according to claim 10 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtualized network interface, wherein monitoring activity of the at least one virtual machine comprises monitoring the virtualized network interface, and wherein the at least one activity threshold metric is based on a threshold amount of network traffic sent or received by the at least one virtual machine via the virtualized network interface. 18. 
A non-transitory computer readable medium for a computing server comprising a hardware platform comprising hardware resources, with at least a portion of the hardware resources to be allocated as virtualized resources, the non-transitory computer readable medium having a plurality of computer executable instructions for causing the computing server to perform steps comprising:
providing a hypervisor platform based on execution of code instructions by the hardware platform; providing at least one virtual machine operating as an independent guest computing device, with at least a portion of the virtualized resources being allocated by the hypervisor platform to the at least one virtual machine; providing a snapshot function within the hypervisor platform to save a state of the at least one virtual machine; monitoring activity of the at least one virtual machine; and activating the snapshot function based on the monitored activity of the at least one virtual machine exceeding at least one activity threshold metric. 19. The non-transitory computer readable medium according to claim 18 wherein monitoring activity of the at least one virtual machine comprises monitoring different activities of the at least one virtual machine, and activating the snapshot function is based on activity of the at least one virtual machine exceeding a combination of different activity threshold metrics corresponding to the different activities being monitored by the virtual machine activity monitor. 20. The non-transitory computer readable medium according to claim 18 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual disk, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to the virtual disk, and wherein the at least one activity threshold metric is based on a threshold amount of data written to the virtual disk. 21. 
The non-transitory computer readable medium according to claim 18 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual disk divided into a plurality of sectors, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to the plurality of sectors, and wherein the at least one activity threshold metric is based on a threshold percentage of the plurality of sectors receiving data. 22. The non-transitory computer readable medium according to claim 18 wherein the virtualized resources allocated to the at least one virtual machine comprises at least one virtual processor, wherein monitoring activity of the at least one virtual machine comprises monitoring the at least one virtual processor, and wherein the at least one activity threshold metric is based on a threshold number of instructions being executed by the at least one virtual processor. 23. The non-transitory computer readable medium according to claim 18 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtualized network interface, wherein monitoring activity of the at least one virtual machine comprises monitoring the virtualized network interface, and wherein the at least one activity threshold metric is based on a threshold amount of network traffic sent or received by the at least one virtual machine via the virtualized network interface. | A computing server includes a hardware platform with hardware resources, with at least a portion of the hardware resources to be allocated as virtualized resources. A hypervisor platform is provided based on execution of code instructions by the hardware platform. A virtual machine operates as an independent guest computing device, with at least a portion of the virtualized resources being allocated by the hypervisor platform to the virtual machine. 
The hypervisor platform includes a snapshot function to save the state of the virtual machine, a virtual machine activity monitor to monitor activity of the virtual machine, and an activity-based snapshot policy engine to activate the snapshot function based on the monitored activity of the virtual machine exceeding an activity threshold metric. 1. A computing server comprising:
a hardware platform comprising hardware resources, with at least a portion of said hardware resources to be allocated as virtualized resources; a hypervisor platform being provided based on execution of code instructions by said hardware platform; and at least one virtual machine operating as an independent guest computing device, with at least a portion of the virtualized resources being allocated by said hypervisor platform to said at least one virtual machine; said hypervisor platform comprising:
a snapshot function configured to save a state of said at least one virtual machine,
a virtual machine activity monitor configured to monitor activity of said at least one virtual machine, and
an activity-based snapshot policy engine configured to activate said snapshot function based on the monitored activity of said at least one virtual machine exceeding at least one activity threshold metric. 2. The computing server system according to claim 1 wherein said virtual machine activity monitor is configured to monitor different activities of said at least one virtual machine, and wherein said snapshot function is based on activity of said at least one virtual machine exceeding a combination of different activity threshold metrics corresponding to the different activities being monitored by said virtual machine activity monitor. 3. The computing server system according to claim 2 wherein said hardware platform comprises a user interface coupled to said activity-based snapshot policy engine, and wherein the combination of different activity threshold metrics is selected via said user interface. 4. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtual disk, wherein said virtual machine activity monitor is configured to monitor data being written to the virtual disk, and wherein the at least one activity threshold metric is based on a threshold amount of data written to the virtual disk. 5. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtual disk divided into a plurality of sectors, wherein said virtual machine activity monitor is configured to monitor data being written to the plurality of sectors, and wherein the at least one activity threshold metric is based on a threshold percentage of the plurality of sectors receiving data. 6. 
The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtual memory, wherein said virtual machine activity monitor is configured to monitor data being written to memory pages in the virtual memory, and wherein the at least one activity threshold metric is based on a threshold amount of memory pages being overwritten. 7. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises at least one virtual processor, wherein said virtual machine activity monitor is configured to monitor the at least one virtual processor, and wherein the at least one activity threshold metric is based on a threshold number of instructions being executed by the at least one virtual processor. 8. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtualized network interface, wherein said virtual machine activity monitor is configured to monitor the virtualized network interface, and wherein the at least one activity threshold metric is based on a threshold amount of network traffic sent or received by said at least one virtual machine via the virtualized network interface. 9. The computing server system according to claim 1 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtualized network interface, wherein said virtual machine activity monitor is configured to monitor the virtualized network interface, and wherein the at least one activity threshold metric is based on a threshold number of peers communicated with. 10. A method for operating a computing server comprising a hardware platform comprising hardware resources, with at least a portion of the hardware resources to be allocated as virtualized resources, the method comprising:
providing a hypervisor platform based on execution of code instructions by the hardware platform; providing at least one virtual machine operating as an independent guest computing device, with at least a portion of the virtualized resources being allocated by the hypervisor platform to the at least one virtual machine; providing a snapshot function within the hypervisor platform to save a state of the at least one virtual machine; monitoring activity of the at least one virtual machine; and activating the snapshot function based on the monitored activity of the at least one virtual machine exceeding at least one activity threshold metric. 11. The method according to claim 10 wherein monitoring activity of the at least one virtual machine comprises monitoring different activities of the at least one virtual machine, and activating the snapshot function is based on activity of the at least one virtual machine exceeding a combination of different activity threshold metrics corresponding to the different activities being monitored by the virtual machine activity monitor. 12. The method according to claim 11 wherein the hardware platform comprises a user interface coupled to the activity-based snapshot policy engine, and further comprising selecting the combination of different activity threshold metrics via the user interface. 13. The method according to claim 10 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual disk, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to the virtual disk, and wherein the at least one activity threshold metric is based on a threshold amount of data written to the virtual disk. 14. 
The method according to claim 10 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual disk divided into a plurality of sectors, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to the plurality of sectors, and wherein the at least one activity threshold metric is based on a threshold percentage of the plurality of sectors receiving data. 15. The method according to claim 10 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual memory, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to memory pages in the virtual memory, and wherein the at least one activity threshold metric is based on a threshold amount of memory pages being overwritten. 16. The method according to claim 10 wherein the virtualized resources allocated to the at least one virtual machine comprises at least one virtual processor, wherein monitoring activity of the at least one virtual machine comprises monitoring the at least one virtual processor, and wherein the at least one activity threshold metric is based on a threshold number of instructions being executed by the at least one virtual processor. 17. The method according to claim 10 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtualized network interface, wherein monitoring activity of the at least one virtual machine comprises monitoring the virtualized network interface, and wherein the at least one activity threshold metric is based on a threshold amount of network traffic sent or received by the at least one virtual machine via the virtualized network interface. 18. 
A non-transitory computer readable medium for a computing server comprising a hardware platform comprising hardware resources, with at least a portion of the hardware resources to be allocated as virtualized resources, the non-transitory computer readable medium having a plurality of computer executable instructions for causing the computing server to perform steps comprising:
providing a hypervisor platform based on execution of code instructions by the hardware platform; providing at least one virtual machine operating as an independent guest computing device, with at least a portion of the virtualized resources being allocated by the hypervisor platform to the at least one virtual machine; providing a snapshot function within the hypervisor platform to save a state of the at least one virtual machine; monitoring activity of the at least one virtual machine; and activating the snapshot function based on the monitored activity of the at least one virtual machine exceeding at least one activity threshold metric. 19. The non-transitory computer readable medium according to claim 18 wherein monitoring activity of the at least one virtual machine comprises monitoring different activities of the at least one virtual machine, and activating the snapshot function is based on activity of the at least one virtual machine exceeding a combination of different activity threshold metrics corresponding to the different activities being monitored by the virtual machine activity monitor. 20. The non-transitory computer readable medium according to claim 18 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual disk, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to the virtual disk, and wherein the at least one activity threshold metric is based on a threshold amount of data written to the virtual disk. 21. 
The non-transitory computer readable medium according to claim 18 wherein the virtualized resources allocated to the at least one virtual machine comprises a virtual disk divided into a plurality of sectors, wherein monitoring activity of the at least one virtual machine comprises monitoring data being written to the plurality of sectors, and wherein the at least one activity threshold metric is based on a threshold percentage of the plurality of sectors receiving data. 22. The non-transitory computer readable medium according to claim 18 wherein the virtualized resources allocated to the at least one virtual machine comprises at least one virtual processor, wherein monitoring activity of the at least one virtual machine comprises monitoring the at least one virtual processor, and wherein the at least one activity threshold metric is based on a threshold number of instructions being executed by the at least one virtual processor. 23. The non-transitory computer readable medium according to claim 18 wherein the virtualized resources allocated to said at least one virtual machine comprises a virtualized network interface, wherein monitoring activity of the at least one virtual machine comprises monitoring the virtualized network interface, and wherein the at least one activity threshold metric is based on a threshold amount of network traffic sent or received by the at least one virtual machine via the virtualized network interface. | 2,100 |
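The activity-based snapshot policy recited in these claims can be sketched in a few lines. This is an illustrative reading only: the class and metric names are hypothetical, and the "all thresholds exceeded" rule is one interpretation of claim 2's "combination of different activity threshold metrics" (the claims also mention disk bytes written, memory pages overwritten, instructions executed, and network traffic as candidate metrics).

```python
class ActivitySnapshotPolicy:
    """Trigger a snapshot when monitored VM activity exceeds thresholds."""

    def __init__(self, thresholds):
        # thresholds: metric name -> threshold value; with a combination
        # of thresholds, every one must be exceeded before triggering.
        self.thresholds = thresholds
        self.counters = {name: 0 for name in thresholds}

    def record(self, metric, amount):
        # Called by the (hypothetical) virtual machine activity monitor.
        self.counters[metric] += amount

    def should_snapshot(self):
        return all(self.counters[m] > t for m, t in self.thresholds.items())

    def take_snapshot(self, vm_state):
        # Stand-in for the hypervisor's snapshot function: copy the
        # state, then reset the activity counters for the next window.
        snapshot = dict(vm_state)
        self.counters = {name: 0 for name in self.thresholds}
        return snapshot

policy = ActivitySnapshotPolicy({"disk_bytes_written": 1_000_000,
                                 "net_bytes": 500_000})
policy.record("disk_bytes_written", 2_000_000)
triggered_early = policy.should_snapshot()  # disk threshold alone is not enough
policy.record("net_bytes", 600_000)
triggered = policy.should_snapshot()
```

The design choice here is that counters reset after each snapshot, so a snapshot captures state once per burst of activity rather than on every write past the threshold.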
5,985 | 5,985 | 15,597,881 | 2,135 | A data storage network may have multiple data storage devices that each consist of a device buffer. A network buffer and buffer circuit can be found in a network controller with the buffer circuit arranged to divide and store data associated with a data access request in the network buffer and the device buffer of the first data storage device. | 1. An apparatus comprising a network controller connected to separate first and second data storage devices each having a device buffer, the network controller comprising a network buffer and a buffer circuit, the buffer circuit arranged to divide and store data associated with a data access request in the device buffer and the network buffer of the first data storage device. 2. The apparatus of claim 1, wherein the network controller, first data storage device, and second data storage device are connected as a redundant array of independent devices (RAID). 3. The apparatus of claim 1, wherein the first and second data storage devices are each non-volatile solid state memories. 4. The apparatus of claim 1, wherein the first and second data storage devices are different types of memory. 5. The apparatus of claim 1, wherein the network controller is connected to at least one remote host via a front-end interface and connected to the first and second data storage devices via a back-end interface, the back-end interface comprising a peripheral component interconnect express (PCIe) bus. 6. The apparatus of claim 1, wherein the first and second data storage devices communicate to the network controller via a non-volatile memory express (NVMe) protocol. 7. The apparatus of claim 1, wherein the first and second data storage devices have an M.2 form factor. 8. A method comprising:
connecting a network controller to separate first and second data storage devices, each data storage device having a device buffer, the network controller comprising a network buffer and a buffer circuit; dividing data associated with a write request from a host with the buffer circuit into first and second packets; and storing the first data packet in the network buffer and the second packet in the device buffer of the first data storage device as directed by the buffer circuit. 9. The method of claim 8, wherein the device buffer of each data storage device has a greater data capacity than the network buffer. 10. The method of claim 8, wherein the first data packet is different than the second data packet. 11. The method of claim 8, wherein the first data packet has a smaller data size than the second data packet. 12. The method of claim 8, wherein the buffer circuit stores the first data packet in a cache memory of the network controller prior to the device buffer. 13. A method comprising:
connecting a network controller to separate first, second, and third data storage devices, each data storage device having a device buffer, the network controller comprising a network buffer and a buffer circuit; dividing data associated with a write request from a host with the buffer circuit into different first, second, and third packets; storing the first data packet in the device buffer of the first data storage device without being stored in the network controller; directing the second data packet to the device buffer of the second data storage device without being stored in the network controller; and writing the third data packet in the network buffer as directed by the buffer circuit. 14. The method of claim 13, wherein the first and second data packets contain parity data associated with a redundant array of independent devices (RAID) level. 15. The method of claim 13, wherein the buffer circuit maintains a scatter gather list that specifies a destination address in the first or second data storage devices. 16. The method of claim 15, wherein the scatter gather list is generated by a programmable processor of the network controller when the write request is received by the network controller. 17. The method of claim 13, wherein the first and second data storage devices each have lower data latency than the network controller. 18. The method of claim 13, wherein the buffer circuit moves the first data packet to a memory array of the third data storage device. 19. The method of claim 13, wherein the buffer circuit moves the second data packet to a memory array of the second data storage device. 20. The method of claim 13, wherein each data storage device has a memory array with a greater data capacity than the network buffer. | A data storage network may have multiple data storage devices that each consist of a device buffer. 
A network buffer and buffer circuit can be found in a network controller with the buffer circuit arranged to divide and store data associated with a data access request in the network buffer and the device buffer of the first data storage device. 1. An apparatus comprising a network controller connected to separate first and second data storage devices each having a device buffer, the network controller comprising a network buffer and a buffer circuit, the buffer circuit arranged to divide and store data associated with a data access request in the device buffer and the network buffer of the first data storage device. 2. The apparatus of claim 1, wherein the network controller, first data storage device, and second data storage device are connected as a redundant array of independent devices (RAID). 3. The apparatus of claim 1, wherein the first and second data storage devices are each non-volatile solid state memories. 4. The apparatus of claim 1, wherein the first and second data storage devices are different types of memory. 5. The apparatus of claim 1, wherein the network controller is connected to at least one remote host via a front-end interface and connected to the first and second data storage devices via a back-end interface, the back-end interface comprising a peripheral component interconnect express (PCIe) bus. 6. The apparatus of claim 1, wherein the first and second data storage devices communicate to the network controller via a non-volatile memory express (NVMe) protocol. 7. The apparatus of claim 1, wherein the first and second data storage devices have an M.2 form factor. 8. A method comprising:
connecting a network controller to separate first and second data storage devices, each data storage device having a device buffer, the network controller comprising a network buffer and a buffer circuit; dividing data associated with a write request from a host with the buffer circuit into first and second packets; and storing the first data packet in the network buffer and the second data packet in the device buffer of the first data storage device as directed by the buffer circuit. 9. The method of claim 8, wherein the device buffer of each data storage device has a greater data capacity than the network buffer. 10. The method of claim 8, wherein the first data packet is different than the second data packet. 11. The method of claim 8, wherein the first data packet has a smaller data size than the second data packet. 12. The method of claim 8, wherein the buffer circuit stores the first data packet in a cache memory of the network controller prior to the device buffer. 13. A method comprising:
connecting a network controller to separate first, second, and third data storage devices, each data storage device having a device buffer, the network controller comprising a network buffer and a buffer circuit; dividing data associated with a write request from a host with the buffer circuit into different first, second, and third packets; storing the first data packet in the device buffer of the first data storage device without being stored in the network controller; directing the second data packet to the device buffer of the second data storage device without being stored in the network controller; and writing the third data packet in the network buffer as directed by the buffer circuit. 14. The method of claim 13, wherein the first and second data packets contain parity data associated with a redundant array of independent devices (RAID) level. 15. The method of claim 13, wherein the buffer circuit maintains a scatter gather list that specifies a destination address in the first or second data storage devices. 16. The method of claim 15, wherein the scatter gather list is generated by a programmable processor of the network controller when the write request is received by the network controller. 17. The method of claim 13, wherein the first and second data storage devices each have lower data latency than the network controller. 18. The method of claim 13, wherein the buffer circuit moves the first data packet to a memory array of the third data storage device. 19. The method of claim 13, wherein the buffer circuit moves the second data packet to a memory array of the second data storage device. 20. The method of claim 13, wherein each data storage device has a memory array with a greater data capacity than the network buffer. | 2,100 |
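The claim-13 write path above (divide a host write into packets, send two directly to device buffers, stage one in the controller's network buffer) can be sketched in code. This is a minimal illustration only; every name and data layout here is hypothetical, not taken from the application.

```python
# Sketch of the claim-13 write path. A buffer circuit divides a host write
# into three packets; packets 1 and 2 go straight to device buffers without
# being stored in the network controller, while packet 3 is staged in the
# controller's network buffer. All names are illustrative.

def split_write(data: bytes, n: int = 3) -> list[bytes]:
    """Divide write data into n roughly equal packets (ceiling-sized chunks)."""
    step = -(-len(data) // n)  # ceiling division
    return [data[i:i + step] for i in range(0, len(data), step)]

class BufferCircuit:
    def __init__(self) -> None:
        self.network_buffer: list[bytes] = []        # buffer inside the controller
        self.device_buffers = {1: [], 2: [], 3: []}  # per-device buffers
        self.scatter_gather: list[tuple[int, int]] = []  # destination list (cf. claim 15)

    def handle_write(self, data: bytes) -> None:
        first, second, third = split_write(data, 3)
        # Packets 1 and 2 bypass the network controller's own storage.
        self.device_buffers[1].append(first)
        self.device_buffers[2].append(second)
        # Packet 3 is written to the network buffer under buffer-circuit control.
        self.network_buffer.append(third)
        # Record (device, offset) destinations, loosely modeling a scatter gather list.
        self.scatter_gather = [(1, 0), (2, 0)]

bc = BufferCircuit()
bc.handle_write(b"abcdefghij")  # device 1: b"abcd", device 2: b"efgh", network buffer: b"ij"
```

The split here is an even three-way chop purely for illustration; the claims themselves do not constrain packet sizing beyond claim 11's size relationship.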
5,986 | 5,986 | 15,687,101 | 2,193 | In an embodiment, a method is provided. In an embodiment, the method provides determining that a message has been placed in a send buffer; and transferring the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message. | 1-20. (canceled) 21. At least one non-transitory machine readable storage medium having instructions stored thereon, the instructions when executed by a machine to cause the machine to:
provide a monitor to facilitate the transfer of data between a first application executed by the machine and a second application executed by the machine, the first application and the second application to each utilize a virtualized hardware resource of the machine; establish an association between a first queue of a first queue pair and a second queue of a second queue pair, the first queue pair associated with the first application and the second queue pair associated with the second application; and transfer data from a buffer of the first application to a buffer of the second application using an association between the first queue pair and the second queue pair. 22. The medium of claim 21, the instructions when executed by a machine to cause the machine to set up the first queue pair in response to a call by the first application to the monitor. 23. The medium of claim 21, the instructions when executed by a machine to cause the machine to set up a completion queue associated with the first queue. 24. The medium of claim 21, the instructions when executed by a machine to cause the machine to execute a first virtual machine and a second virtual machine, the first virtual machine to execute the first application and the second virtual machine to execute the second application. 25. The medium of claim 24, the instructions when executed by a machine to cause the machine to place the data into the buffer of the first application, wherein placing the data into the buffer of the first application comprises bypassing use of an operating system executed by the first virtual machine. 26. The medium of claim 21, the instructions when executed by a machine to cause the machine to cause the first application to generate a send request and post the send request to the first queue. 27. The medium of claim 26, the send request to comprise an address at which the data is stored. 28. 
The medium of claim 26, the instructions when executed by a machine to cause the machine to cause the monitor to detect that the send request has been posted to the first queue. 29. The medium of claim 21, the instructions when executed by a machine to cause the machine to cause the monitor to select, based on the second queue, the buffer of the second application, wherein the buffer of the second application is located in an application memory space of the second application. 30. The medium of claim 21, the instructions when executed by a machine to cause the machine to cause the monitor to store a queue pair table, the queue pair table to store an association between the first queue and the second queue. 31. A system comprising:
at least one hardware resource to be virtualized and utilized by a first application and a second application; at least one processor to:
execute the first application and the second application;
provide a monitor to facilitate the transfer of data between the first application and the second application;
establish an association between a first queue of a first queue pair and a second queue of a second queue pair, the first queue pair associated with the first application and the second queue pair associated with the second application; and
transfer data from a buffer of the first application to a buffer of the second application using an association between the first queue pair and the second queue pair. 32. The system of claim 31, the at least one processor to set up the first queue pair in response to a call by the first application to the monitor. 33. The system of claim 31, the at least one processor to set up a completion queue associated with the first queue. 34. The system of claim 31, the at least one processor to execute a first virtual machine and a second virtual machine, the first virtual machine to execute the first application and the second virtual machine to execute the second application. 35. The system of claim 34, the at least one processor to place the data into the buffer of the first application, wherein placing the data into the buffer of the first application comprises bypassing use of an operating system executed by the first virtual machine. 36. The system of claim 31, the at least one processor to cause the first application to generate a send request and post the send request to the first queue. 37. The system of claim 36, the send request to comprise an address at which the data is stored. 38. The system of claim 36, the at least one processor to cause the monitor to detect that the send request has been posted to the first queue. 39. The system of claim 31, the at least one processor to select, based on the second queue, the buffer of the second application, wherein the buffer of the second application is located in an application memory space of the second application. 40. The system of claim 31, the at least one processor to store a queue pair table, the queue pair table to store an association between the first queue and the second queue. 41. A method comprising:
providing, by at least one processor, a monitor to facilitate the transfer of data between a first application executed by the at least one processor and a second application executed by the at least one processor, the first application and the second application to each utilize a virtualized hardware resource of a computing system comprising the at least one processor; establishing, by the at least one processor, an association between a first queue of a first queue pair and a second queue of a second queue pair, the first queue pair associated with the first application and the second queue pair associated with the second application; and transferring, by the at least one processor, data from a buffer of the first application to a buffer of the second application using an association between the first queue pair and the second queue pair. 42. The method of claim 41, further comprising setting up the first queue pair in response to a call by the first application to the monitor. 43. The method of claim 41, further comprising setting up a completion queue associated with the first queue. 44. The method of claim 41, further comprising executing a first virtual machine and a second virtual machine, the first virtual machine to execute the first application and the second virtual machine to execute the second application. 45. The method of claim 44, further comprising placing the data into the buffer of the first application, wherein placing the data into the buffer of the first application comprises bypassing use of an operating system executed by the first virtual machine. | In an embodiment, a method is provided. In an embodiment, the method provides determining that a message has been placed in a send buffer; and transferring the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message.1-20. 
(canceled) 21. At least one non-transitory machine readable storage medium having instructions stored thereon, the instructions when executed by a machine to cause the machine to:
provide a monitor to facilitate the transfer of data between a first application executed by the machine and a second application executed by the machine, the first application and the second application to each utilize a virtualized hardware resource of the machine; establish an association between a first queue of a first queue pair and a second queue of a second queue pair, the first queue pair associated with the first application and the second queue pair associated with the second application; and transfer data from a buffer of the first application to a buffer of the second application using an association between the first queue pair and the second queue pair. 22. The medium of claim 21, the instructions when executed by a machine to cause the machine to set up the first queue pair in response to a call by the first application to the monitor. 23. The medium of claim 21, the instructions when executed by a machine to cause the machine to set up a completion queue associated with the first queue. 24. The medium of claim 21, the instructions when executed by a machine to cause the machine to execute a first virtual machine and a second virtual machine, the first virtual machine to execute the first application and the second virtual machine to execute the second application. 25. The medium of claim 24, the instructions when executed by a machine to cause the machine to place the data into the buffer of the first application, wherein placing the data into the buffer of the first application comprises bypassing use of an operating system executed by the first virtual machine. 26. The medium of claim 21, the instructions when executed by a machine to cause the machine to cause the first application to generate a send request and post the send request to the first queue. 27. The medium of claim 26, the send request to comprise an address at which the data is stored. 28. 
The medium of claim 26, the instructions when executed by a machine to cause the machine to cause the monitor to detect that the send request has been posted to the first queue. 29. The medium of claim 21, the instructions when executed by a machine to cause the machine to cause the monitor to select, based on the second queue, the buffer of the second application, wherein the buffer of the second application is located in an application memory space of the second application. 30. The medium of claim 21, the instructions when executed by a machine to cause the machine to cause the monitor to store a queue pair table, the queue pair table to store an association between the first queue and the second queue. 31. A system comprising:
at least one hardware resource to be virtualized and utilized by a first application and a second application; and at least one processor to:
execute the first application and the second application;
provide a monitor to facilitate the transfer of data between the first application and the second application;
establish an association between a first queue of a first queue pair and a second queue of a second queue pair, the first queue pair associated with the first application and the second queue pair associated with the second application; and
transfer data from a buffer of the first application to a buffer of the second application using an association between the first queue pair and the second queue pair. 32. The system of claim 31, the at least one processor to set up the first queue pair in response to a call by the first application to the monitor. 33. The system of claim 31, the at least one processor to set up a completion queue associated with the first queue. 34. The system of claim 31, the at least one processor to execute a first virtual machine and a second virtual machine, the first virtual machine to execute the first application and the second virtual machine to execute the second application. 35. The system of claim 34, the at least one processor to place the data into the buffer of the first application, wherein placing the data into the buffer of the first application comprises bypassing use of an operating system executed by the first virtual machine. 36. The system of claim 31, the at least one processor to cause the first application to generate a send request and post the send request to the first queue. 37. The system of claim 36, the send request to comprise an address at which the data is stored. 38. The system of claim 36, the at least one processor to cause the monitor to detect that the send request has been posted to the first queue. 39. The system of claim 31, the at least one processor to select, based on the second queue, the buffer of the second application, wherein the buffer of the second application is located in an application memory space of the second application. 40. The system of claim 31, the at least one processor to store a queue pair table, the queue pair table to store an association between the first queue and the second queue. 41. A method comprising:
providing, by at least one processor, a monitor to facilitate the transfer of data between a first application executed by the at least one processor and a second application executed by the at least one processor, the first application and the second application to each utilize a virtualized hardware resource of a computing system comprising the at least one processor; establishing, by the at least one processor, an association between a first queue of a first queue pair and a second queue of a second queue pair, the first queue pair associated with the first application and the second queue pair associated with the second application; and transferring, by the at least one processor, data from a buffer of the first application to a buffer of the second application using an association between the first queue pair and the second queue pair. 42. The method of claim 41, further comprising setting up the first queue pair in response to a call by the first application to the monitor. 43. The method of claim 41, further comprising setting up a completion queue associated with the first queue. 44. The method of claim 41, further comprising executing a first virtual machine and a second virtual machine, the first virtual machine to execute the first application and the second virtual machine to execute the second application. 45. The method of claim 44, further comprising placing the data into the buffer of the first application, wherein placing the data into the buffer of the first application comprises bypassing use of an operating system executed by the first virtual machine. | 2,100 |
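The queue-pair mechanism recited in claims 21-45 (a monitor associates one queue from each application's queue pair, detects posted send requests, and moves data between application buffers without guest-OS involvement) resembles RDMA-style work queues. A minimal sketch follows; all class, method, and field names are hypothetical and not drawn from the application.

```python
# Toy model of the monitor-mediated transfer: the monitor keeps a table
# associating a sender's queue with a receiver's queue (cf. claim 30),
# detects posted send requests (cf. claims 26-28), and copies the
# referenced data into the receiver's buffer. All names are illustrative.
from collections import deque

class Monitor:
    def __init__(self) -> None:
        self.queue_pair_table: dict[int, deque] = {}  # first queue -> second queue

    def associate(self, first_q: deque, second_q: deque) -> None:
        # Establish the association between the two queues.
        self.queue_pair_table[id(first_q)] = second_q

    def poll(self, first_q: deque, app_buffers: dict[int, bytes]) -> None:
        # Detect posted send requests and move the referenced data into
        # the associated receiver queue, with no guest-OS call in the path.
        while first_q:
            req = first_q.popleft()                 # send request holds a buffer address
            dst_q = self.queue_pair_table[id(first_q)]
            dst_q.append(app_buffers[req["addr"]])  # place data for the receiver

send_q, recv_q = deque(), deque()
app_buffers = {0x10: b"hello"}    # sender's application memory space
mon = Monitor()
mon.associate(send_q, recv_q)
send_q.append({"addr": 0x10})     # sender posts a send request to its queue
mon.poll(send_q, app_buffers)     # recv_q now holds b"hello"
```

Real implementations would also signal a completion queue entry back to the sender (cf. claim 23); that step is omitted here for brevity.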
5,987 | 5,987 | 15,374,201 | 2,194 | At a cloud platform, a class of feed is received for an external feed corresponding to an information source, as are an instruction corresponding to a create operation for the external feed, and a dictionary input corresponding to parameters expected by the information source. The external feed produces a corresponding class of events. At the cloud platform, a handler is selected based on the received class of feed and the received create operation; the input dictionary is transferred to the handler; and the handler generates a unique destination to receive events for the class of events. The handler on the cloud platform generates a unique request to the information source to generate events of the class of feed to the unique destination and sends the request to the information source. Events generated from the information source responsive to the unique request are received at the unique destination. | 1. A method comprising:
receiving, at a cloud platform, a class of feed for an external feed corresponding to an information source, said external feed to produce a corresponding class of events; receiving, at said cloud platform, an instruction corresponding to a create operation for said external feed; receiving, at said cloud platform, a dictionary input corresponding to parameters expected by said information source; selecting, at said cloud platform, a handler based on said received class of feed and said received create operation; transferring, to said handler on said cloud platform, said input dictionary; generating, at said cloud platform, a unique destination to receive events for said class of events; generating, by said handler on said cloud platform, a unique request to said information source to generate events of said class of feed to said unique destination; sending, by said handler on said cloud platform, said request to said information source; and receiving, at said unique destination, events generated from said information source responsive to said unique request. 2. The method of claim 1, wherein in said step of receiving said class of feed, said class of feed is content-based. 3. The method of claim 1, wherein in said step of receiving said class of feed, said class of feed is topic-based. 4. The method of claim 1, further comprising:
receiving, at said cloud platform, an instruction corresponding to a delete operation for said external feed; selecting, at said cloud platform, an appropriate handler based on said received class of feed and said received delete operation; running said appropriate handler on said cloud platform; and stopping receipt of information from said information source. 5. The method of claim 1, further comprising:
receiving, at said cloud platform, an instruction corresponding to a pause operation for said external feed; selecting, at said cloud platform, an appropriate handler based on said received class of feed and said received pause operation; running said appropriate handler on said cloud platform; and pausing receipt of information from said information source. 6. The method of claim 5, further comprising, subsequent to said pausing:
receiving, at said cloud platform, an instruction corresponding to a resume operation for said external feed; selecting, at said cloud platform, another handler based on said received class of feed and said received resume operation; running said other handler on said cloud platform; and resuming receipt of information from said information source. 7. The method of claim 1, wherein said generating of said unique destination comprises generating a unique uniform resource locator (URL) and said events are received via a call to said uniform resource locator (URL). 8. A non-transitory computer readable medium comprising computer executable instructions which when executed by a computer cause the computer to perform the method of:
receiving, at a cloud platform, a class of feed for an external feed corresponding to an information source, said external feed to produce a corresponding class of events; receiving, at said cloud platform, an instruction corresponding to a create operation for said external feed; receiving, at said cloud platform, a dictionary input corresponding to parameters expected by said information source; selecting, at said cloud platform, a handler based on said received class of feed and said received create operation; transferring, to said handler on said cloud platform, said input dictionary; generating, at said cloud platform, a unique destination to receive events for said class of events; generating, by said handler on said cloud platform, a unique request to said information source to generate events of said class of feed to said unique destination; sending, by said handler on said cloud platform, said request to said information source; and receiving, at said unique destination, events generated from said information source responsive to said unique request. 9. The non-transitory computer readable medium of claim 8, wherein in said step of receiving said class of feed, said class of feed is content-based. 10. The non-transitory computer readable medium of claim 8, wherein in said step of receiving said class of feed, said class of feed is topic-based. 11. The non-transitory computer readable medium of claim 8, further comprising:
receiving, at said cloud platform, an instruction corresponding to a delete operation for said external feed; selecting, at said cloud platform, an appropriate handler based on said received class of feed and said received delete operation; running said appropriate handler on said cloud platform; and stopping receipt of information from said information source. 12. The non-transitory computer readable medium of claim 8, further comprising:
receiving, at said cloud platform, an instruction corresponding to a pause operation for said external feed; selecting, at said cloud platform, an appropriate handler based on said received class of feed and said received pause operation; running said appropriate handler on said cloud platform; and pausing receipt of information from said information source. 13. The non-transitory computer readable medium of claim 12, further comprising, subsequent to said pausing:
receiving, at said cloud platform, an instruction corresponding to a resume operation for said external feed; selecting, at said cloud platform, another handler based on said received class of feed and said received resume operation; running said other handler on said cloud platform; and resuming receipt of information from said information source. 14. The non-transitory computer readable medium of claim 8, wherein said generating of said unique destination comprises generating a unique uniform resource locator (URL) and said events are received via a call to said uniform resource locator (URL). 15. An apparatus comprising:
a memory; at least one processor coupled to said memory; and a non-transitory computer readable medium comprising computer executable instructions which, when loaded into said memory, configure said at least one processor to be operative to:
receive, at a cloud platform, a class of feed for an external feed corresponding to an information source, said external feed to produce a corresponding class of events;
receive, at said cloud platform, an instruction corresponding to a create operation for said external feed;
receive, at said cloud platform, a dictionary input corresponding to parameters expected by said information source;
select, at said cloud platform, a handler based on said received class of feed and said received create operation;
transfer, to said handler on said cloud platform, said input dictionary;
generate, at said cloud platform, a unique destination to receive events for said class of events;
generate, by said handler on said cloud platform, a unique request to said information source to generate events of said class of feed to said unique destination;
send, by said handler on said cloud platform, said request to said information source; and
receive, at said unique destination, events generated from said information source responsive to said unique request. 16. The apparatus of claim 15, wherein said class of feed is content-based. 17. The apparatus of claim 15, wherein said class of feed is topic-based. 18. The apparatus of claim 15, wherein said computer executable instructions, when loaded into said memory, further configure said at least one processor to be operative to:
receive, at said cloud platform, an instruction corresponding to a delete operation for said external feed; select, at said cloud platform, an appropriate handler based on said received class of feed and said received delete operation; run said appropriate handler on said cloud platform; and stop receipt of information from said information source. 19. The apparatus of claim 15, wherein said computer executable instructions, when loaded into said memory, further configure said at least one processor to be operative to:
receive, at said cloud platform, an instruction corresponding to a pause operation for said external feed; select, at said cloud platform, an appropriate handler based on said received class of feed and said received pause operation; run said appropriate handler on said cloud platform; pause receipt of information from said information source; subsequent to said pausing, receive, at said cloud platform, an instruction corresponding to a resume operation for said external feed; subsequent to said pausing, select, at said cloud platform, another handler based on said received class of feed and said received resume operation; subsequent to said pausing, run said other handler on said cloud platform; and subsequent to said pausing, resume receipt of information from said information source. 20. The apparatus of claim 15, wherein said computer executable instructions, when loaded into said memory, configure said at least one processor such that said generating of said unique destination comprises generating a unique uniform resource locator (URL) and said events are received via a call to said uniform resource locator (URL). | At a cloud platform, a class of feed is received for an external feed corresponding to an information source, as are an instruction corresponding to a create operation for the external feed, and a dictionary input corresponding to parameters expected by the information source. The external feed produces a corresponding class of events. At the cloud platform, a handler is selected based on the received class of feed and the received create operation; the input dictionary is transferred to the handler; and the handler generates a unique destination to receive events for the class of events. The handler on the cloud platform generates a unique request to the information source to generate events of the class of feed to the unique destination and sends the request to the information source. 
Events generated from the information source responsive to the unique request are received at the unique destination.1. A method comprising:
receiving, at a cloud platform, a class of feed for an external feed corresponding to an information source, said external feed to produce a corresponding class of events; receiving, at said cloud platform, an instruction corresponding to a create operation for said external feed; receiving, at said cloud platform, a dictionary input corresponding to parameters expected by said information source; selecting, at said cloud platform, a handler based on said received class of feed and said received create operation; transferring, to said handler on said cloud platform, said input dictionary; generating, at said cloud platform, a unique destination to receive events for said class of events; generating, by said handler on said cloud platform, a unique request to said information source to generate events of said class of feed to said unique destination; sending, by said handler on said cloud platform, said request to said information source; and receiving, at said unique destination, events generated from said information source responsive to said unique request. 2. The method of claim 1, wherein in said step of receiving said class of feed, said class of feed is content-based. 3. The method of claim 1, wherein in said step of receiving said class of feed, said class of feed is topic-based. 4. The method of claim 1, further comprising:
receiving, at said cloud platform, an instruction corresponding to a delete operation for said external feed; selecting, at said cloud platform, an appropriate handler based on said received class of feed and said received delete operation; running said appropriate handler on said cloud platform; and stopping receipt of information from said information source. 5. The method of claim 1, further comprising:
receiving, at said cloud platform, an instruction corresponding to a pause operation for said external feed; selecting, at said cloud platform, an appropriate handler based on said received class of feed and said received pause operation; running said appropriate handler on said cloud platform; and pausing receipt of information from said information source. 6. The method of claim 5, further comprising, subsequent to said pausing:
receiving, at said cloud platform, an instruction corresponding to a resume operation for said external feed; selecting, at said cloud platform, another handler based on said received class of feed and said received resume operation; running said other handler on said cloud platform; and resuming receipt of information from said information source. 7. The method of claim 1, wherein said generating of said unique destination comprises generating a unique uniform resource locator (URL) and said events are received via a call to said uniform resource locator (URL). 8. A non-transitory computer readable medium comprising computer executable instructions which when executed by a computer cause the computer to perform the method of:
receiving, at a cloud platform, a class of feed for an external feed corresponding to an information source, said external feed to produce a corresponding class of events; receiving, at said cloud platform, an instruction corresponding to a create operation for said external feed; receiving, at said cloud platform, a dictionary input corresponding to parameters expected by said information source; selecting, at said cloud platform, a handler based on said received class of feed and said received create operation; transferring, to said handler on said cloud platform, said input dictionary; generating, at said cloud platform, a unique destination to receive events for said class of events; generating, by said handler on said cloud platform, a unique request to said information source to generate events of said class of feed to said unique destination; sending, by said handler on said cloud platform, said request to said information source; and receiving, at said unique destination, events generated from said information source responsive to said unique request. 9. The non-transitory computer readable medium of claim 8, wherein in said step of receiving said class of feed, said class of feed is content-based. 10. The non-transitory computer readable medium of claim 8, wherein in said step of receiving said class of feed, said class of feed is topic-based. 11. The non-transitory computer readable medium of claim 8, further comprising:
receiving, at said cloud platform, an instruction corresponding to a delete operation for said external feed; selecting, at said cloud platform, an appropriate handler based on said received class of feed and said received delete operation; running said appropriate handler on said cloud platform; and stopping receipt of information from said information source. 12. The non-transitory computer readable medium of claim 8, further comprising:
receiving, at said cloud platform, an instruction corresponding to a pause operation for said external feed; selecting, at said cloud platform, an appropriate handler based on said received class of feed and said received pause operation; running said appropriate handler on said cloud platform; and pausing receipt of information from said information source. 13. The non-transitory computer readable medium of claim 12, further comprising, subsequent to said pausing:
receiving, at said cloud platform, an instruction corresponding to a resume operation for said external feed; selecting, at said cloud platform, another handler based on said received class of feed and said received resume operation; running said other handler on said cloud platform; and resuming receipt of information from said information source. 14. The non-transitory computer readable medium of claim 8, wherein said generating of said unique destination comprises generating a unique uniform resource locator (URL) and said events are received via a call to said uniform resource locator (URL). 15. An apparatus comprising:
a memory; at least one processor, coupled to said memory, and a non-transitory computer readable medium comprising computer executable instructions which when loaded into said memory configure said at least one processor to be operative to:
receive, at a cloud platform, a class of feed for an external feed corresponding to an information source, said external feed to produce a corresponding class of events;
receive, at said cloud platform, an instruction corresponding to a create operation for said external feed;
receive, at said cloud platform, a dictionary input corresponding to parameters expected by said information source;
select, at said cloud platform, a handler based on said received class of feed and said received create operation;
transfer, to said handler on said cloud platform, said input dictionary;
generate, at said cloud platform, a unique destination to receive events for said class of events;
generate, by said handler on said cloud platform, a unique request to said information source to generate events of said class of feed to said unique destination;
send, by said handler on said cloud platform, said request to said information source; and
receive, at said unique destination, events generated from said information source responsive to said unique request. 16. The apparatus of claim 15, wherein said class of feed is content-based. 17. The apparatus of claim 15, wherein said class of feed is topic-based. 18. The apparatus of claim 15, wherein said computer executable instructions, when loaded into said memory, further configure said at least one processor to be operative to:
receive, at said cloud platform, an instruction corresponding to a delete operation for said external feed; select, at said cloud platform, an appropriate handler based on said received class of feed and said received delete operation; run said appropriate handler on said cloud platform; and stop receipt of information from said information source. 19. The apparatus of claim 15, wherein said computer executable instructions, when loaded into said memory, further configure said at least one processor to be operative to:
receive, at said cloud platform, an instruction corresponding to a pause operation for said external feed; select, at said cloud platform, an appropriate handler based on said received class of feed and said received pause operation; run said appropriate handler on said cloud platform; pause receipt of information from said information source; subsequent to said pausing, receive, at said cloud platform, an instruction corresponding to a resume operation for said external feed; subsequent to said pausing, select, at said cloud platform, another handler based on said received class of feed and said received resume operation; subsequent to said pausing, run said other handler on said cloud platform; and subsequent to said pausing, resume receipt of information from said information source. 20. The apparatus of claim 15, wherein said computer executable instructions, when loaded into said memory, configure said at least one processor such that said generating of said unique destination comprises generating a unique uniform resource locator (URL) and said events are received via a call to said uniform resource locator (URL). | 2,100 |
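The claimed feed lifecycle — a handler selected per (class of feed, operation) pair, with the create path minting a unique destination URL for incoming events — can be sketched roughly as follows. All class, function, and URL names here are illustrative assumptions, not taken from the application itself.

```python
import uuid

class FeedManager:
    """Toy sketch of the claimed feed lifecycle on a cloud platform.

    Handlers are registered per (feed_class, operation) pair; the
    'create' path also mints a unique destination URL for callbacks.
    """

    def __init__(self):
        self.handlers = {}   # (feed_class, operation) -> callable
        self.feeds = {}      # feed_id -> {"url": ..., "state": ..., "params": ...}

    def register(self, feed_class, operation, handler):
        self.handlers[(feed_class, operation)] = handler

    def dispatch(self, feed_class, operation, feed_id, params=None):
        # Select a handler based on the received class of feed and operation.
        handler = self.handlers[(feed_class, operation)]
        return handler(self, feed_id, params or {})


def create_handler(mgr, feed_id, params):
    # Generate a unique destination (URL) to receive events of this class.
    url = f"https://platform.example/events/{uuid.uuid4().hex}"  # hypothetical host
    mgr.feeds[feed_id] = {"url": url, "state": "running", "params": params}
    return url

def pause_handler(mgr, feed_id, params):
    mgr.feeds[feed_id]["state"] = "paused"

def resume_handler(mgr, feed_id, params):
    mgr.feeds[feed_id]["state"] = "running"

def delete_handler(mgr, feed_id, params):
    del mgr.feeds[feed_id]
```

In use, the platform would register one handler per operation for each feed class and route create/pause/resume/delete instructions through `dispatch`; the real claims also have the create handler send a subscription request to the external information source, which this sketch omits.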
5,988 | 5,988 | 15,215,650 | 2,145 | Included are embodiments for device tuning. Some embodiments of the method include identifying a razor device, where the razor device includes a sensing system, determining a first operating parameter of the razor device, where the first operating parameter relates to the sensing system, and providing a user interface to a user, where the user interface includes a user option to adjust the first operating parameter. | 1. A system for device tuning, comprising:
a processor that receives and processes instructions; and a memory component that stores logic for providing the instructions, wherein the logic causes the system to perform at least the following:
identify a handheld device, wherein the handheld device includes a sensing system and a transmitting system;
determine a first operating parameter of the handheld device, wherein the first operating parameter relates to the sensing system;
determine a second operating parameter of the handheld device, wherein the second operating parameter relates to the transmitting system; and
provide a user interface to a user, wherein the user interface includes a user option to adjust at least one of the following: the first operating parameter and the second operating parameter. 2. The system of claim 1, wherein the first operating parameter includes at least one of the following: an analog to digital conversion threshold, a device engagement delay threshold, a repeat engagement delay threshold, a time read parameter, and a timeout read parameter. 3. The system of claim 1, wherein the second operating parameter includes at least one of the following: a transmit power parameter, a broadcast frequency parameter, a device voltage parameter, and an interval parameter. 4. The system of claim 1, further comprising the handheld device, wherein the handheld device comprises an integrated computing device for facilitating communication and making the adjustment. 5. The system of claim 1, wherein the logic further causes the system to provide an option to alter an identity of the handheld device. 6. The system of claim 1, wherein the logic further causes the system to perform the following:
receive environment data related to at least one of the following: a water type, an assisting substance type, a distance from the handheld device, and data related to interference to the handheld device; and automatically adjust at least one of the following, based on the environment data: the first operating parameter and the second operating parameter. 7. The system of claim 1, further comprising a remote computing device, wherein the logic further causes the system to communicate at least one of the following to the remote computing device: data related to the first operating parameter, data related to the second operating parameter, and the adjustment. 8. A method for device tuning, comprising:
identifying a razor device, wherein the razor device includes a sensing system; determining a first operating parameter of the razor device, wherein the first operating parameter relates to the sensing system; and providing a user interface to a user, wherein the user interface includes a user option to adjust the first operating parameter. 9. The method of claim 8, wherein the first operating parameter includes at least one of the following: an analog to digital conversion threshold, a device engagement delay threshold, a repeat engagement delay threshold, a time read parameter, and a timeout read parameter. 10. The method of claim 8, wherein the razor device further includes a transmitting system, and wherein the method further comprises determining a second operating parameter of the razor device, wherein the second operating parameter relates to the transmitting system, and wherein the user interface further includes an option to adjust the second operating parameter. 11. The method of claim 10, wherein the second operating parameter includes at least one of the following: a transmit power parameter, a broadcast frequency parameter, a device voltage parameter, and an interval parameter. 12. The method of claim 10, further comprising:
receiving environment data related to at least one of the following: a water type, a shaving prep type, a distance from the razor device, and data related to interference to the razor device; and automatically adjusting at least one of the following based on the environment data: the first operating parameter and the second operating parameter. 13. The method of claim 8, further comprising:
receiving user input to adjust the first operating parameter; and sending an adjustment to the razor device for implementation, based on user input. 14. The method of claim 8, further comprising reporting at least the following to a remote computing device: data related to the first operating parameter, data related to a second operating parameter, and the adjustment. 15. A non-transitory computer-readable medium for device tuning that stores logic that causes a computing device to perform at least the following:
identify a razor device, wherein the razor device includes a transmitting system; determine a second operating parameter of the razor device, wherein the second operating parameter relates to the transmitting system; and provide a user interface to a user, wherein the user interface includes a user option to adjust the second operating parameter. 16. The non-transitory computer-readable medium of claim 15, wherein the second operating parameter includes at least one of the following: a transmit power parameter, a broadcast frequency parameter, a device voltage parameter, and an interval parameter. 17. The non-transitory computer-readable medium of claim 15, wherein the razor device further includes a sensing system, and wherein the method further comprises determining a first operating parameter of the razor device, wherein the first operating parameter relates to the sensing system, and wherein the user interface further includes an option to adjust the first operating parameter. 18. The non-transitory computer-readable medium of claim 17, wherein the first operating parameter includes at least one of the following: an analog to digital conversion threshold, a device engagement delay threshold, a repeat engagement delay threshold, a time read parameter, and a timeout read parameter. 19. The non-transitory computer-readable medium of claim 17, wherein the logic further causes the computing device to perform at least the following:
receive environment data related to at least one of the following: a water type, a shaving prep type, a distance from the razor device, and data related to interference to the razor device; and automatically adjust at least one of the following based on the environment data: the first operating parameter and the second operating parameter. 20. The non-transitory computer-readable medium of claim 15, wherein the logic further causes the computing device to perform at least the following:
receive user input to adjust the second operating parameter; and send an adjustment to the razor device for implementation, based on the user input. | Included are embodiments for device tuning. Some embodiments of the method include identifying a razor device, where the razor device includes a sensing system, determining a first operating parameter of the razor device, where the first operating parameter relates to the sensing system, and providing a user interface to a user, where the user interface includes a user option to adjust the first operating parameter.1. A system for device tuning, comprising:
a processor that receives and processes instructions; and a memory component that stores logic for providing the instructions, wherein the logic causes the system to perform at least the following:
identify a handheld device, wherein the handheld device includes a sensing system and a transmitting system;
determine a first operating parameter of the handheld device, wherein the first operating parameter relates to the sensing system;
determine a second operating parameter of the handheld device, wherein the second operating parameter relates to the transmitting system; and
provide a user interface to a user, wherein the user interface includes a user option to adjust at least one of the following: the first operating parameter and the second operating parameter. 2. The system of claim 1, wherein the first operating parameter includes at least one of the following: an analog to digital conversion threshold, a device engagement delay threshold, a repeat engagement delay threshold, a time read parameter, and a timeout read parameter. 3. The system of claim 1, wherein the second operating parameter includes at least one of the following: a transmit power parameter, a broadcast frequency parameter, a device voltage parameter, and an interval parameter. 4. The system of claim 1, further comprising the handheld device, wherein the handheld device comprises an integrated computing device for facilitating communication and making the adjustment. 5. The system of claim 1, wherein the logic further causes the system to provide an option to alter an identity of the handheld device. 6. The system of claim 1, wherein the logic further causes the system to perform the following:
receive environment data related to at least one of the following: a water type, an assisting substance type, a distance from the handheld device, and data related to interference to the handheld device; and automatically adjust at least one of the following, based on the environment data: the first operating parameter and the second operating parameter. 7. The system of claim 1, further comprising a remote computing device, wherein the logic further causes the system to communicate at least one of the following to the remote computing device: data related to the first operating parameter, data related to the second operating parameter, and the adjustment. 8. A method for device tuning, comprising:
identifying a razor device, wherein the razor device includes a sensing system; determining a first operating parameter of the razor device, wherein the first operating parameter relates to the sensing system; and providing a user interface to a user, wherein the user interface includes a user option to adjust the first operating parameter. 9. The method of claim 8, wherein the first operating parameter includes at least one of the following: an analog to digital conversion threshold, a device engagement delay threshold, a repeat engagement delay threshold, a time read parameter, and a timeout read parameter. 10. The method of claim 8, wherein the razor device further includes a transmitting system, and wherein the method further comprises determining a second operating parameter of the razor device, wherein the second operating parameter relates to the transmitting system, and wherein the user interface further includes an option to adjust the second operating parameter. 11. The method of claim 10, wherein the second operating parameter includes at least one of the following: a transmit power parameter, a broadcast frequency parameter, a device voltage parameter, and an interval parameter. 12. The method of claim 10, further comprising:
receiving environment data related to at least one of the following: a water type, a shaving prep type, a distance from the razor device, and data related to interference to the razor device; and automatically adjusting at least one of the following based on the environment data: the first operating parameter and the second operating parameter. 13. The method of claim 8, further comprising:
receiving user input to adjust the first operating parameter; and sending an adjustment to the razor device for implementation, based on user input. 14. The method of claim 8, further comprising reporting at least the following to a remote computing device: data related to the first operating parameter, data related to a second operating parameter, and the adjustment. 15. A non-transitory computer-readable medium for device tuning that stores logic that causes a computing device to perform at least the following:
identify a razor device, wherein the razor device includes a transmitting system; determine a second operating parameter of the razor device, wherein the second operating parameter relates to the transmitting system; and provide a user interface to a user, wherein the user interface includes a user option to adjust the second operating parameter. 16. The non-transitory computer-readable medium of claim 15, wherein the second operating parameter includes at least one of the following: a transmit power parameter, a broadcast frequency parameter, a device voltage parameter, and an interval parameter. 17. The non-transitory computer-readable medium of claim 15, wherein the razor device further includes a sensing system, and wherein the method further comprises determining a first operating parameter of the razor device, wherein the first operating parameter relates to the sensing system, and wherein the user interface further includes an option to adjust the first operating parameter. 18. The non-transitory computer-readable medium of claim 17, wherein the first operating parameter includes at least one of the following: an analog to digital conversion threshold, a device engagement delay threshold, a repeat engagement delay threshold, a time read parameter, and a timeout read parameter. 19. The non-transitory computer-readable medium of claim 17, wherein the logic further causes the computing device to perform at least the following:
receive environment data related to at least one of the following: a water type, a shaving prep type, a distance from the razor device, and data related to interference to the razor device; and automatically adjust at least one of the following based on the environment data: the first operating parameter and the second operating parameter. 20. The non-transitory computer-readable medium of claim 15, wherein the logic further causes the computing device to perform at least the following:
receive user input to adjust the second operating parameter; and send an adjustment to the razor device for implementation, based on the user input. | 2,100 |
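The tuning flow this row claims — a sensing-system parameter and a transmitting-system parameter, adjustable either directly by the user or automatically from environment data — can be sketched as below. The parameter names and adjustment rules are illustrative assumptions; the application only enumerates parameter categories (e.g., an analog-to-digital conversion threshold, a transmit power parameter), not concrete values or rules.

```python
class RazorTuner:
    """Toy sketch of the claimed tuning flow: one sensing-system parameter
    and one transmitting-system parameter, adjustable by the user or
    automatically from environment data. Names and rules are hypothetical."""

    def __init__(self):
        self.params = {
            "adc_threshold": 512,      # sensing: analog-to-digital threshold
            "transmit_power_dbm": 0,   # transmitting: radio output power
        }

    def user_adjust(self, name, value):
        # The user-interface path: directly set a known operating parameter.
        if name not in self.params:
            raise KeyError(name)
        self.params[name] = value

    def auto_adjust(self, environment):
        # Hypothetical rule: more interference -> raise transmit power (capped).
        if environment.get("interference") == "high":
            self.params["transmit_power_dbm"] = min(
                self.params["transmit_power_dbm"] + 3, 4)
        # Hypothetical rule: hard water damps the sensor, so lower the threshold.
        if environment.get("water_type") == "hard":
            self.params["adc_threshold"] = max(
                self.params["adc_threshold"] - 64, 0)
        return dict(self.params)
```

A real implementation would also report the parameters and any adjustment to a remote computing device, as the dependent claims describe; this sketch keeps only the local adjust logic.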
5,989 | 5,989 | 14,891,425 | 2,139 | Degradation in processing capability due to copying during garbage collection is reduced. A data storage system includes a memory unit provided with a memory, into which data are written in units of pages, and a memory controller that controls writing of data to the memory; and a controller that indicates, to the memory controller, a logical page address to which data are to be written. The memory controller determines a target block that is a block to be erased when garbage collection is next performed and provides the controller with information on a logical page address corresponding to a physical page address of a valid page in the target block. The controller instructs the memory controller to write data to the logical page address received from the memory controller. | 1. A data storage system comprising:
a memory unit comprising a memory, into which data are written in units of pages, and a memory controller configured to control writing of data to the memory; and a controller configured to indicate, to the memory controller, a logical page address to which data are to be written, wherein the memory controller
determines a target block that is a block to be erased when garbage collection is next performed, and
provides the controller with information on a logical page address corresponding to a physical page address of a valid page in the target block, and
the controller instructs the memory controller to write data to the logical page address received from the memory controller. 2. The data storage system of claim 1, wherein the controller instructs the memory controller to write the data by distributing the data between each logical page address received from the memory controller. 3. The data storage system of claim 1, wherein when updating data stored in the memory, the controller instructs the memory controller to erase non-updated data and to write newly updated data to the logical page address received from the memory controller. 4. The data storage system of claim 2, wherein when updating data stored in the memory, the controller instructs the memory controller to erase non-updated data and to write newly updated data to the logical page address received from the memory controller. 5. The data storage system of claim 1, wherein the memory controller starts garbage collection upon free space in the memory falling below a predetermined threshold. 6. A method of controlling a data storage system that includes a memory into which data are written in units of pages, the method comprising:
determining a target block that is a block to be erased when garbage collection is next performed; converting a physical page address of a valid page in the target block into a corresponding logical page address; and writing data to the logical page address yielded by conversion. 7. The data storage system of claim 2, wherein the memory controller starts garbage collection upon free space in the memory falling below a predetermined threshold. 8. The data storage system of claim 3, wherein the memory controller starts garbage collection upon free space in the memory falling below a predetermined threshold. 9. The data storage system of claim 4, wherein the memory controller starts garbage collection upon free space in the memory falling below a predetermined threshold. | Degradation in processing capability due to copying during garbage collection is reduced. A data storage system includes a memory unit provided with a memory, into which data are written in units of pages, and a memory controller that controls writing of data to the memory; and a controller that indicates, to the memory controller, a logical page address to which data are to be written. The memory controller determines a target block that is a block to be erased when garbage collection is next performed and provides the controller with information on a logical page address corresponding to a physical page address of a valid page in the target block. The controller instructs the memory controller to write data to the logical page address received from the memory controller.1. A data storage system comprising:
a memory unit comprising a memory, into which data are written in units of pages, and a memory controller configured to control writing of data to the memory; and a controller configured to indicate, to the memory controller, a logical page address to which data are to be written, wherein the memory controller
determines a target block that is a block to be erased when garbage collection is next performed, and
provides the controller with information on a logical page address corresponding to a physical page address of a valid page in the target block, and
the controller instructs the memory controller to write data to the logical page address received from the memory controller. 2. The data storage system of claim 1, wherein the controller instructs the memory controller to write the data by distributing the data between each logical page address received from the memory controller. 3. The data storage system of claim 1, wherein when updating data stored in the memory, the controller instructs the memory controller to erase non-updated data and to write newly updated data to the logical page address received from the memory controller. 4. The data storage system of claim 2, wherein when updating data stored in the memory, the controller instructs the memory controller to erase non-updated data and to write newly updated data to the logical page address received from the memory controller. 5. The data storage system of claim 1, wherein the memory controller starts garbage collection upon free space in the memory falling below a predetermined threshold. 6. A method of controlling a data storage system that includes a memory into which data are written in units of pages, the method comprising:
determining a target block that is a block to be erased when garbage collection is next performed; converting a physical page address of a valid page in the target block into a corresponding logical page address; and writing data to the logical page address yielded by conversion. 7. The data storage system of claim 2, wherein the memory controller starts garbage collection upon free space in the memory falling below a predetermined threshold. 8. The data storage system of claim 3, wherein the memory controller starts garbage collection upon free space in the memory falling below a predetermined threshold. 9. The data storage system of claim 4, wherein the memory controller starts garbage collection upon free space in the memory falling below a predetermined threshold. | 2,100 |
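The scheme claimed in this row — the memory controller picks the next garbage-collection target block, converts the physical addresses of its still-valid pages into logical page addresses, and the host then rewrites those logical pages elsewhere so the block can be erased with no copying — can be sketched as a toy flash translation layer. Block geometry and naming here are assumptions for illustration only.

```python
class FlashSketch:
    """Toy flash-translation-layer sketch of the claimed scheme: report the
    logical addresses of a target block's valid pages so the host rewrites
    them, leaving the block erasable without any page copying."""

    PAGES_PER_BLOCK = 4  # hypothetical geometry

    def __init__(self, num_blocks):
        self.blocks = [[None] * self.PAGES_PER_BLOCK for _ in range(num_blocks)]
        self.l2p = {}          # logical page -> (block, page)
        self.cursor = (0, 0)   # next free physical page, append-only

    def write(self, lpage, data):
        b, p = self.cursor
        old = self.l2p.get(lpage)
        if old is not None:
            ob, op = old
            self.blocks[ob][op] = None   # invalidate the stale physical copy
        self.blocks[b][p] = (lpage, data)
        self.l2p[lpage] = (b, p)
        p += 1
        self.cursor = (b + 1, 0) if p == self.PAGES_PER_BLOCK else (b, p)

    def valid_logical_pages(self, block):
        # Physical-to-logical conversion for the target block's valid pages.
        return [entry[0] for entry in self.blocks[block] if entry is not None]

    def erase(self, block):
        assert not self.valid_logical_pages(block), "valid data would be lost"
        self.blocks[block] = [None] * self.PAGES_PER_BLOCK
```

Once the host has rewritten every reported logical page, all pages in the target block are invalid, so `erase` removes the block without the copy traffic that conventional garbage collection incurs.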
5,990 | 5,990 | 14,169,268 | 2,152 | Embodiments are directed towards real time display of event records and extracted values based on at least one extraction rule, such as a regular expression. A user interface may be employed to enable a user to have an extraction rule automatically generate and/or to manually enter an extraction rule. The user may be enabled to manually edit a previously provided extraction rule, which may result in real time display of updated extracted values. The extraction rule may be utilized to extract values from each of a plurality of records, including event records of unstructured machine data. Statistics may be determined for each unique extracted value, and may be displayed to the user in real time. The user interface may also enable the user to select at least one unique extracted value to display those event records that include an extracted value that matches the selected value. | 1-30. (canceled) 31. A computer-implemented method, comprising:
receiving machine data; generating, using one or more processors, a set of events, wherein each event in the set of events includes a portion of the machine data; associating a time with each event in the set of events, the time for each event extracted from the machine data included in that event; storing the set of events in a data store such that they are searchable at least by their associated times; displaying an extraction rule, wherein the extraction rule specifies how to extract a value for a field from machine data included in an event; displaying a subset of events of the set of events; emphasizing in the displayed subset of events a value for the field that would be extracted from each of the events in the subset of events by applying the extraction rule; receiving input indicating that the emphasized value in a given event in the subset of events should not be the value for the field for the given event; based on the input, automatically modifying the extraction rule so that it would extract a different value as a value for the field for the given event when applied to the given event; and modifying the displayed given event to emphasize the different value for the field for the given event. 32. The method of claim 31, wherein the extraction rule includes a regular expression. 33. The method of claim 31, wherein the machine data includes log data. 34. The method of claim 31, further comprising displaying the modified extraction rule. 35. The method of claim 31, wherein the extraction rule is received from a user through manual keyboard input. 36. The method of claim 31, wherein the extraction rule is automatically generated to extract as the value for the field for a displayed event text that a user has selected in the event. 37. 
The method of claim 31, further comprising modifying a second event in the displayed subset of events to emphasize a value that would be extracted for the field for the second event by applying the modified extraction rule to the second event. 38. The method of claim 31, further comprising:
receiving a label for the field corresponding to the extraction rule; and using the label to search for an event using the field. 39. The method of claim 31, further comprising:
identifying a set of unique field values that would be extracted for the field by applying the extraction rule to events in the set of events; and displaying one or more unique field values in the set of unique field values. 40. The method of claim 31, further comprising:
identifying a set of unique field values that would be extracted for the field by applying the extraction rule to events in the set of events; and displaying a statistic for one or more unique field values in the set of unique field values. 41. The method of claim 31, further comprising:
identifying a set of unique field values that would be extracted for the field by applying the extraction rule to events in the set of events; and displaying a statistic for one or more unique field values in the set of unique field values, wherein the statistic includes a count of events in which the unique field value appears as the value for the field or a percentage of events in which the unique field value appears as the value for the field. 42. The method of claim 31, further comprising:
identifying a set of unique field values that would be extracted for the field by applying the extraction rule to events in the set of events; receiving a selection of a unique field value in the set of unique field values; and displaying only events in the subset of events for which the extraction rule would extract the selected unique field value when applied to the events. 43. A non-transitory computer-readable medium having computer-executable instructions for performing the method of claim 31. 44. A system having one or more processors that is adapted to perform the method of claim 31. | Embodiments are directed towards real time display of event records and extracted values based on at least one extraction rule, such as a regular expression. A user interface may be employed to enable a user to have an extraction rule automatically generate and/or to manually enter an extraction rule. The user may be enabled to manually edit a previously provided extraction rule, which may result in real time display of updated extracted values. The extraction rule may be utilized to extract values from each of a plurality of records, including event records of unstructured machine data. Statistics may be determined for each unique extracted value, and may be displayed to the user in real time. The user interface may also enable the user to select at least one unique extracted value to display those event records that include an extracted value that matches the selected value.1-30. (canceled) 31. A computer-implemented method, comprising:
receiving machine data; generating, using one or more processors, a set of events, wherein each event in the set of events includes a portion of the machine data; associating a time with each event in the set of events, the time for each event extracted from the machine data included in that event; storing the set of events in a data store such that they are searchable at least by their associated times; displaying an extraction rule, wherein the extraction rule specifies how to extract a value for a field from machine data included in an event; displaying a subset of events of the set of events; emphasizing in the displayed subset of events a value for the field that would be extracted from each of the events in the subset of events by applying the extraction rule; receiving input indicating that the emphasized value in a given event in the subset of events should not be the value for the field for the given event; based on the input, automatically modifying the extraction rule so that it would extract a different value as a value for the field for the given event when applied to the given event; and modifying the displayed given event to emphasize the different value for the field for the given event. 32. The method of claim 31, wherein the extraction rule includes a regular expression. 33. The method of claim 31, wherein the machine data includes log data. 34. The method of claim 31, further comprising displaying the modified extraction rule. 35. The method of claim 31, wherein the extraction rule is received from a user through manual keyboard input. 36. The method of claim 31, wherein the extraction rule is automatically generated to extract as the value for the field for a displayed event text that a user has selected in the event. 37. 
The method of claim 31, further comprising modifying a second event in the displayed subset of events to emphasize a value that would be extracted for the field for the second event by applying the modified extraction rule to the second event. 38. The method of claim 31, further comprising:
receiving a label for the field corresponding to the extraction rule; and using the label to search for an event using the field. 39. The method of claim 31, further comprising:
identifying a set of unique field values that would be extracted for the field by applying the extraction rule to events in the set of events; and displaying one or more unique field values in the set of unique field values. 40. The method of claim 31, further comprising:
identifying a set of unique field values that would be extracted for the field by applying the extraction rule to events in the set of events; and displaying a statistic for one or more unique field values in the set of unique field values. 41. The method of claim 31, further comprising:
identifying a set of unique field values that would be extracted for the field by applying the extraction rule to events in the set of events; and displaying a statistic for one or more unique field values in the set of unique field values, wherein the statistic includes a count of events in which the unique field value appears as the value for the field or a percentage of events in which the unique field value appears as the value for the field. 42. The method of claim 31, further comprising:
identifying a set of unique field values that would be extracted for the field by applying the extraction rule to events in the set of events; receiving a selection of a unique field value in the set of unique field values; and displaying only events in the subset of events for which the extraction rule would extract the selected unique field value when applied to the events. 43. A non-transitory computer-readable medium having computer-executable instructions for performing the method of claim 31. 44. A system having one or more processors that is adapted to perform the method of claim 31. | 2,100 |
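The field-extraction workflow claimed above — applying a regular-expression extraction rule to event records, computing per-unique-value statistics (count and percentage), and filtering events by a selected value — can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the sample log lines and the `status` field are invented.

```python
import re
from collections import Counter

# Hypothetical event records of unstructured machine data (log lines).
events = [
    "2024-01-01T00:00:01 status=200 path=/home",
    "2024-01-01T00:00:02 status=404 path=/missing",
    "2024-01-01T00:00:03 status=200 path=/about",
]

# Extraction rule: a regular expression with a named capture group for the field.
extraction_rule = re.compile(r"status=(?P<status>\d+)")

def extract(event):
    """Return the value the rule would extract for the field, or None."""
    m = extraction_rule.search(event)
    return m.group("status") if m else None

# Unique field values with per-value statistics: count of events in which the
# value appears as the field value, and the corresponding percentage of events.
values = [v for v in (extract(e) for e in events) if v is not None]
stats = {v: {"count": c, "percent": 100.0 * c / len(events)}
         for v, c in Counter(values).items()}

# Selecting a unique value displays only the events that would yield it.
selected = "200"
matching = [e for e in events if extract(e) == selected]
```

Editing the regular expression and re-running `extract` over the events is what would drive the claimed real-time refresh of emphasized values and statistics.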
5,991 | 5,991 | 13,771,019 | 2,166 | Managing assets in a movie during production, including: storing an image file in a temporary location; storing an XML file containing various metadata; sending a signal including the path to the XML file, wherein the XML file contains a path to the image file that is to be imported. Keywords include Photoshop and XML. | 1. A method of managing assets in a movie during production, comprising:
storing an image file in a temporary location; storing an XML file containing various metadata; sending a signal including the path to the XML file, wherein the XML file contains a path to the image file that is to be imported. 2. The method of claim 1, wherein the XML file also contains metadata which includes resolution, and any dialogue that might accompany the image file. 3. The method of claim 1, wherein the XML file containing the metadata is generated in a content processing tool. 4. The method of claim 1, wherein the image file includes a storyboard panel. 5. The method of claim 1, further comprising parsing the XML file. 6. The method of claim 1, further comprising:
generating a new shot; formatting the new shot to a specific need of a project; and importing the new shot into a storyboard management system. 7. The method of claim 6, wherein the new shot is placed right after a currently-selected shot in the storyboard management system. 8. A storyboard management system, comprising:
a non-transitory memory configured to receive and store an XML file containing various metadata created by at least one storyboard artist, wherein the metadata includes information about an image file generated in a content processing tool; and a processor configured to parse the XML file to process the image file, and generate and format a new shot to specific needs of a project. 9. The storyboard management system of claim 8, wherein the image file is generated by the content processing tool and stored in a temporary location. 10. The storyboard management system of claim 8, wherein the new shot is placed right after a currently-selected shot, and the new shot is automatically selected. 11. A non-transitory storage medium storing a computer program to manage assets during production of a movie, the computer program comprising executable instructions that cause a computer to:
store an image file in a temporary location; store an XML file containing various metadata; send a signal including the path to the XML file, wherein the XML file contains a path to the image file that is to be imported. 12. The non-transitory storage medium of claim 11, wherein the XML file also contains metadata which includes resolution, and any dialogue that might accompany the image file. 13. The non-transitory storage medium of claim 11, wherein the XML file containing the metadata is generated in a content processing tool. 14. The non-transitory storage medium of claim 11, wherein the image file includes a storyboard panel. 15. The non-transitory storage medium of claim 11, further comprising executable instructions that cause a computer to parse the XML file. 16. The non-transitory storage medium of claim 11, further comprising executable instructions that cause a computer to:
generate a new shot; format the new shot to a specific need of a project; and import the new shot into a storyboard management system. 17. The non-transitory storage medium of claim 16, wherein the new shot is placed right after a currently-selected shot in the storyboard management system. | Managing assets in a movie during production, including: storing an image file in a temporary location; storing an XML file containing various metadata; sending a signal including the path to the XML file, wherein the XML file contains a path to the image file that is to be imported. Keywords include Photoshop and XML. 1. A method of managing assets in a movie during production, comprising:
storing an image file in a temporary location; storing an XML file containing various metadata; sending a signal including the path to the XML file, wherein the XML file contains a path to the image file that is to be imported. 2. The method of claim 1, wherein the XML file also contains metadata which includes resolution, and any dialogue that might accompany the image file. 3. The method of claim 1, wherein the XML file containing the metadata is generated in a content processing tool. 4. The method of claim 1, wherein the image file includes a storyboard panel. 5. The method of claim 1, further comprising parsing the XML file. 6. The method of claim 1, further comprising:
generating a new shot; formatting the new shot to a specific need of a project; and importing the new shot into a storyboard management system. 7. The method of claim 6, wherein the new shot is placed right after a currently-selected shot in the storyboard management system. 8. A storyboard management system, comprising:
a non-transitory memory configured to receive and store an XML file containing various metadata created by at least one storyboard artist, wherein the metadata includes information about an image file generated in a content processing tool; and a processor configured to parse the XML file to process the image file, and generate and format a new shot to specific needs of a project. 9. The storyboard management system of claim 8, wherein the image file is generated by the content processing tool and stored in a temporary location. 10. The storyboard management system of claim 8, wherein the new shot is placed right after a currently-selected shot, and the new shot is automatically selected. 11. A non-transitory storage medium storing a computer program to manage assets during production of a movie, the computer program comprising executable instructions that cause a computer to:
store an image file in a temporary location; store an XML file containing various metadata; send a signal including the path to the XML file, wherein the XML file contains a path to the image file that is to be imported. 12. The non-transitory storage medium of claim 11, wherein the XML file also contains metadata which includes resolution, and any dialogue that might accompany the image file. 13. The non-transitory storage medium of claim 11, wherein the XML file containing the metadata is generated in a content processing tool. 14. The non-transitory storage medium of claim 11, wherein the image file includes a storyboard panel. 15. The non-transitory storage medium of claim 11, further comprising executable instructions that cause a computer to parse the XML file. 16. The non-transitory storage medium of claim 11, further comprising executable instructions that cause a computer to:
generate a new shot; format the new shot to a specific need of a project; and import the new shot into a storyboard management system. 17. The non-transitory storage medium of claim 16, wherein the new shot is placed right after a currently-selected shot in the storyboard management system. | 2,100 |
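The import flow claimed above — an image stored in a temporary location, an XML metadata file containing the path to that image, and a "signal" carrying the XML path to an importer that parses it into a new shot — can be sketched with the standard library. All file names, XML tag names, and the `import_shot` helper are invented for illustration.

```python
import os
import tempfile
import xml.etree.ElementTree as ET

# Store a (placeholder) image file in a temporary location.
tmpdir = tempfile.mkdtemp()
image_path = os.path.join(tmpdir, "panel_001.png")
open(image_path, "wb").close()

# Store an XML file whose metadata includes the path to the image to import,
# plus resolution and any accompanying dialogue.
root = ET.Element("shot")
ET.SubElement(root, "image").text = image_path
ET.SubElement(root, "resolution").text = "1920x1080"
ET.SubElement(root, "dialogue").text = "Opening line."
xml_path = os.path.join(tmpdir, "panel_001.xml")
ET.ElementTree(root).write(xml_path)

def import_shot(path_to_xml):
    """The 'signal' is just the XML path; parse it into a new shot record."""
    tree = ET.parse(path_to_xml)
    return {child.tag: child.text for child in tree.getroot()}

shot = import_shot(xml_path)
```

A storyboard management system would then format `shot` to the project's needs and place it right after the currently-selected shot.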
5,992 | 5,992 | 15,047,715 | 2,191 | A system includes a processor configured to detect a vehicle key-off. The processor is also configured to delete from a primary memory (“internal memory”) of an electronic control unit an existing software version for which a new software version update exists in a secondary memory of the ECU. The processor is further configured to load the new software version from the secondary memory (“external memory”) into the primary memory and upon detection of a failure during the load, delete the new software version from the primary memory and reload the existing software version from the secondary memory. | 1. A system comprising:
a processor configured to: detect a vehicle key-off; delete from a primary memory of an electronic control unit an existing software version for which a new software version update exists in a secondary memory of the ECU; load the new software version from the secondary memory into the primary memory; and upon detection of a failure during the load, delete the new software version from the primary memory and reload the existing software version from the secondary memory. 2. The system of claim 1, wherein the processor is further configured to test the new software version following successful load to the internal memory. 3. The system of claim 2, wherein the processor is further configured to delete the new software version from the primary memory and reload the existing software version from the secondary memory upon detection of an error during the test. 4. The system of claim 1, wherein the processor is further configured to delete the new software version from the primary memory and reload the existing software version from the secondary memory if an error occurs when an electronic control unit attempts to utilize the new software version. 5. The system of claim 1, wherein the processor is configured to suspend drivability of the vehicle during the delete and the load. 6. The system of claim 1, wherein the processor is configured to reload the existing software version from a location in the secondary memory in which an update resulting in the existing software version was previously stored when the existing software version was first loaded. 7. The system of claim 1, wherein the processor is configured to copy the existing software version to the secondary memory from the primary memory prior to deleting the existing software version, if the existing software version does not already exist in the secondary memory. 8. A computer-implemented method comprising:
deleting from internal memory an existing software version for which a new software version update exists in external memory in response to a vehicle key-off event; loading the new software version into the internal memory from the external memory; and responsive to a failure in the loading, deleting the new software version from the internal memory and reloading the existing software version from the external memory. 9. The method of claim 8, further comprising testing the new software version following successful loading of the new software version into the internal memory. 10. The method of claim 9, further comprising deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory responsive to an error during the testing. 11. The method of claim 8, further comprising deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory responsive to an error occurring when an electronic control unit attempts to utilize the new software version. 12. The method of claim 8, further comprising suspending drivability of the vehicle during the deleting and the loading. 13. The method of claim 8, wherein the reloading includes reloading the existing software version from a location in external memory in which an update resulting in the existing software version was previously stored when the existing software version was first loaded. 14. The method of claim 8, further comprising copying the existing software version to the external memory prior to deleting the existing software version, if an update corresponding to the existing software version does not already exist in the external memory. 15. 
A non-transitory computer-readable storage medium, storing instructions which, when executed, cause a vehicle processor to perform a method comprising:
deleting an existing software version from internal memory if a new software version update exists in external memory in response to a vehicle key-off event; loading the new software version into the internal memory from the external memory; and responsive to a failure in the loading, deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory. 16. The storage medium of claim 15, the method further comprising testing the new software version following successful loading of the new software version to the internal memory. 17. The storage medium of claim 16, the method further comprising deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory responsive to an error during the testing. 18. The storage medium of claim 15, the method further comprising deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory responsive to an error occurring when an electronic control unit attempts to utilize the new software version. 19. The storage medium of claim 15, the method further comprising suspending drivability of the vehicle while any deleting and loading is ongoing. 20. The storage medium of claim 15, the method further comprising copying the existing software version to the external memory prior to deleting the existing software version, if an update corresponding to the existing software version does not already exist in the external memory. | A system includes a processor configured to detect a vehicle key-off. 
The processor is also configured to delete from a primary memory (“internal memory”) of an electronic control unit an existing software version for which a new software version update exists in a secondary memory of the ECU. The processor is further configured to load the new software version from the secondary memory (“external memory”) into the primary memory and upon detection of a failure during the load, delete the new software version from the primary memory and reload the existing software version from the secondary memory.1. A system comprising:
a processor configured to: detect a vehicle key-off; delete from a primary memory of an electronic control unit an existing software version for which a new software version update exists in a secondary memory of the ECU; load the new software version from the secondary memory into the primary memory; and upon detection of a failure during the load, delete the new software version from the primary memory and reload the existing software version from the secondary memory. 2. The system of claim 1, wherein the processor is further configured to test the new software version following successful load to the internal memory. 3. The system of claim 2, wherein the processor is further configured to delete the new software version from the primary memory and reload the existing software version from the secondary memory upon detection of an error during the test. 4. The system of claim 1, wherein the processor is further configured to delete the new software version from the primary memory and reload the existing software version from the secondary memory if an error occurs when an electronic control unit attempts to utilize the new software version. 5. The system of claim 1, wherein the processor is configured to suspend drivability of the vehicle during the delete and the load. 6. The system of claim 1, wherein the processor is configured to reload the existing software version from a location in the secondary memory in which an update resulting in the existing software version was previously stored when the existing software version was first loaded. 7. The system of claim 1, wherein the processor is configured to copy the existing software version to the secondary memory from the primary memory prior to deleting the existing software version, if the existing software version does not already exist in the secondary memory. 8. A computer-implemented method comprising:
deleting from internal memory an existing software version for which a new software version update exists in external memory in response to a vehicle key-off event; loading the new software version into the internal memory from the external memory; and responsive to a failure in the loading, deleting the new software version from the internal memory and reloading the existing software version from the external memory. 9. The method of claim 8, further comprising testing the new software version following successful loading of the new software version into the internal memory. 10. The method of claim 9, further comprising deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory responsive to an error during the testing. 11. The method of claim 8, further comprising deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory responsive to an error occurring when an electronic control unit attempts to utilize the new software version. 12. The method of claim 8, further comprising suspending drivability of the vehicle during the deleting and the loading. 13. The method of claim 8, wherein the reloading includes reloading the existing software version from a location in external memory in which an update resulting in the existing software version was previously stored when the existing software version was first loaded. 14. The method of claim 8, further comprising copying the existing software version to the external memory prior to deleting the existing software version, if an update corresponding to the existing software version does not already exist in the external memory. 15. 
A non-transitory computer-readable storage medium, storing instructions which, when executed, cause a vehicle processor to perform a method comprising:
deleting an existing software version from internal memory if a new software version update exists in external memory in response to a vehicle key-off event; loading the new software version into the internal memory from the external memory; and responsive to a failure in the loading, deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory. 16. The storage medium of claim 15, the method further comprising testing the new software version following successful loading of the new software version to the internal memory. 17. The storage medium of claim 16, the method further comprising deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory responsive to an error during the testing. 18. The storage medium of claim 15, the method further comprising deleting the new software version from the internal memory and reloading the existing software version from a location where the existing software version exists in the external memory responsive to an error occurring when an electronic control unit attempts to utilize the new software version. 19. The storage medium of claim 15, the method further comprising suspending drivability of the vehicle while any deleting and loading is ongoing. 20. The storage medium of claim 15, the method further comprising copying the existing software version to the external memory prior to deleting the existing software version, if an update corresponding to the existing software version does not already exist in the external memory. | 2,100 |
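The delete-load-rollback flow in the claims above can be sketched as follows: primary ("internal") memory holds the active software, secondary ("external") memory retains stored versions, and a detected load failure triggers reload of the existing version. The class, method names, and the `load_fails` hook are invented stand-ins; they are not the patented implementation.

```python
class Ecu:
    """Toy model of an electronic control unit with two-tier software storage."""

    def __init__(self, existing_version: str) -> None:
        self.primary = existing_version       # software active in primary memory
        self.secondary = {existing_version}   # versions held in secondary memory

    def apply_update(self, new_version: str, load_fails: bool = False) -> bool:
        """Run in response to a vehicle key-off event; returns True on success."""
        existing = self.primary
        self.secondary.add(new_version)       # update is staged externally
        self.primary = None                   # delete the existing version
        if load_fails:
            # Failure during the load: delete the new version from primary
            # memory and reload the existing version from secondary memory.
            self.secondary.discard(new_version)
            self.primary = existing
            return False
        self.primary = new_version            # load completed successfully
        return True
```

Note that on success the old version stays in secondary memory, matching the claims that allow reloading the existing version from where its update was previously stored if the new software later errors in use.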
5,993 | 5,993 | 14,307,489 | 2,115 | A method for virtually connecting design, construction, equipment and control systems as a holistic system to discover the peak performance of energy consuming systems based on system inherent characteristics is provided. The method can include collecting primary information from the energy consuming system, developing energy consuming system schematics, evaluating energy consuming system performance, determining discrepancy and field verification information, defining the energy system limitation for optimization, and activating the virtual holistic system; the method can further include performing energy system optimization on the virtual holistic system and performing energy system performance financial analysis on the virtual holistic system. | 1. A method for virtually connecting design, construction, equipment and control systems as a holistic system to determine the peak performance of an energy consuming system based on system inherent characteristics, the method comprising:
collecting primary information from the energy consuming system; developing energy consuming system schematics; evaluating energy consuming system performance; determining discrepancy and field verification information; defining the energy system limitation for optimization; and activating the virtual holistic system. 2. The method of claim 1, further comprising:
performing energy system optimization on the virtual holistic system. 3. The method of claim 2, further comprising:
performing energy system performance financial analysis on the virtual holistic system. 4. The method of claim 1, wherein the collecting primary information from subsystems of the energy consuming system includes:
collecting system design information; collecting control system information; collecting subsystem equipment information; collecting system operation information; and verifying system design by inspection and control system sensor readings. | A method for virtually connecting design, construction, equipment and control systems as a holistic system to discover the peak performance of energy consuming systems based on system inherent characteristics is provided. The method can include collecting primary information from the energy consuming system, developing energy consuming system schematics, evaluating energy consuming system performance, determining discrepancy and field verification information, defining the energy system limitation for optimization, and activating the virtual holistic system; the method can further include performing energy system optimization on the virtual holistic system and performing energy system performance financial analysis on the virtual holistic system. 1. A method for virtually connecting design, construction, equipment and control systems as a holistic system to determine the peak performance of an energy consuming system based on system inherent characteristics, the method comprising:
collecting primary information from the energy consuming system; developing energy consuming system schematics; evaluating energy consuming system performance; determining discrepancy and field verification information; defining the energy system limitation for optimization; and activating the virtual holistic system. 2. The method of claim 1, further comprising:
performing energy system optimization on the virtual holistic system. 3. The method of claim 2, further comprising:
performing energy system performance financial analysis on the virtual holistic system. 4. The method of claim 1, wherein the collecting primary information from subsystems of the energy consuming system includes:
collecting system design information; collecting control system information; collecting subsystem equipment information; collecting system operation information; and verifying system design by inspection and control system sensor readings. | 2,100 |
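The claimed pipeline — collect primary information, evaluate performance against design, and optimize only within the defined system limitation — can be outlined schematically. This is purely illustrative: the claims specify no quantities or algorithms, so the flow values, the limitation, and every function here are invented.

```python
def collect_primary_information():
    """Stand-in for design, control, equipment, and operation data collection."""
    return {"design_flow": 100.0, "measured_flow": 80.0, "min_flow": 60.0}

def evaluate_performance(info):
    """Discrepancy between design intent and field-verified operation."""
    return info["design_flow"] - info["measured_flow"]

def optimize(info):
    """Reduce flow toward the defined limitation, never past it."""
    return max(info["min_flow"], info["measured_flow"] - 10.0)

info = collect_primary_information()
discrepancy = evaluate_performance(info)
optimized_flow = optimize(info)

# A simple financial-analysis proxy: fractional reduction in consumption.
savings_fraction = 1.0 - optimized_flow / info["measured_flow"]
```

The key structural point is that the limitation (`min_flow` here) bounds the optimization, which is what the claims describe as defining the energy system limitation before activating the virtual holistic system.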
5,994 | 5,994 | 15,166,624 | 2,181 | A data acquisition system includes a receptacle and a data acquisition device. The receptacle has a housing, sensor inputs to receive data signals from sensors coupled to an object, and a rib to block insertion of a standard Universal Serial Bus (USB) plug and facilitate insertion of a modified USB plug having a slot that mates with the rib. The data acquisition device includes circuitry to receive, store and process data, a USB plug having pins operatively coupled to the circuitry, a first subset of pins configured to receive data signals from the receptacle and a second subset of pins configured to support standard USB communication with USB-compliant devices, and a slot formed in the USB plug such that the slot facilitates interconnection of the USB plug both with standard USB-compliant devices and with the receptacle, the slot mating with the rib to facilitate interconnection. | 1-30. (canceled) 31. A wearable electrocardiogram (ECG) data acquisition device comprising:
a USB data connector configured to:
receive ECG data signals from electrode leads; and
connect with a standard USB interface and with a living subject-connected receptacle configured to prevent interconnection with the standard USB interface. 32. The device of claim 31, wherein the data connector comprises a feature that defeats a connection prevention mechanism in the living subject-connected receptacle. 33. The device of claim 32, wherein the data connector comprises a slot that mates with a rib in the living subject-connected receptacle. 34. The device of claim 31, wherein the electrode leads are configured to connect to a living subject. 35. The device of claim 31, further comprising circuitry to store, process, and/or transmit the received data signals. 36. The device of claim 31, wherein the standard interface is a physical interface. 37. An ECG data acquisition receptacle comprising:
an input configured to receive data signals from ECG electrode leads; a connection prevention mechanism that is configured to:
prevent interconnection with a standard interface, and
facilitate insertion of and electrical connection with a connector having a feature that defeats the connection prevention mechanism. 38. The receptacle of claim 37, wherein the receptacle comprises a rib that mates with a slot in the data connector. 39. The receptacle of claim 38, wherein the rib is formed along a center portion of the receptacle. 40. The receptacle of claim 37, wherein the electrode leads are connected to a living subject and the data connector is configured to receive biometric data signals from the living subject. 41. The receptacle of claim 37, further comprising a substrate configured to facilitate secure connection of the data connector while inserted in the receptacle. 42. The receptacle of claim 37, further comprising a recess to receive the data connector. 43. The receptacle of claim 37, wherein the receptacle is formed in an unshrouded configuration that lacks a recess. 44. The receptacle of claim 37, wherein the standard interface is a physical interface. 45. A wearable ECG data acquisition device comprising a USB plug configured to connect with (i) a standard USB-compliant device and (ii) a living subject-connected receptacle configured to prevent interconnection with standard USB plugs. 46. The device of claim 45, wherein the USB plug comprises a feature that defeats a connection prevention mechanism in the living subject-connected receptacle. 47. The device of claim 46, wherein the USB plug comprises a slot that mates with a rib in the living subject-connected receptacle. 48. The device of claim 45, further comprising circuitry to store, process, and/or transmit the received data signals. 49. A wearable ECG data acquisition receptacle comprising a connection prevention mechanism that prevents interconnection with standard USB plugs but that allows insertion of and electrical connection with a modified USB plug having a feature that defeats the connection prevention mechanism. 50. 
The device of claim 49, in which the USB plug comprises a slot that mates with a rib in the living subject-connected receptacle. | A data acquisition system includes a receptacle and a data acquisition device. The receptacle has a housing, sensor inputs to receive data signals from sensors coupled to an object, and a rib to block insertion of a standard Universal Serial Bus (USB) plug and facilitate insertion of a modified USB plug having a slot that mates with the rib. The data acquisition device includes circuitry to receive, store and process data, a USB plug having pins operatively coupled to the circuitry, a first subset of pins configured to receive data signals from the receptacle and a second subset of pins configured to support standard USB communication with USB-compliant devices, and a slot formed in the USB plug such that the slot facilitates interconnection of the USB plug both with standard USB-compliant devices and with the receptacle, the slot mating with the rib to facilitate interconnection.1-30. (canceled) 31. A wearable electrocardiogram (ECG) data acquisition device comprising:
a USB data connector configured to:
receive ECG data signals from electrode leads; and
connect with a standard USB interface and with a living subject-connected receptacle configured to prevent interconnection with the standard USB interface. 32. The device of claim 31, wherein the data connector comprises a feature that defeats a connection prevention mechanism in the living subject-connected receptacle. 33. The device of claim 32, wherein the data connector comprises a slot that mates with a rib in the living subject-connected receptacle. 34. The device of claim 31, wherein the electrode leads are configured to connect to a living subject. 35. The device of claim 31, further comprising circuitry to store, process, and/or transmit the received data signals. 36. The device of claim 31, wherein the standard interface is a physical interface. 37. An ECG data acquisition receptacle comprising:
an input configured to receive data signals from ECG electrode leads; a connection prevention mechanism that is configured to:
prevent interconnection with a standard interface, and
facilitate insertion of and electrical connection with a connector having a feature that defeats the connection prevention mechanism. 38. The receptacle of claim 37, wherein the receptacle comprises a rib that mates with a slot in the data connector. 39. The receptacle of claim 38, wherein the rib is formed along a center portion of the receptacle. 40. The receptacle of claim 37, wherein the electrode leads are connected to a living subject and the data connector is configured to receive biometric data signals from the living subject. 41. The receptacle of claim 37, further comprising a substrate configured to facilitate secure connection of the data connector while inserted in the receptacle. 42. The receptacle of claim 37, further comprising a recess to receive the data connector. 43. The receptacle of claim 37, wherein the receptacle is formed in an unshrouded configuration that lacks a recess. 44. The receptacle of claim 37, wherein the standard interface is a physical interface. 45. A wearable ECG data acquisition device comprising a USB plug configured to connect with (i) a standard USB-compliant device and (ii) a living subject-connected receptacle configured to prevent interconnection with standard USB plugs. 46. The device of claim 45, wherein the USB plug comprises a feature that defeats a connection prevention mechanism in the living subject-connected receptacle. 47. The device of claim 46, wherein the USB plug comprises a slot that mates with a rib in the living subject-connected receptacle. 48. The device of claim 45, further comprising circuitry to store, process, and/or transmit the received data signals. 49. A wearable ECG data acquisition receptacle comprising a connection prevention mechanism that prevents interconnection with standard USB plugs but that allows insertion of and electrical connection with a modified USB plug having a feature that defeats the connection prevention mechanism. 50. 
The receptacle of claim 49, in which the modified USB plug comprises a slot that mates with a rib in the receptacle. | 2,100 |
5,995 | 5,995 | 15,603,499 | 2,196 | Systems, methods, and computer program products for scheduling computing jobs are disclosed. In implementations, the systems, methods, and computer program products perform operations including determining that a first computing job has a dependency on a second computing job. The operations also include determining a type of the dependency on the second computing job. The operations further include determining a completion status of the second computing job. Additionally, the operations include executing the first computing job based on the completion status of the second computing job and the type of the dependency on the second computing job. The operations can further include executing the second computing job based on a schedule and/or based on the type of dependency. The type of dependency can include a hard dependency and a soft dependency. | 1. A scheduling system for scheduling computing jobs, the scheduling system comprising a processor, a data storage device, and program instructions stored on the data storage device that, when executed by the processor, control the scheduling system to perform operations comprising:
determining that a first computing job of a plurality of computing jobs has a dependency on a second computing job of the plurality of computing jobs; determining a type of the dependency on the second computing job; determining a completion status of the second computing job; and executing the first computing job based on the completion status of the second computing job and the type of the dependency on the second computing job. 2. The scheduling system of claim 1, wherein the operations further comprise determining job schedule parameters of the first computing job, the job schedule parameters including scheduling information, dependency information, and dependency type information. 3. The scheduling system of claim 2, wherein the scheduling information indicates a time period for starting execution of the first computing job. 4. The scheduling system of claim 2, wherein the dependency information indicates one or more computing jobs that must complete execution before starting execution of the first computing job. 5. The scheduling system of claim 4, wherein the dependency type information indicates the type of the dependency on the computing job that must complete. 6. The scheduling system of claim 5, wherein the dependency type information is selected from a group consisting of a hard dependency type and a soft dependency type. 7. The scheduling system of claim 5, wherein:
the hard dependency type indicates that the computing job must be successfully completed before starting execution of the first computing job, and the soft dependency type indicates that the computing job must be completed before starting execution of the first computing job regardless of whether the second computing job completed successfully. 8. The scheduling system of claim 1, wherein the operations further comprise determining whether to skip the first computing job based on its dependency on the second computing job. 9. The scheduling system of claim 8, wherein determining whether to skip the first computing job comprises:
determining a dependency of the first computing job; determining a dependency of the dependency of the first computing job; determining whether the dependency of the first computing job and the dependency of the dependency of the first computing job are scheduled for execution during the time period indicated by the scheduling information of the first computing job; determining whether the dependency of the first computing job scheduled for execution is complete; and determining whether the dependency of the dependency of the first computing job scheduled for execution is complete. 10. A method for scheduling computing jobs comprising:
determining that a first computing job of a plurality of computing jobs has a dependency on a second computing job of the plurality of computing jobs; determining a type of the dependency on the second computing job; determining a completion status of the second computing job; and executing the first computing job based on the completion status of the second computing job and the type of the dependency on the second computing job. 11. The method of claim 10, further comprising determining job schedule parameters of the first computing job, the job schedule parameters including scheduling information, dependency information, and dependency type information. 12. The method of claim 11, wherein the scheduling information indicates an occurrence rate over a time period for starting execution of the first computing job. 13. The method of claim 11, wherein the dependency information indicates which computing job must complete before starting execution of the first computing job. 14. The method of claim 13, wherein the dependency type information indicates the type of the dependency on the computing job that must be complete. 15. The method of claim 14, wherein:
the hard dependency type indicates that the computing job that must be complete is required to be successfully completed before starting execution of the first computing job, and the soft dependency type indicates that the computing job that must be complete is required to be completed before starting execution of the first computing job regardless of whether the second computing job completed successfully. 16. The method of claim 10, further comprising determining whether to skip the first computing job based on the dependency on the second computing job. 17. The method of claim 16, wherein determining whether to skip the first computing job comprises:
determining a dependency of the first computing job; determining a dependency of the dependency of the first computing job; determining whether the dependency of the first computing job and the dependency of the dependency of the first computing job are scheduled for execution during a time period indicated by the scheduling information of the first computing job; determining whether the dependency of the first computing job scheduled for execution is complete; and determining whether the dependency of the dependency of the first computing job scheduled for execution is complete. 18. A computer program product containing program instructions stored on a computer-readable data storage device that, when executed by a processor, control a job scheduling system to perform operations comprising:
maintaining job status information of a plurality of computing jobs, the job status information indicating whether individual computing jobs of the plurality of jobs have been executed, whether the execution was complete, and whether the execution was successful; maintaining a job execution queue for the plurality of computing jobs based on respective job schedule parameters of the plurality of computing jobs and the respective job status information of the plurality of computing jobs; determining that a first computing job of the plurality of computing jobs depends on a second computing job of the plurality of computing jobs based on a job dependency parameter of the first computing job; determining that the second computing job is incomplete based on the respective job status information of the second computing job; and skipping execution of the first computing job by excluding the first computing job from the job execution queue. 19. The computer program product of claim 18, further comprising determining a dependency type parameter of the first computing job. 20. The computer program product of claim 19, wherein:
the dependency type parameter is selected from a group consisting of a hard dependency and a soft dependency; the hard dependency type indicates that the second computing job must be completed successfully before starting execution of the first computing job, and the soft dependency type indicates that the second computing job must be completed before starting execution of the first computing job regardless of whether the second computing job completed successfully. | 2,100 |
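The hard/soft dependency check recited in claims 1, 7, and 15 above can be sketched in Python. The `Job` class, the status values, and the function names below are illustrative assumptions for the sketch, not terminology from the patent record:

```python
from dataclasses import dataclass, field
from enum import Enum

class DepType(Enum):
    HARD = "hard"   # prerequisite must have completed successfully
    SOFT = "soft"   # prerequisite must have completed, success or not

class Status(Enum):
    PENDING = "pending"
    SUCCEEDED = "succeeded"
    FAILED = "failed"

@dataclass
class Job:
    name: str
    deps: dict = field(default_factory=dict)  # prerequisite name -> DepType
    status: Status = Status.PENDING

def can_start(job, jobs):
    """Return True if every dependency of `job` is satisfied.

    A HARD dependency requires the prerequisite to have completed
    successfully; a SOFT dependency only requires it to have completed
    (successfully or not), mirroring claims 7 and 15.  An incomplete
    prerequisite causes the job to be skipped/deferred (claims 8, 16).
    """
    for dep_name, dep_type in job.deps.items():
        dep = jobs[dep_name]
        if dep.status is Status.PENDING:
            return False  # prerequisite has not completed yet
        if dep_type is DepType.HARD and dep.status is not Status.SUCCEEDED:
            return False  # hard prerequisite did not complete successfully
    return True

# Illustrative jobs: "extract" failed, so a hard dependent cannot start
# but a soft dependent can.
jobs = {
    "extract": Job("extract", status=Status.FAILED),
    "load": Job("load", deps={"extract": DepType.HARD}),
    "report": Job("report", deps={"extract": DepType.SOFT}),
}
```

With these illustrative jobs, `can_start(jobs["load"], jobs)` is False because the hard prerequisite failed, while `can_start(jobs["report"], jobs)` is True because a soft prerequisite only needs to have completed.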
5,996 | 5,996 | 14,449,689 | 2,135 | A memory controller and method are provided for controlling a memory device to process access requests issued by at least one master device, the memory device having a plurality of access regions. The memory controller has a pending access requests storage that buffers access requests that have been issued by a master device prior to those access requests being processed by the memory device. Access control circuitry then issues control commands to the plurality of access regions in order to control the memory device to process access requests retrieved from the pending access requests storage. A query structure is also provided that is configured to maintain, for each access region, information about the buffered access requests in the pending access requests storage, and the access control circuitry references the query structure when determining the control commands to be issued to the plurality of access regions. Such an approach enables significant performance and energy savings to be realized in control of the memory device, without requiring the contents of the pending access requests storage to be directly monitored by the access control circuitry. | 1. A memory controller comprising:
a pending access requests storage configured to buffer access requests issued by at least one master device prior to those access requests being processed by a memory device; access control circuitry configured to issue control commands to a plurality of access regions in the memory device, in order to control the memory device to process access requests retrieved from the pending access requests storage; a query structure configured to maintain, for each access region, information about the buffered access requests in the pending access requests storage; and the access control circuitry being configured to reference the query structure when determining the control commands to be issued to the plurality of access regions. 2. A memory controller as claimed in claim 1, wherein the query structure comprises information storage configured to store the information for each access region, and maintenance circuitry configured to modify the information associated with one or more access regions as each access request is added to the pending access requests storage, or removed from the pending access requests storage. 3. A memory controller as claimed in claim 1, wherein:
each access region comprises a range of memory addresses; and for each access region, the information maintained in the query structure is indicative of whether the buffered access requests include any access requests specifying a memory address within that access region's range of memory addresses. 4. A memory controller as claimed in claim 3, wherein for each access region the information comprises a counter value indicative of the number of buffered access requests that specify a memory address within that access region's range of memory addresses. 5. A memory controller as claimed in claim 4, wherein the query structure comprises:
counter value storage configured to store the counter value for each access region; and update circuitry configured, when an access request is added to the pending access requests storage, to adjust in a first direction the counter value associated with each access region whose range of memory addresses includes the memory address specified by that added access request; the update circuitry being further configured, when an access request is removed from the pending access requests storage, to adjust in a second direction opposite to the first direction the counter value associated with each access region whose range of memory addresses includes the memory address specified by that removed access request. 6. A memory controller as claimed in claim 1, wherein the plurality of access regions comprise a plurality of groups of access regions, and for at least one group of access regions, the query structure is configured to provide information that is shared between multiple access regions in that group of access regions. 7. A memory controller as claimed in claim 6, wherein for said at least one group of access regions, the query structure implements a probabilistic update mechanism for the information that is shared between multiple access regions in that group. 8. A memory controller as claimed in claim 7, wherein the probabilistic update mechanism is a Bloom filter mechanism. 9. A memory controller as claimed in claim 6, wherein the plurality of groups of access regions are arranged as a plurality of hierarchical levels, such that for each access region at one hierarchical level, there are a plurality of associated access regions at a lower hierarchical level. 10. 
A memory controller as claimed in claim 9, wherein for each access region at said one hierarchical level, the information maintained in the query structure for that access region is an aggregate of the information maintained in the query structure for the associated access regions at said lower hierarchical level. 11. A memory controller as claimed in claim 9, wherein:
the memory device comprises a plurality of banks, and each bank comprises a plurality of rows; each bank forms an access region within a group of access regions at one hierarchical level, and each row forms an access region within another group of access regions at a lower hierarchical level. 12. A memory controller as claimed in claim 11, wherein:
the memory device further comprises a plurality of ranks, each rank comprising multiple banks from said plurality of banks; and each rank forms an access region within a group of access regions at a higher hierarchical level than said one hierarchical level containing access regions for each bank. 13. A memory controller as claimed in claim 11, wherein for the group containing access regions formed from each row, the query structure is configured to provide information that is shared between multiple rows within that group. 14. A memory controller as claimed in claim 13, wherein, in association with the rows in each bank, the query structure maintains a plurality of counter values, the number of counter values being less than the number of rows in each bank, the query structure employing a hash function to identify from an input value a corresponding counter value within said plurality of counter values, the input value providing a row identifier. 15. A memory controller as claimed in claim 14, wherein the input value further provides an attribute associated with an access request. 16. A memory controller as claimed in claim 15, wherein said attribute comprises one or more of a quality of service indication and a master device identifier. 17. A memory controller as claimed in claim 1, wherein:
the access control circuitry is configured to perform a scheduling operation to determine an order in which the buffered access requests are to be processed by the memory device, and the control commands issued by the access control circuitry include scheduling control commands issued to the plurality of access regions in order to cause the buffered access requests to be processed in the determined order; and the access control circuitry being configured to reference the query structure when performing said scheduling operation. 18. A memory controller as claimed in claim 1, wherein:
the access control circuitry is configured to perform a power management operation to control a power state of the access regions during the processing of the access requests by the memory device, and the control commands issued by the access control circuitry include power control commands issued to the plurality of access regions in order to control the power state of each access region; and the access control circuitry being configured to reference the query structure when performing said power management operation. 19. A memory controller as claimed in claim 1, wherein the access requests buffered in the pending access requests storage comprise read access requests and write access requests and the query structure is configured to store, for each access region, information for the read access requests and information for the write access requests. 20. A memory controller as claimed in claim 1, wherein the pending access requests storage is configured as a queue storage structure. 21. A memory controller as claimed in claim 1, wherein said memory device is a DRAM memory device. 22. A method of controlling a memory device to process access requests issued by at least one master device, the memory device having a plurality of access regions, and the method comprising:
buffering, within a pending access requests storage, access requests issued by said at least one master device prior to those access requests being processed by the memory device; employing access control circuitry to issue control commands to the plurality of access regions in order to control the memory device to process access requests retrieved from the pending access requests storage; maintaining within a query structure, for each access region, information about the buffered access requests in the pending access requests storage; and causing the access control circuitry to reference the query structure when determining the control commands to be issued to the plurality of access regions. 23. A memory controller comprising:
pending access requests storage means for buffering access requests issued by at least one master device prior to those access requests being processed by a memory device; access control means for issuing control commands to a plurality of access regions in the memory device, in order to control the memory device to process access requests retrieved from the pending access requests storage means; query structure means for maintaining, for each access region, information about the buffered access requests in the pending access requests storage means; and the access control means for referencing the query structure means when determining the control commands to be issued to the plurality of access regions. | A memory controller and method are provided for controlling a memory device to process access requests issued by at least one master device, the memory device having a plurality of access regions. The memory controller has a pending access requests storage that buffers access requests that have been issued by a master device prior to those access requests being processed by the memory device. Access control circuitry then issues control commands to the plurality of access regions in order to control the memory device to process access requests retrieved from the pending access requests storage. A query structure is also provided that is configured to maintain, for each access region, information about the buffered access requests in the pending access requests storage, and the access control circuitry references the query structure when determining the control commands to be issued to the plurality of access regions. Such an approach enables significant performance and energy savings to be realized in control of the memory device, without requiring the contents of the pending access requests storage to be directly monitored by the access control circuitry.1. A memory controller comprising:
a pending access requests storage configured to buffer access requests issued by at least one master device prior to those access requests being processed by a memory device; access control circuitry configured to issue control commands to a plurality of access regions in the memory device, in order to control the memory device to process access requests retrieved from the pending access requests storage; a query structure configured to maintain, for each access region, information about the buffered access requests in the pending access requests storage; and the access control circuitry being configured to reference the query structure when determining the control commands to be issued to the plurality of access regions. 2. A memory controller as claimed in claim 1, wherein the query structure comprises information storage configured to store the information for each access region, and maintenance circuitry configured to modify the information associated with one or more access regions as each access request is added to the pending access requests storage, or removed from the pending access requests storage. 3. A memory controller as claimed in claim 1, wherein:
each access region comprises a range of memory addresses; and for each access region, the information maintained in the query structure is indicative of whether the buffered access requests include any access requests specifying a memory address within that access region's range of memory addresses. 4. A memory controller as claimed in claim 3, wherein for each access region the information comprises a counter value indicative of the number of buffered access requests that specify a memory address within that access region's range of memory addresses. 5. A memory controller as claimed in claim 4, wherein the query structure comprises:
counter value storage configured to store the counter value for each access region; and update circuitry configured, when an access request is added to the pending access requests storage, to adjust in a first direction the counter value associated with each access region whose range of memory addresses includes the memory address specified by that added access request; the update circuitry being further configured, when an access request is removed from the pending access requests storage, to adjust in a second direction opposite to the first direction the counter value associated with each access region whose range of memory addresses includes the memory address specified by that removed access request. 6. A memory controller as claimed in claim 1, wherein the plurality of access regions comprise a plurality of groups of access regions, and for at least one group of access regions, the query structure is configured to provide information that is shared between multiple access regions in that group of access regions. 7. A memory controller as claimed in claim 6, wherein for said at least one group of access regions, the query structure implements a probabilistic update mechanism for the information that is shared between multiple access regions in that group. 8. A memory controller as claimed in claim 7, wherein the probabilistic update mechanism is a Bloom filter mechanism. 9. A memory controller as claimed in claim 6, wherein the plurality of groups of access regions are arranged as a plurality of hierarchical levels, such that for each access region at one hierarchical level, there are a plurality of associated access regions at a lower hierarchical level. 10. 
A memory controller as claimed in claim 9, wherein for each access region at said one hierarchical level, the information maintained in the query structure for that access region is an aggregate of the information maintained in the query structure for the associated access regions at said lower hierarchical level. 11. A memory controller as claimed in claim 9, wherein:
the memory device comprises a plurality of banks, and each bank comprises a plurality of rows; each bank forms an access region within a group of access regions at one hierarchical level, and each row forms an access region within another group of access regions at a lower hierarchical level. 12. A memory controller as claimed in claim 11, wherein:
the memory device further comprises a plurality of ranks, each rank comprising multiple banks from said plurality of banks; and each rank forms an access region within a group of access regions at a higher hierarchical level than said one hierarchical level containing access regions for each bank. 13. A memory controller as claimed in claim 11, wherein for the group containing access regions formed from each row, the query structure is configured to provide information that is shared between multiple rows within that group. 14. A memory controller as claimed in claim 13, wherein, in association with the rows in each bank, the query structure maintains a plurality of counter values, the number of counter values being less than the number of rows in each bank, the query structure employing a hash function to identify from an input value a corresponding counter value within said plurality of counter values, the input value providing a row identifier. 15. A memory controller as claimed in claim 14, wherein the input value further provides an attribute associated with an access request. 16. A memory controller as claimed in claim 15, wherein said attribute comprises one or more of a quality of service indication and a master device identifier. 17. A memory controller as claimed in claim 1, wherein:
the access control circuitry is configured to perform a scheduling operation to determine an order in which the buffered access requests are to be processed by the memory device, and the control commands issued by the access control circuitry include scheduling control commands issued to the plurality of access regions in order to cause the buffered access requests to be processed in the determined order; and the access control circuitry being configured to reference the query structure when performing said scheduling operation. 18. A memory controller as claimed in claim 1, wherein:
the access control circuitry is configured to perform a power management operation to control a power state of the access regions during the processing of the access requests by the memory device, and the control commands issued by the access control circuitry include power control commands issued to the plurality of access regions in order to control the power state of each access region; and the access control circuitry being configured to reference the query structure when performing said power management operation. 19. A memory controller as claimed in claim 1, wherein the access requests buffered in the pending access requests storage comprise read access requests and write access requests and the query structure is configured to store, for each access region, information for the read access requests and information for the write access requests. 20. A memory controller as claimed in claim 1, wherein the pending access requests storage is configured as a queue storage structure. 21. A memory controller as claimed in claim 1, wherein said memory device is a DRAM memory device. 22. A method of controlling a memory device to process access requests issued by at least one master device, the memory device having a plurality of access regions, and the method comprising:
buffering, within a pending access requests storage, access requests issued by said at least one master device prior to those access requests being processed by the memory device; employing access control circuitry to issue control commands to the plurality of access regions in order to control the memory device to process access requests retrieved from the pending access requests storage; maintaining within a query structure, for each access region, information about the buffered access requests in the pending access requests storage; and causing the access control circuitry to reference the query structure when determining the control commands to be issued to the plurality of access regions. 23. A memory controller comprising:
pending access requests storage means for buffering access requests issued by at least one master device prior to those access requests being processed by a memory device; access control means for issuing control commands to a plurality of access regions in the memory device, in order to control the memory device to process access requests retrieved from the pending access requests storage means; query structure means for maintaining, for each access region, information about the buffered access requests in the pending access requests storage means; and the access control means for referencing the query structure means when determining the control commands to be issued to the plurality of access regions. | 2,100 |
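The query-structure claims above (a per-bank aggregate of pending requests, plus per-row information shared through a hashed set of counters that is smaller than the number of rows, claims 9-15) can be sketched as a small Python model. Everything here, including the class and method names, the multiplicative hash, and the counter sizes, is a hypothetical illustration rather than anything taken from the patent text.

```python
# Toy model of the claimed query structure: bank-level counts (one hierarchical
# level) aggregate row-level hashed counters (the lower level). All names and
# the hash function are illustrative assumptions, not from the patent.

class QueryStructure:
    def __init__(self, num_banks, num_rows, counters_per_bank):
        # claim 14: fewer counters than rows, so rows share counters
        assert counters_per_bank < num_rows
        self.bank_counts = [0] * num_banks
        self.row_counters = [[0] * counters_per_bank for _ in range(num_banks)]
        self.counters_per_bank = counters_per_bank

    def _hash(self, row_id, attribute=0):
        # claims 14-15: hash of the row identifier plus a request attribute
        return (row_id * 31 + attribute) % self.counters_per_bank

    def add_request(self, bank, row, attribute=0):
        self.bank_counts[bank] += 1
        self.row_counters[bank][self._hash(row, attribute)] += 1

    def remove_request(self, bank, row, attribute=0):
        self.bank_counts[bank] -= 1
        self.row_counters[bank][self._hash(row, attribute)] -= 1

    def pending_in_bank(self, bank):
        # claim 10: bank-level info is an aggregate of the row-level info
        return self.bank_counts[bank]

    def pending_for_row(self, bank, row, attribute=0):
        # an upper bound, since distinct rows may share a counter (claim 13)
        return self.row_counters[bank][self._hash(row, attribute)]

qs = QueryStructure(num_banks=4, num_rows=1024, counters_per_bank=16)
qs.add_request(bank=0, row=5)
qs.add_request(bank=0, row=5)
qs.add_request(bank=1, row=7)
print(qs.pending_in_bank(0))     # 2
print(qs.pending_for_row(0, 5))  # 2
```

Scheduling or power-management logic (claims 17-18) would consult such counters, e.g. keeping a bank's row open while its pending count is nonzero.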
5,997 | 5,997 | 15,709,550 | 2,184 | Full-duplex memory access systems and methods for improved quality of service (QoS) are disclosed. In one aspect, a primary bus owner will evaluate an output from a secondary bus owner when the primary bus owner takes control of the bus to determine if the secondary bus owner has data to send to the primary bus owner and/or is in the midst of a bulk data transfer. If the evaluation determines that there is still data to be transferred, the primary bus owner may refrain from draining an internal register unless a full word is present in the register. By reducing memory access for a partial word in the register, QoS may be improved. | 1. An integrated circuit (IC) comprising:
a communication bus interface configured to be coupled to a communication bus and configured to receive a bulk data transfer from a secondary owner of the communication bus; a register communicatively coupled to the communication bus interface configured to store data associated with the bulk data transfer; a memory element coupled to the register; and a control system configured to:
instruct the register to drain full words to the memory element;
interrupt the bulk data transfer by asserting primary ownership of the communication bus;
determine when the secondary owner still has data to transfer to a primary owner; and
when the secondary owner still has data to transfer, refrain from draining the register to the memory element. 2. The IC of claim 1, wherein the communication bus interface comprises a Serial Low-power Inter-chip Media Bus (SLIMbus) interface. 3. The IC of claim 1, wherein the register comprises a first in, first out (FIFO) register. 4. The IC of claim 1, wherein the control system is configured to read a T2 value asserted by the secondary owner to determine when the secondary owner still has data to transfer to the primary owner. 5. The IC of claim 1, wherein the IC comprises the primary owner. 6. The IC of claim 1, wherein the control system is configured to drain a partial word from the register when the bulk data transfer from the secondary owner is complete. 7. The IC of claim 1, wherein the control system is configured to assert the primary ownership of the communication bus by asserting a T1 value equal to one (1). 8. The IC of claim 1, wherein the control system is configured to release the primary ownership of the communication bus by asserting a T1 value equal to zero (0). 9. The IC of claim 1 integrated into a device selected from the group consisting of:
a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player;
an automobile; a vehicle component; avionics systems; a drone; and a multicopter. 10. An integrated circuit (IC) comprising:
a means for coupling to a communication bus and configured to receive a bulk data transfer from a secondary owner of the communication bus; a register communicatively coupled to the means for coupling and configured to store data associated with the bulk data transfer; a means for storing data coupled to the register; and a control system configured to:
instruct the register to drain full words to the means for storing data;
interrupt the bulk data transfer by asserting primary ownership of the communication bus;
determine when the secondary owner still has data to transfer to a primary owner; and
when the secondary owner still has data to transfer, refrain from draining the register to the means for storing data. 11. A method for controlling an integrated circuit (IC) comprising:
beginning to receive a bulk data transfer over a communication bus from a secondary bus owner; storing one or more words of the bulk data transfer in a register; draining full words from the register to a memory element; asserting primary ownership of the communication bus and interrupting the bulk data transfer; determining that data remains to be received in the bulk data transfer; and refraining from draining a partial word from the register while data remains to be received in the bulk data transfer. 12. The method of claim 11, wherein beginning to receive comprises beginning to receive over a Serial Low-power Inter-chip Media Bus (SLIMbus). 13. The method of claim 11, further comprising asserting the primary ownership by asserting a T1 value equal to one (1). 14. The method of claim 11, wherein storing the one or more words in the register comprises storing the one or more words in a first in, first out (FIFO) register. 15. The method of claim 11, further comprising draining the partial word from the register when the bulk data transfer is complete. 16. A system comprising:
a full-duplex communication bus; a first integrated circuit (IC) configured to be a primary owner of the full-duplex communication bus; and a second IC configured to be a secondary owner of the full-duplex communication bus; the first IC comprising:
a communication bus interface configured to be coupled to the full-duplex communication bus and configured to receive a bulk data transfer from the second IC over the full-duplex communication bus;
a register communicatively coupled to the communication bus interface configured to store data associated with the bulk data transfer;
a memory element coupled to the register; and
a control system configured to:
instruct the register to drain full words to the memory element;
interrupt the bulk data transfer by asserting primary ownership of the communication bus;
determine when the second IC still has data to transfer to the first IC; and
when the second IC still has data to transfer, refrain from draining the register to the memory element. | Full-duplex memory access systems and methods for improved quality of service (QoS) are disclosed. In one aspect, a primary bus owner will evaluate an output from a secondary bus owner when the primary bus owner takes control of the bus to determine if the secondary bus owner has data to send to the primary bus owner and/or is in the midst of a bulk data transfer. If the evaluation determines that there is still data to be transferred, the primary bus owner may refrain from draining an internal register unless a full word is present in the register. By reducing memory access for a partial word in the register, QoS may be improved.1. An integrated circuit (IC) comprising:
a communication bus interface configured to be coupled to a communication bus and configured to receive a bulk data transfer from a secondary owner of the communication bus; a register communicatively coupled to the communication bus interface configured to store data associated with the bulk data transfer; a memory element coupled to the register; and a control system configured to:
instruct the register to drain full words to the memory element;
interrupt the bulk data transfer by asserting primary ownership of the communication bus;
determine when the secondary owner still has data to transfer to a primary owner; and
when the secondary owner still has data to transfer, refrain from draining the register to the memory element. 2. The IC of claim 1, wherein the communication bus interface comprises a Serial Low-power Inter-chip Media Bus (SLIMbus) interface. 3. The IC of claim 1, wherein the register comprises a first in, first out (FIFO) register. 4. The IC of claim 1, wherein the control system is configured to read a T2 value asserted by the secondary owner to determine when the secondary owner still has data to transfer to the primary owner. 5. The IC of claim 1, wherein the IC comprises the primary owner. 6. The IC of claim 1, wherein the control system is configured to drain a partial word from the register when the bulk data transfer from the secondary owner is complete. 7. The IC of claim 1, wherein the control system is configured to assert the primary ownership of the communication bus by asserting a T1 value equal to one (1). 8. The IC of claim 1, wherein the control system is configured to release the primary ownership of the communication bus by asserting a T1 value equal to zero (0). 9. The IC of claim 1 integrated into a device selected from the group consisting of:
a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player;
an automobile; a vehicle component; avionics systems; a drone; and a multicopter. 10. An integrated circuit (IC) comprising:
a means for coupling to a communication bus and configured to receive a bulk data transfer from a secondary owner of the communication bus; a register communicatively coupled to the means for coupling and configured to store data associated with the bulk data transfer; a means for storing data coupled to the register; and a control system configured to:
instruct the register to drain full words to the means for storing data;
interrupt the bulk data transfer by asserting primary ownership of the communication bus;
determine when the secondary owner still has data to transfer to a primary owner; and
when the secondary owner still has data to transfer, refrain from draining the register to the means for storing data. 11. A method for controlling an integrated circuit (IC) comprising:
beginning to receive a bulk data transfer over a communication bus from a secondary bus owner; storing one or more words of the bulk data transfer in a register; draining full words from the register to a memory element; asserting primary ownership of the communication bus and interrupting the bulk data transfer; determining that data remains to be received in the bulk data transfer; and refraining from draining a partial word from the register while data remains to be received in the bulk data transfer. 12. The method of claim 11, wherein beginning to receive comprises beginning to receive over a Serial Low-power Inter-chip Media Bus (SLIMbus). 13. The method of claim 11, further comprising asserting the primary ownership by asserting a T1 value equal to one (1). 14. The method of claim 11, wherein storing the one or more words in the register comprises storing the one or more words in a first in, first out (FIFO) register. 15. The method of claim 11, further comprising draining the partial word from the register when the bulk data transfer is complete. 16. A system comprising:
a full-duplex communication bus; a first integrated circuit (IC) configured to be a primary owner of the full-duplex communication bus; and a second IC configured to be a secondary owner of the full-duplex communication bus; the first IC comprising:
a communication bus interface configured to be coupled to the full-duplex communication bus and configured to receive a bulk data transfer from the second IC over the full-duplex communication bus;
a register communicatively coupled to the communication bus interface configured to store data associated with the bulk data transfer;
a memory element coupled to the register; and
a control system configured to:
instruct the register to drain full words to the memory element;
interrupt the bulk data transfer by asserting primary ownership of the communication bus;
determine when the second IC still has data to transfer to the first IC; and
when the second IC still has data to transfer, refrain from draining the register to the memory element. | 2,100 |
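The drain policy running through these claims, namely drain full words eagerly, but hold back a trailing partial word while the interrupted bulk transfer still has data pending (claims 1, 6, 11, and 15), can be sketched as a behavioral model. The class name, the 4-byte word size, and the boolean in place of the T2 handshake value are all assumptions for illustration, not the claimed hardware.

```python
# Behavioral sketch (hypothetical names): a receive register that drains full
# words to memory, and only drains a partial word once the secondary owner's
# bulk transfer has completed, improving QoS by avoiding partial-word accesses.

WORD_BYTES = 4  # assumed word size for illustration

class ReceiveRegister:
    def __init__(self):
        self.buf = bytearray()  # the register (e.g. a FIFO, claim 3)
        self.memory = []        # stand-in for the memory element

    def receive(self, data: bytes):
        self.buf += data

    def drain(self, transfer_in_progress: bool):
        # Always drain whole words.
        while len(self.buf) >= WORD_BYTES:
            self.memory.append(bytes(self.buf[:WORD_BYTES]))
            del self.buf[:WORD_BYTES]
        # Refrain from draining a partial word while the interrupted bulk
        # transfer still has data (claim 11); drain it once complete (claim 15).
        if self.buf and not transfer_in_progress:
            self.memory.append(bytes(self.buf))
            self.buf.clear()

r = ReceiveRegister()
r.receive(b"\x01\x02\x03\x04\x05")   # one full word plus one partial byte
r.drain(transfer_in_progress=True)
print(len(r.memory))  # 1 -> partial byte retained in the register
r.drain(transfer_in_progress=False)
print(len(r.memory))  # 2 -> partial word drained after the transfer completes
```

In the claimed system the `transfer_in_progress` flag would come from reading the secondary owner's asserted T2 value (claim 4) after the primary owner asserts ownership via T1.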
5,998 | 5,998 | 15,236,599 | 2,129 | A method determines measurement locations in an energy grid. In the energy grid, use is made of a controllable device for wide-range voltage control. A model of the energy grid is provided which specifies a voltage distribution within the energy grid by a system of equations and/or a system of inequalities depending on the control position of the controllable device. A simulation for minimizing the number of measurement locations is carried out on the basis of the model, and in that as a result of the simulation a minimum number and the respective position of measurement locations and also the control position of the controllable device are specified in order that the energy grid complies with a predefined voltage band during operation. | 1. A method for determining measurement locations in an energy grid, wherein in the energy grid use is made of a controllable device for wide-range voltage control, which comprises the steps of:
providing a model of the energy grid specifying a voltage distribution within the energy grid by means of at least one of a system of equations or a system of inequalities depending on a control position of the controllable device; and carrying out a simulation for minimizing a number of the measurement locations on a basis of the model, and as a result of the simulation a minimum number and a respective position of the measurement locations and also the control position of the controllable device are specified in order that the energy grid complies with a predefined voltage band during operation. 2. The method according to claim 1, wherein during the simulation for all control positions of the controllable device and in each case for all nodes in the energy grid, the following steps are repeated:
cancelling a condition in at least one of the system of equations or the system of inequalities that the predefined voltage band must be complied with, for a respective node; carrying out the simulation; and adding the respective node to a set of the measurement locations required at a minimum, if a result of the simulation reveals that the predefined voltage band was violated at the respective node. 3. The method according to claim 1, which further comprises installing voltage measuring devices at determined positions in the energy grid. 4. The method according to claim 1, which further comprises setting the controllable device to the control position which requires the minimum number of the measurement locations in accordance with the result. 5. The method according to claim 1, which further comprises providing a controllable substation transformer as the controllable device. 6. The method according to claim 1, which further comprises providing a grid controller as the controllable device. 7. The method according to claim 1, which further comprises providing a controllable local grid transformer as the controllable device. 8. A configuration for determining measurement locations in an energy grid, wherein in the energy grid use can be made of a controllable device for wide-range voltage control, the configuration comprising:
a simulation device configured for providing a model of the energy grid, wherein the model specifies a voltage distribution within the energy grid by means of at least one of a system of equations or a system of inequalities depending on a control position of the controllable device, and a simulation for minimizing a number of the measurement locations is carried out on a basis of the model, and said simulation device specifying as a result of the simulation a minimum number and a respective position of the measurement locations and also the control position of the controllable device in order that a predefined voltage band can be complied with for the energy grid during operation. 9. The configuration according to claim 8, wherein said simulation device is configured to repeat during the simulation for all control positions of the controllable device and in each case for all nodes in the energy grid the following steps:
cancelling a condition in at least one of the system of equations or the system of inequalities that the predefined voltage band must be complied with, for a respective node; carrying out the simulation; and adding the respective node to a set of the measurement locations required at a minimum, if a result of the simulation reveals that the predefined voltage band was violated at the respective node. 10. A configuration, comprising:
a controllable device; an energy grid, wherein in said energy grid use can be made of said controllable device for wide-range voltage control; a simulation device configured for providing a model of said energy grid, wherein the model specifies a voltage distribution within said energy grid by means of at least one of a system of equations or a system of inequalities depending on a control position of said controllable device, and a simulation for minimizing a number of measurement locations is carried out on a basis of the model, and said simulation device specifying as a result of the simulation a minimum number and a respective position of the measurement locations and also the control position of said controllable device in order that a predefined voltage band can be complied with for said energy grid during operation; and voltage measuring devices disposed at determined positions in said energy grid. 11. The configuration according to claim 10, wherein said controllable device is set to the control position which requires the minimum number of the measurement locations in accordance with the result. 12. The configuration according to claim 10, wherein said controllable device has a controllable substation transformer. 13. The configuration according to claim 10, wherein said controllable device has a grid controller. 14. The configuration according to claim 10, wherein said controllable device has a controllable local grid transformer.
A simulation for minimizing the number of measurement locations is carried out on the basis of the model, and in that as a result of the simulation a minimum number and the respective position of measurement locations and also the control position of the controllable device are specified in order that the energy grid complies with a predefined voltage band during operation.1. A method for determining measurement locations in an energy grid, wherein in the energy grid use is made of a controllable device for wide-range voltage control, which comprises the steps of:
providing a model of the energy grid specifying a voltage distribution within the energy grid by means of at least one of a system of equations or a system of inequalities depending on a control position of the controllable device; and carrying out a simulation for minimizing a number of the measurement locations on a basis of the model, and as a result of the simulation a minimum number and a respective position of the measurement locations and also the control position of the controllable device are specified in order that the energy grid complies with a predefined voltage band during operation. 2. The method according to claim 1, wherein during the simulation for all control positions of the controllable device and in each case for all nodes in the energy grid, the following steps are repeated:
cancelling a condition in at least one of the system of equations or the system of inequalities that the predefined voltage band must be complied with, for a respective node; carrying out the simulation; and adding the respective node to a set of the measurement locations required at a minimum, if a result of the simulation reveals that the predefined voltage band was violated at the respective node. 3. The method according to claim 1, which further comprises installing voltage measuring devices at determined positions in the energy grid. 4. The method according to claim 1, which further comprises setting the controllable device to the control position which requires the minimum number of the measurement locations in accordance with the result. 5. The method according to claim 1, which further comprises providing a controllable substation transformer as the controllable device. 6. The method according to claim 1, which further comprises providing a grid controller as the controllable device. 7. The method according to claim 1, which further comprises providing a controllable local grid transformer as the controllable device. 8. A configuration for determining measurement locations in an energy grid, wherein in the energy grid use can be made of a controllable device for wide-range voltage control, the configuration comprising:
a simulation device configured for providing a model of the energy grid, wherein the model specifies a voltage distribution within the energy grid by means of at least one of a system of equations or a system of inequalities depending on a control position of the controllable device, and a simulation for minimizing a number of the measurement locations is carried out on a basis of the model, and said simulation device specifying as a result of the simulation a minimum number and a respective position of the measurement locations and also the control position of the controllable device in order that a predefined voltage band can be complied with for the energy grid during operation. 9. The configuration according to claim 8, wherein said simulation device is configured to repeat during the simulation for all control positions of the controllable device and in each case for all nodes in the energy grid the following steps:
cancelling a condition in at least one of the system of equations or the system of inequalities that the predefined voltage band must be complied with, for a respective node; carrying out the simulation; and adding the respective node to a set of the measurement locations required at a minimum, if a result of the simulation reveals that the predefined voltage band was violated at the respective node. 10. A configuration, comprising:
a controllable device; an energy grid, wherein in said energy grid use can be made of said controllable device for wide-range voltage control; a simulation device configured for providing a model of said energy grid, wherein the model specifies a voltage distribution within said energy grid by means of at least one of a system of equations or a system of inequalities depending on a control position of said controllable device, and a simulation for minimizing a number of measurement locations is carried out on a basis of the model, and said simulation device specifying as a result of the simulation a minimum number and a respective position of the measurement locations and also the control position of said controllable device in order that a predefined voltage band can be complied with for said energy grid during operation; and voltage measuring devices disposed at determined positions in said energy grid. 11. The configuration according to claim 10, wherein said controllable device is set to the control position which requires the minimum number of the measurement locations in accordance with the result. 12. The configuration according to claim 10, wherein said controllable device has a controllable substation transformer. 13. The configuration according to claim 10, wherein said controllable device has a grid controller. 14. The configuration according to claim 10, wherein said controllable device has a controllable local grid transformer.
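The per-node loop of claims 2 and 9, where for each control position of the device a node's voltage-band condition is cancelled, the simulation is re-run, and the node is added to the required measurement set if the band can then be violated, might look like this in outline. The grid model (a list of per-node sensitivities), the violation test, and all names are toy stand-ins assumed for illustration; they are not part of the patent text.

```python
# Illustrative sketch of claims 2/4/9: pick the control position that needs
# the fewest measurement locations. The "simulation" is a toy stand-in.

def band_can_be_violated(grid, position, node):
    # Toy stand-in for simulating the grid with the voltage-band condition
    # cancelled at `node`: the band is violated when the node's sensitivity,
    # scaled by the control position, exceeds the (normalized) band limit.
    return grid[node] * position > 1.0

def minimal_measurement_locations(grid, positions):
    best = None
    for position in positions:            # every control position of the device
        required = set()
        for node in range(len(grid)):     # every node in the energy grid
            if band_can_be_violated(grid, position, node):
                required.add(node)        # node must be a measurement location
        if best is None or len(required) < len(best[1]):
            best = (position, required)   # position needing fewest locations
    return best

sensitivities = [0.2, 0.9, 1.5, 0.4]      # per-node toy sensitivities
position, locations = minimal_measurement_locations(sensitivities, [1.0, 2.0])
print(position, sorted(locations))        # 1.0 [2]
```

Per claims 3-4, voltage measuring devices would then be installed at the returned node positions and the controllable device set to the returned control position.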
5,999 | 5,999 | 15,348,503 | 2,192 | Methods and systems for dynamically providing application analytic information are provided herein. The method includes inserting instrumentation points into an application file via an application analytic service and dynamically determining desired instrumentation points from which to collect application analytic data. The method also includes receiving, at the application analytic service, the application analytic data corresponding to the desired instrumentation points and analyzing the application analytic data to generate application analytic information. The method further includes sending the application analytic information to a client computing device. | 1. (canceled) 2. A method for binary rewriting of application files, comprising:
unpacking an application file comprising application code; inserting an application analytic agent into the application file; performing a binary rewriting of the application file to insert specified application analytic code; and repacking the application file to obtain a new application file comprising the application code and the application analytic code. 3. The method of claim 2, comprising receiving the application file at an application analytic service. 4. The method of claim 3, wherein the application analytic service is hosted within an online marketplace. 5. The method of claim 2, comprising locating an application manifest within the unpacked application file. 6. The method of claim 5, wherein the application manifest comprises one or more metadata files. 7. The method of claim 6, wherein the application manifest comprises one or more property files. 8. The method of claim 2, wherein the application analytic agent collects application analytic information from one or more instrumentation points within the unpacked application file. 9. The method of claim 3, wherein inserting the application analytic agent is performed by a rewriter of the application analytic service. 10. The method of claim 9, wherein the binary rewriting is performed by the rewriter of the application analytic service. 11. The method of claim 2, wherein the new application file is capable of being executed. 12. A system for binary rewriting of application files, comprising:
a processor; and a memory comprising instructions that cause the processor to: unpack an application file comprising application code; insert an application analytic agent into the application file; perform a binary rewriting of the application file to insert specified application analytic code; repack the application file to obtain a new application file comprising the application code and the application analytic code; and receive the application file at an application analytic service, wherein the application analytic service is hosted within an online marketplace. 13. The system of claim 12, wherein the instructions cause the processor to locate an application manifest within the unpacked application file. 14. The system of claim 13, wherein the application manifest comprises:
one or more metadata files; and one or more property files. 15. The system of claim 12, wherein the application analytic agent collects application analytic information from one or more instrumentation points within the unpacked application file. 16. The system of claim 12, wherein the application analytic agent is inserted by a rewriter of the application analytic service, and wherein the binary rewriting is performed by the rewriter of the application analytic service. 17. The system of claim 12, wherein the new application file is capable of being executed. 18. A computer-readable memory storage device for binary rewriting of application files, the storage device comprising instructions that cause a processor to:
unpack an application file comprising application code; insert an application analytic agent into the application file; perform a binary rewriting of the application file to insert specified application analytic code; repack the application file to obtain a new application file comprising the application code and the application analytic code; receive the application file at an application analytic service, wherein the application analytic service is hosted within an online marketplace; and locate an application manifest within the unpacked application file. 19. The computer-readable memory storage device of claim 18, wherein the application manifest comprises:
one or more metadata files; and one or more property files. 20. The computer-readable memory storage device of claim 18, wherein the application analytic agent collects application analytic information from one or more instrumentation points within the unpacked application file. 21. The computer-readable memory storage device of claim 18, wherein the new application file is capable of being executed. | Methods and systems for dynamically providing application analytic information are provided herein. The method includes inserting instrumentation points into an application file via an application analytic service and dynamically determining desired instrumentation points from which to collect application analytic data. The method also includes receiving, at the application analytic service, the application analytic data corresponding to the desired instrumentation points and analyzing the application analytic data to generate application analytic information. The method further includes sending the application analytic information to a client computing device.1. (canceled) 2. A method for binary rewriting of application files, comprising:
unpacking an application file comprising application code; inserting an application analytic agent into the application file; performing a binary rewriting of the application file to insert specified application analytic code; and repacking the application file to obtain a new application file comprising the application code and the application analytic code. 3. The method of claim 2, comprising receiving the application file at an application analytic service. 4. The method of claim 3, wherein the application analytic service is hosted within an online marketplace. 5. The method of claim 2, comprising locating an application manifest within the unpacked application file. 6. The method of claim 5, wherein the application manifest comprises one or more metadata files. 7. The method of claim 6, wherein the application manifest comprises one or more property files. 8. The method of claim 2, wherein the application analytic agent collects application analytic information from one or more instrumentation points within the unpacked application file. 9. The method of claim 3, wherein inserting the application analytic agent is performed by a rewriter of the application analytic service. 10. The method of claim 9, wherein the binary rewriting is performed by the rewriter of the application analytic service. 11. The method of claim 2, wherein the new application file is capable of being executed. 12. A system for binary rewriting of application files, comprising:
a processor; and a memory comprising instructions that cause the processor to: unpack an application file comprising application code; insert an application analytic agent into the application file; perform a binary rewriting of the application file to insert specified application analytic code; repack the application file to obtain a new application file comprising the application code and the application analytic code; and receive the application file at an application analytic service, wherein the application analytic service is hosted within an online marketplace. 13. The system of claim 12, wherein the instructions cause the processor to locate an application manifest within the unpacked application file. 14. The system of claim 13, wherein the application manifest comprises:
one or more metadata files; and one or more property files. 15. The system of claim 12, wherein the application analytic agent collects application analytic information from one or more instrumentation points within the unpacked application file. 16. The system of claim 12, wherein the application analytic agent is inserted by a rewriter of the application analytic service, and wherein the binary rewriting is performed by the rewriter of the application analytic service. 17. The system of claim 12, wherein the new application file is capable of being executed. 18. A computer-readable memory storage device for binary rewriting of application files, the storage device comprising instructions that cause a processor to:
unpack an application file comprising application code; insert an application analytic agent into the application file; perform a binary rewriting of the application file to insert specified application analytic code; repack the application file to obtain a new application file comprising the application code and the application analytic code; receive the application file at an application analytic service, wherein the application analytic service is hosted within an online marketplace; and locate an application manifest within the unpacked application file. 19. The computer-readable memory storage device of claim 18, wherein the application manifest comprises:
one or more metadata files; and one or more property files. 20. The computer-readable memory storage device of claim 18, wherein the application analytic agent collects application analytic information from one or more instrumentation points within the unpacked application file. 21. The computer-readable memory storage device of claim 18, wherein the new application file is capable of being executed. | 2,100 |
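The unpack / insert-agent / rewrite / repack flow recited in claims 2, 12, and 18 can be sketched in code. This is a minimal illustrative sketch, not the patented implementation: the archive entry names (`classes.bin`), the agent location (`META-INF/analytic_agent.bin`), and the placeholder "binary rewriting" step are all assumptions made for the example.

```python
# Hypothetical sketch of the claimed flow: unpack an application file,
# insert an application analytic agent, perform a (placeholder) binary
# rewrite, and repack into a new application file.
import io
import zipfile

AGENT_NAME = "META-INF/analytic_agent.bin"  # assumed agent location


def rewrite_application(app_bytes: bytes, agent_code: bytes) -> bytes:
    """Return a new archive containing the original application code,
    the inserted analytic agent, and the rewritten application code."""
    entries = {}
    with zipfile.ZipFile(io.BytesIO(app_bytes)) as zf:
        for name in zf.namelist():
            entries[name] = zf.read(name)  # unpack the application file

    entries[AGENT_NAME] = agent_code  # insert the analytic agent

    # Placeholder for "binary rewriting": append a marker byte sequence
    # standing in for inserted application analytic code.
    if "classes.bin" in entries:
        entries["classes.bin"] += b"\x00ANALYTIC_HOOK"

    out = io.BytesIO()
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in entries.items():
            zf.writestr(name, data)  # repack into the new application file
    return out.getvalue()
```

A rewriter component of the analytic service (claims 9, 10, and 16) would own both the agent insertion and the rewriting step; here both are folded into one function for brevity.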