Unnamed: 0 (int64, 0-350k) | level_0 (int64, 0-351k) | ApplicationNumber (int64, 9.75M-96.1M) | ArtUnit (int64, 1.6k-3.99k) | Abstract (string, lengths 1-8.37k) | Claims (string, lengths 3-292k) | abstract-claims (string, lengths 68-293k) | TechCenter (int64, 1.6k-3.9k) |
|---|---|---|---|---|---|---|---|
7,500 | 7,500 | 14,096,300 | 2,458 | According to one aspect of the present disclosure a system and technique for dynamic system level agreement provisioning includes: a computing environment configured with allocatable computing resources; and a host having a processor unit operable to execute a service level agreement (SLA) module. The SLA module is configured to: identify service level criteria for a customer of computing services of the computing environment; determine characteristics of the computing environment; identify a time period for providing the computing services; evaluate one or more utility functions defining service level variables; and automatically determine a service level agreement (SLA) provision for the customer based on the one or more utility functions. | 1-7. (canceled) 8. A system, comprising:
a computing environment configured with allocatable computing resources; and a host having a processor unit operable to execute a service level agreement (SLA) module, the SLA module configured to:
identify service level criteria for a customer of computing services of the computing environment;
determine characteristics of the computing environment;
identify a time period for providing the computing services;
evaluate one or more utility functions defining service level variables; and
automatically determine a service level agreement (SLA) provision for the customer based on the one or more utility functions. 9. The system of claim 8, wherein the SLA module is further configured to:
determine a plurality of computing resource configurations for the computing services; and evaluate at least one utility function for each of the plurality of computing resource configurations. 10. The system of claim 8, wherein the SLA module is further configured to select a weight to apply to each service level variable. 11. The system of claim 10, wherein the SLA module is configured to select the weight representing a probability of a violation of the respective service level variable. 12. The system of claim 8, wherein the SLA module is further configured to:
monitor the computing environment over the time period; and automatically update the SLA provision based on a change to the computing environment. 13. The system of claim 12, wherein the SLA module is configured to receive from the customer a specification of a response time parameter, an availability parameter, and a capacity parameter. 14. The system of claim 8, wherein the SLA module is further configured to:
determine a plurality of computing resource configurations for the computing services; define a plurality of service level variables for the one or more utility functions for each of the plurality of resource configurations; select a weight to apply to each service level variable; and evaluate at least one combined utility function value for each of the plurality of computing resource configurations using the weighted service level variables. 15. A computer program product for dynamic system level agreement provisioning, the computer program product comprising:
a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising computer readable program code configured to:
identify service level criteria for a customer of computing services of a computing environment;
determine characteristics of the computing environment;
identify a time period for providing the computing services;
evaluate one or more utility functions defining service level variables; and
automatically determine a service level agreement (SLA) provision for the customer based on the one or more utility functions. 16. The computer program product of claim 15, wherein the computer readable program code is configured to:
determine a plurality of computing resource configurations for the computing services; and evaluate at least one utility function for each of the plurality of computing resource configurations. 17. The computer program product of claim 15, wherein the computer readable program code is configured to select a weight to apply to each service level variable. 18. The computer program product of claim 17, wherein the computer readable program code is configured to select the weight representing a probability of a violation of the respective service level variable. 19. The computer program product of claim 15, wherein the computer readable program code is configured to:
monitor the computing environment over the time period; and automatically update the SLA provision based on a change to the computing environment. 20. The computer program product of claim 15, wherein the computer readable program code is configured to:
determine a plurality of computing resource configurations for the computing services; define a plurality of service level variables for the one or more utility functions for each of the plurality of resource configurations; select a weight to apply to each service level variable; and evaluate at least one combined utility function value for each of the plurality of computing resource configurations using the weighted service level variables. | According to one aspect of the present disclosure a system and technique for dynamic system level agreement provisioning includes: a computing environment configured with allocatable computing resources; and a host having a processor unit operable to execute a service level agreement (SLA) module. The SLA module is configured to: identify service level criteria for a customer of computing services of the computing environment; determine characteristics of the computing environment; identify a time period for providing the computing services; evaluate one or more utility functions defining service level variables; and automatically determine a service level agreement (SLA) provision for the customer based on the one or more utility functions. 1-7. (canceled) 8. A system, comprising:
a computing environment configured with allocatable computing resources; and a host having a processor unit operable to execute a service level agreement (SLA) module, the SLA module configured to:
identify service level criteria for a customer of computing services of the computing environment;
determine characteristics of the computing environment;
identify a time period for providing the computing services;
evaluate one or more utility functions defining service level variables; and
automatically determine a service level agreement (SLA) provision for the customer based on the one or more utility functions. 9. The system of claim 8, wherein the SLA module is further configured to:
determine a plurality of computing resource configurations for the computing services; and evaluate at least one utility function for each of the plurality of computing resource configurations. 10. The system of claim 8, wherein the SLA module is further configured to select a weight to apply to each service level variable. 11. The system of claim 10, wherein the SLA module is configured to select the weight representing a probability of a violation of the respective service level variable. 12. The system of claim 8, wherein the SLA module is further configured to:
monitor the computing environment over the time period; and automatically update the SLA provision based on a change to the computing environment. 13. The system of claim 12, wherein the SLA module is configured to receive from the customer a specification of a response time parameter, an availability parameter, and a capacity parameter. 14. The system of claim 8, wherein the SLA module is further configured to:
determine a plurality of computing resource configurations for the computing services; define a plurality of service level variables for the one or more utility functions for each of the plurality of resource configurations; select a weight to apply to each service level variable; and evaluate at least one combined utility function value for each of the plurality of computing resource configurations using the weighted service level variables. 15. A computer program product for dynamic system level agreement provisioning, the computer program product comprising:
a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising computer readable program code configured to:
identify service level criteria for a customer of computing services of a computing environment;
determine characteristics of the computing environment;
identify a time period for providing the computing services;
evaluate one or more utility functions defining service level variables; and
automatically determine a service level agreement (SLA) provision for the customer based on the one or more utility functions. 16. The computer program product of claim 15, wherein the computer readable program code is configured to:
determine a plurality of computing resource configurations for the computing services; and evaluate at least one utility function for each of the plurality of computing resource configurations. 17. The computer program product of claim 15, wherein the computer readable program code is configured to select a weight to apply to each service level variable. 18. The computer program product of claim 17, wherein the computer readable program code is configured to select the weight representing a probability of a violation of the respective service level variable. 19. The computer program product of claim 15, wherein the computer readable program code is configured to:
monitor the computing environment over the time period; and automatically update the SLA provision based on a change to the computing environment. 20. The computer program product of claim 15, wherein the computer readable program code is configured to:
determine a plurality of computing resource configurations for the computing services; define a plurality of service level variables for the one or more utility functions for each of the plurality of resource configurations; select a weight to apply to each service level variable; and evaluate at least one combined utility function value for each of the plurality of computing resource configurations using the weighted service level variables. | 2,400 |
7,501 | 7,501 | 14,496,868 | 2,458 | According to one aspect of the present disclosure a method and technique for dynamic system level agreement provisioning is disclosed. The method includes: identifying, by a data processing system of a computing environment service provider, service level criteria for a customer of computing services; determining characteristics of the computing environment; identifying a time period for providing the computing services; evaluating one or more utility functions defining service level variables; and automatically determining, by the data processing system, a service level agreement (SLA) provision for the customer based on the one or more utility functions. | 1. A method, comprising:
identifying, by a data processing system of a computing environment service provider, service level criteria for a customer of computing services; determining characteristics of the computing environment; identifying a time period for providing the computing services; evaluating one or more utility functions defining service level variables; and automatically determining, by the data processing system, a service level agreement (SLA) provision for the customer based on the one or more utility functions. 2. The method of claim 1, further comprising:
determining a plurality of computing resource configurations for the computing services; and wherein evaluating the one or more utility functions comprises evaluating at least one utility function for each of the plurality of computing resource configurations. 3. The method of claim 1, further comprising selecting a weight to apply to each service level variable. 4. The method of claim 3, wherein selecting the weight comprises selecting a weight representing a probability of a violation of the respective service level variable. 5. The method of claim 1, further comprising:
monitoring the computing environment over the time period; and automatically updating the SLA provision based on a change to the computing environment. 6. The method of claim 1, wherein identifying service level criteria for the customer comprises receiving a specification of a response time parameter, an availability parameter, and a capacity parameter. 7. The method of claim 6, further comprising:
determining a plurality of computing resource configurations for the computing services; and selecting a weight to apply to each service level variable; and wherein evaluating the one or more utility functions comprises evaluating at least one combined utility function value for each of the plurality of computing resource configurations using the weighted service level variables. | According to one aspect of the present disclosure a method and technique for dynamic system level agreement provisioning is disclosed. The method includes: identifying, by a data processing system of a computing environment service provider, service level criteria for a customer of computing services; determining characteristics of the computing environment; identifying a time period for providing the computing services; evaluating one or more utility functions defining service level variables; and automatically determining, by the data processing system, a service level agreement (SLA) provision for the customer based on the one or more utility functions. 1. A method, comprising:
identifying, by a data processing system of a computing environment service provider, service level criteria for a customer of computing services; determining characteristics of the computing environment; identifying a time period for providing the computing services; evaluating one or more utility functions defining service level variables; and automatically determining, by the data processing system, a service level agreement (SLA) provision for the customer based on the one or more utility functions. 2. The method of claim 1, further comprising:
determining a plurality of computing resource configurations for the computing services; and wherein evaluating the one or more utility functions comprises evaluating at least one utility function for each of the plurality of computing resource configurations. 3. The method of claim 1, further comprising selecting a weight to apply to each service level variable. 4. The method of claim 3, wherein selecting the weight comprises selecting a weight representing a probability of a violation of the respective service level variable. 5. The method of claim 1, further comprising:
monitoring the computing environment over the time period; and automatically updating the SLA provision based on a change to the computing environment. 6. The method of claim 1, wherein identifying service level criteria for the customer comprises receiving a specification of a response time parameter, an availability parameter, and a capacity parameter. 7. The method of claim 6, further comprising:
determining a plurality of computing resource configurations for the computing services; and selecting a weight to apply to each service level variable; and wherein evaluating the one or more utility functions comprises evaluating at least one combined utility function value for each of the plurality of computing resource configurations using the weighted service level variables. | 2,400 |
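The weighted-utility evaluation described in the claims above (select a weight per service level variable, then evaluate a combined utility function value for each candidate resource configuration and choose an SLA provision accordingly) can be sketched roughly as follows. This is an illustrative assumption, not the patented implementation: the utility shape, the variable directions, and the example weights and configuration fields are all invented for the sketch.

```python
# Illustrative sketch only: the utility shape, weights, and configuration
# fields below are invented; the claims do not specify them.

def utility(value, target, higher_is_better):
    """Per-variable utility in [0, 1]: 1.0 when the target is met."""
    ratio = value / target if higher_is_better else target / value
    return min(1.0, ratio)

# Service level variables per claim 6: response time, availability, capacity.
DIRECTIONS = {"response_ms": False, "availability": True, "capacity": True}

def combined_utility(config, targets, weights):
    """Combined utility function value for one resource configuration (claim 7)."""
    return sum(
        weights[var] * utility(config[var], targets[var], DIRECTIONS[var])
        for var in targets
    )

def select_sla_provision(configs, targets, weights):
    """Pick the configuration maximizing the combined weighted utility."""
    return max(configs, key=lambda c: combined_utility(c, targets, weights))

targets = {"response_ms": 200, "availability": 0.999, "capacity": 1000}
# Claim 4: each weight may represent the probability of violating that variable.
weights = {"response_ms": 0.5, "availability": 0.3, "capacity": 0.2}
configs = [
    {"response_ms": 150, "availability": 0.999, "capacity": 1200},
    {"response_ms": 300, "availability": 0.9999, "capacity": 800},
]
best = select_sla_provision(configs, targets, weights)
```

Under these invented numbers the first configuration meets every target and maximizes the combined utility, so it would be the one the hypothetical SLA module offers.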
7,502 | 7,502 | 14,356,926 | 2,457 | Network devices, servers, and modules operating within an MCA, capable of selectively deferring delivery of non-time-sensitive content, are provided. A network device ( 315 ) includes a communication interface ( 410 ) configured to enable communication with a client device ( 310 ), and to receive a request for a content delivery from the client device ( 310 ). The network device also includes a processing unit ( 420 ) configured to determine whether to defer the request depending on a network load at a time when the request has been received. | 1. A network device, comprising:
a communication interface configured to enable communication with a client device, and to receive a request for a content delivery from the client device; and a processing unit configured to determine whether to defer the request depending on a network load at a time when the request has been received. 2. The network device of claim 1, wherein:
if the processing unit has determined that the request is not deferred, the processing unit generates a first message to be sent to the client device via the communication interface, to enable the content delivery; and if the processing unit has determined that the request is deferred, the processing unit generates a second message to be sent to the client device via the communication interface. 3. The network device of claim 2, wherein the second message includes a time value indicating when to resubmit the client request. 4. The network device of claim 3, wherein the time value included in the second message is an absolute time value or a time interval after the time when the request has been received at which to resubmit the request. 5. The network device of claim 2, wherein the network device is configured to operate as a cache server, further comprising a data storage unit connected to the processing unit and configured to store temporarily the content, wherein the first message includes the content. 6. The network device of claim 2, wherein the network device receives the request from another network device that stores pairs of domain names and Internet Protocol addresses, and the other network device, upon receiving the first message, enables the client to pursue the content delivery by sending, to the client, an IP address corresponding to a domain name provided by the client. 7. The network device of claim 1, wherein the processing unit is configured to infer the network load by comparing the time when the request has been received with daily network load data including peak hours and off-peak hours, wherein if the time when the request has been received corresponds to the peak hours, the processing unit defers the request. 8. The network device of claim 1, wherein the processing unit is configured to determine the network load based on information extracted from a network load database depending on a time when the request has been received. 9. 
The network device of claim 8, wherein the network load database is a historical database storing data related to past network load. 10. The network device of claim 8, wherein the network load database is a near real-time database fed with current network load information by a module configured to perform network traffic analysis. 11. The network device of claim 8, further comprising:
a data storage unit configured to store the network load database. 12. The network device of claim 8, wherein the network load database is stored in another network device, and the network device further comprises:
a network load database interface configured to enable communication with another network device to enable extracting the information from the network load database. 13. The network device of claim 1, wherein the processing unit is configured to determine the network load based on latest network load information received from a module configured to perform network traffic analysis, at a time when the request has been received. 14. The network device of claim 1, further comprising:
a billing module interface configured to enable communication with a billing module, wherein the processing unit is further configured to generate a billing report to the billing module, the billing report reflecting whether the request is deferred. 15. The network device of claim 1, wherein the client device is a mobile edge server configured to store temporarily the content, a Domain Name Server configured to store a database storing pairs of domain names and Internet Protocol addresses or a user equipment. 16. The network device of claim 1, wherein the processing unit is further configured to operate as a smart pipe controller within a mobile cloud accelerator. 17. A cache server in a mobile network, comprising:
a communication interface configured to enable communication with a client device that submits a request for a delivery of content; a memory configured to temporarily store a content specified in the request; and a processing unit configured to send a query to a network module as to whether to proceed with delivering the content depending on a network load, wherein, if a response to the query is positive, the processing unit controls the communication interface to send the content stored in the memory to the client device, and, if the response to the query is negative, the processing unit generates a message to indicate, to the client device, that the request is deferred, and controls the communication interface to send the message to the client device. 18. A charging device, comprising:
a communication interface configured to enable communication with a network device that submits an indication that a request for a content delivery of a client device has been deferred; and a processing unit configured to control charging a rate different from a regular rate to a client account corresponding to the request for the content delivery when the indication has been received. 19. A method performed by a network device, the method comprising:
receiving a request for a content delivery from a client device in the network; and determining whether to defer the request depending on network load when the request has been received. 20. The method of claim 19, further comprising:
sending a first message to the client if the request is not deferred, the first message including information enabling the content delivery; and sending a second message to the client if the request is deferred. 21. The method of claim 20, wherein the second message includes a time value indicating, to the client, when to resubmit the client request. 22. The method of claim 19, further comprising:
comparing a time when the request has been received with daily network load data including peak hours and off-peak hours; and deferring the request if the time corresponds to the peak hours. 23. The method of claim 19, further comprising:
determining the network load based on information extracted from a network load database depending on a time when the request has been received. 24. The method of claim 19, further comprising:
generating a billing report reflecting that the request has been deferred. 25. A computer readable storage medium storing executable codes which, when executed on a network device including a communication interface and a processing unit, cause the network device to perform a method comprising:
receiving a request for a content delivery from a client device in the network; and determining whether to defer the request depending on network load when the request has been received. | Network devices, servers, and modules operating within an MCA, capable of selectively deferring delivery of non-time-sensitive content, are provided. A network device ( 315 ) includes a communication interface ( 410 ) configured to enable communication with a client device ( 310 ), and to receive a request for a content delivery from the client device ( 310 ). The network device also includes a processing unit ( 420 ) configured to determine whether to defer the request depending on a network load at a time when the request has been received. 1. A network device, comprising:
a communication interface configured to enable communication with a client device, and to receive a request for a content delivery from the client device; and a processing unit configured to determine whether to defer the request depending on a network load at a time when the request has been received. 2. The network device of claim 1, wherein:
if the processing unit has determined that the request is not deferred, the processing unit generates a first message to be sent to the client device via the communication interface, to enable the content delivery; and if the processing unit has determined that the request is deferred, the processing unit generates a second message to be sent to the client device via the communication interface. 3. The network device of claim 2, wherein the second message includes a time value indicating when to resubmit the client request. 4. The network device of claim 3, wherein the time value included in the second message is an absolute time value or a time interval after the time when the request has been received at which to resubmit the request. 5. The network device of claim 2, wherein the network device is configured to operate as a cache server, further comprising a data storage unit connected to the processing unit and configured to store temporarily the content, wherein the first message includes the content. 6. The network device of claim 2, wherein the network device receives the request from another network device that stores pairs of domain names and Internet Protocol addresses, and the other network device, upon receiving the first message, enables the client to pursue the content delivery by sending, to the client, an IP address corresponding to a domain name provided by the client. 7. The network device of claim 1, wherein the processing unit is configured to infer the network load by comparing the time when the request has been received with daily network load data including peak hours and off-peak hours, wherein if the time when the request has been received corresponds to the peak hours, the processing unit defers the request. 8. The network device of claim 1, wherein the processing unit is configured to determine the network load based on information extracted from a network load database depending on a time when the request has been received. 9. 
The network device of claim 8, wherein the network load database is a historical database storing data related to past network load. 10. The network device of claim 8, wherein the network load database is a near real-time database fed with current network load information by a module configured to perform network traffic analysis. 11. The network device of claim 8, further comprising:
a data storage unit configured to store the network load database. 12. The network device of claim 8, wherein the network load database is stored in another network device, and the network device further comprises:
a network load database interface configured to enable communication with another network device to enable extracting the information from the network load database. 13. The network device of claim 1, wherein the processing unit is configured to determine the network load based on latest network load information received from a module configured to perform network traffic analysis, at a time when the request has been received. 14. The network device of claim 1, further comprising:
a billing module interface configured to enable communication with a billing module, wherein the processing unit is further configured to generate a billing report to the billing module, the billing report reflecting whether the request is deferred. 15. The network device of claim 1, wherein the client device is a mobile edge server configured to store temporarily the content, a Domain Name Server configured to store a database storing pairs of domain names and Internet Protocol addresses or a user equipment. 16. The network device of claim 1, wherein the processing unit is further configured to operate as a smart pipe controller within a mobile cloud accelerator. 17. A cache server in a mobile network, comprising:
a communication interface configured to enable communication with a client device that submits a request for a delivery of content; a memory configured to temporarily store a content specified in the request; and a processing unit configured to send a query to a network module as to whether to proceed with delivering the content depending on a network load, wherein, if a response to the query is positive, the processing unit controls the communication interface to send the content stored in the memory to the client device, and, if the response to the query is negative, the processing unit generates a message to indicate, to the client device, that the request is deferred, and controls the communication interface to send the message to the client device. 18. A charging device, comprising:
a communication interface configured to enable communication with a network device that submits an indication that a request for a content delivery of a client device has been deferred; and a processing unit configured to control charging a rate different from a regular rate to a client account corresponding to the request for the content delivery when the indication has been received. 19. A method performed by a network device, the method comprising:
receiving a request for a content delivery from a client device in the network; and determining whether to defer the request depending on network load when the request has been received. 20. The method of claim 19, further comprising:
sending a first message to the client if the request is not deferred, the first message including information enabling the content delivery; and sending a second message to the client if the request is deferred. 21. The method of claim 20, wherein the second message includes a time value indicating, to the client, when to resubmit the client request. 22. The method of claim 19, further comprising:
comparing a time when the request has been received with daily network load data including peak hours and off-peak hours; and deferring the request if the time corresponds to the peak hours. 23. The method of claim 19, further comprising:
determining the network load based on information extracted from a network load database depending on a time when the request has been received. 24. The method of claim 19, further comprising:
generating a billing report reflecting that the request has been deferred. 25. A computer readable storage medium storing executable codes which, when executed on a network device including a communication interface and a processing unit, cause the network device to perform a method comprising:
receiving a request for a content delivery from a client device in the network; and determining whether to defer the request depending on network load when the request has been received. | 2,400 |
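The peak-hour deferral described in the claims above (defer a request that arrives during peak hours and send the client a message with a time value indicating when to resubmit) can be sketched as follows. The peak-hour window, the hourly granularity, and the choice of an absolute resubmit time are all assumptions for illustration; the claims permit other load sources (a historical or near-real-time network load database) and a relative time interval instead.

```python
# Hypothetical sketch of peak-hour deferral; PEAK_HOURS and the
# resubmit policy are invented for the example.
from datetime import datetime, timedelta

PEAK_HOURS = range(18, 23)  # assume 18:00-22:59 counts as peak load

def handle_request(received_at: datetime):
    """Return ('deliver', None) off-peak, or ('deferred', resubmit_time) at peak.

    Mirrors the two messages of claims 2-4: the second message carries a
    time value (here an absolute time) telling the client when to resubmit.
    """
    if received_at.hour in PEAK_HOURS:
        # Defer until the first off-peak hour boundary after receipt.
        resubmit = received_at.replace(minute=0, second=0, microsecond=0)
        while resubmit.hour in PEAK_HOURS:
            resubmit += timedelta(hours=1)
        return ("deferred", resubmit)
    return ("deliver", None)

status, when = handle_request(datetime(2024, 1, 1, 20, 30))
# a 20:30 request is deferred until the 23:00 off-peak boundary
```

A real deployment would infer load from measurements rather than a fixed clock window, but the control flow (compare receipt time against peak hours, then either deliver or return a resubmit time) matches the claimed method.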
7,503 | 7,503 | 14,945,454 | 2,425 | A device, such as for example and without limitation, a set-top box, is programmed to receive metadata relating to available media content. At least one item of media content is identified as selected to be provided to a display upon activation of the display. The at least one item of selected media content is provided to the display upon activation of the display. | 1. A computing device that includes a processor and a memory, the memory storing instructions executable by the processor such that the computing device is programmed to:
receive metadata relating to available media content; identify at least one item of media content as selected to be provided to a display upon activation, in response to user input, of an application running in a background status on the computing device, wherein activation of the application includes bringing the application from the background status to a foreground status; and in response to activation of the application, provide the at least one item of selected media content to the display. 2. The device of claim 1, further programmed to identify the at least one selected item of media content according to a programming channel via which the media content is available. 3. The device of claim 1, further programmed to identify the at least one item of media content as selected at least in part according to an identifier for a programming channel stored in a memory of the media device. 4. The device of claim 1, further programmed to identify the at least one item of media content as selected at least in part according to an expiration date associated with the at least one item of media content. 5. The device of claim 1, wherein the at least one item of selected media content is a plurality of selected items of media content, the device being further programmed to choose at most one of the selected items to provide to the display. 6. The device of claim 1, further programmed to provide the at least one item of media content to the display only if the media content passes one or more programming parameters. 7. The device of claim 6, wherein the programming parameters include parental controls. 8. The device of claim 1, wherein the media content includes video content. 9. The device of claim 1, wherein the device is a set-top box. 10. (canceled) 11. A method, comprising:
receiving, by a computing device, metadata relating to available media content; identifying at least one item of media content as selected to be provided to a display upon activation, in response to user input to the computing device, of an application running in a background status on the computing device, wherein activation of the application includes bringing the application from the background status to a foreground status; and in response to activation of the application, providing the at least one item of selected media content to the display. 12. The method of claim 11, further comprising identifying the at least one selected item of media content according to a programming channel via which the media content is available. 13. The method of claim 11, further comprising identifying the at least one item of media content as selected at least in part according to an identifier for a programming channel stored in a memory of the media device. 14. The method of claim 11, further comprising identifying the at least one item of media content as selected at least in part according to an expiration date associated with the at least one item of media content. 15. The method of claim 11, wherein the at least one item of selected media content is a plurality of selected items of media content, the method further comprising choosing at most one of the selected items to provide to the display. 16. The method of claim 11, further comprising providing the at least one item of media content to the display only if the media content passes one or more programming parameters. 17. The method of claim 16, wherein the programming parameters include parental controls. 18. The method of claim 11, wherein the media content includes video content. 19. The method of claim 11, executed according to program instructions stored in the memory of a set-top box. 20.
(canceled) | A device, such as for example and without limitation, a set-top box, is programmed to receive metadata relating to available media content. At least one item of media content is identified as selected to be provided to a display upon activation of the display. The at least one item of selected media content is provided to the display upon activation of the display. 1. A computing device that includes a processor and a memory, the memory storing instructions executable by the processor such that the computing device is programmed to:
receive metadata relating to available media content; identify at least one item of media content as selected to be provided to a display upon activation, in response to user input, of an application running in a background status on the computing device, wherein activation of the application includes bringing the application from the background status to a foreground status; and in response to activation of the application, provide the at least one item of selected media content to the display. 2. The device of claim 1, further programmed to identify the at least one selected item of media content according to a programming channel via which the media content is available. 3. The device of claim 1, further programmed to identify the at least one item of media content as selected at least in part according to an identifier for a programming channel stored in a memory of the media device. 4. The device of claim 1, further programmed to identify the at least one item of media content as selected at least in part according to an expiration date associated with the at least one item of media content. 5. The device of claim 1, wherein the at least one item of selected media content is a plurality of selected items of media content, the device being further programmed to choose at most one of the selected items to provide to the display. 6. The device of claim 1, further programmed to provide the at least one item of media content to the display only if the media content passes one or more programming parameters. 7. The device of claim 6, wherein the programming parameters include parental controls. 8. The device of claim 1, wherein the media content includes video content. 9. The device of claim 1, wherein the device is a set-top box. 10. (canceled) 11. A method, comprising:
receiving, by a computing device, metadata relating to available media content; identifying at least one item of media content as selected to be provided to a display upon activation, in response to user input to the computing device, of an application running in a background status on the computing device, wherein activation of the application includes bringing the application from the background status to a foreground status; and in response to activation of the application, providing the at least one item of selected media content to the display. 12. The method of claim 11, further comprising identifying the at least one selected item of media content according to a programming channel via which the media content is available. 13. The method of claim 11, further comprising identifying the at least one item of media content as selected at least in part according to an identifier for a programming channel stored in a memory of the media device. 14. The method of claim 11, further comprising identifying the at least one item of media content as selected at least in part according to an expiration date associated with the at least one item of media content. 15. The method of claim 11, wherein the at least one item of selected media content is a plurality of selected items of media content, the method further comprising choosing at most one of the selected items to provide to the display. 16. The method of claim 11, further comprising providing the at least one item of media content to the display only if the media content passes one or more programming parameters. 17. The method of claim 16, wherein the programming parameters include parental controls. 18. The method of claim 11, wherein the media content includes video content. 19. The method of claim 11, executed according to program instructions stored in the memory of a set-top box. 20. (canceled) | 2,400
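The selection logic recited in the claims above (a stored channel identifier, an expiration date, parental-control parameters, and choosing at most one item to provide on activation) can be sketched as follows. This is a minimal illustration, not the patented implementation; the field names, date format, and the rating threshold are all assumptions made for the example.

```python
# Minimal sketch of the claimed selection logic: filter available-media
# metadata by a stored channel identifier, expiration, and parental
# controls, then choose at most one item to provide when the application
# is brought to the foreground. All field names are illustrative.
from datetime import date

STORED_CHANNEL_ID = "hbo"      # identifier stored in device memory (example)
PARENTAL_MAX_RATING = 13       # parental-control programming parameter (example)

def choose_item(metadata: list, today: date):
    """Return at most one item of media content selected for display."""
    candidates = [
        m for m in metadata
        if m["channel"] == STORED_CHANNEL_ID       # channel identifier check
        and m["expires"] >= today                  # expiration-date check
        and m["rating"] <= PARENTAL_MAX_RATING     # parental controls
    ]
    # At most one of the selected items is provided to the display.
    return candidates[0] if candidates else None

items = [
    {"title": "A", "channel": "hbo", "expires": date(2030, 1, 1), "rating": 18},
    {"title": "B", "channel": "hbo", "expires": date(2030, 1, 1), "rating": 7},
]
print(choose_item(items, date(2024, 1, 1))["title"])  # -> B
```

Item "A" is filtered out by the parental-control check, so only "B" survives and would be provided to the display on activation.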
7,504 | 7,504 | 11,320,593 | 2,434 | A technique for authenticating network users is disclosed. In one particular exemplary embodiment, the technique may be realized as a method for authenticating network users. The method may comprise receiving, from a client device, a request for connection to a network. The method may also comprise evaluating a security context associated with the requested connection. The method may further comprise assigning the client device one or more access privileges based at least in part on the evaluation of the security context. | 1. A method for authenticating network users comprising the steps of:
receiving, from a client device, a request for connection to a network; evaluating a security context associated with the requested connection; and assigning the client device one or more access privileges based at least in part on the evaluation of the security context. 2. The method according to claim 1, wherein the security context is evaluated at least in part by an agent program in the client device. 3. The method according to claim 2, wherein the agent program interacts with the network to evaluate the security context. 4. The method according to claim 2, wherein at least a portion of the security context is evaluated prior to the request for connection. 5. The method according to claim 2, wherein the agent program comprises a JAVA applet. 6. The method according to claim 2, wherein the agent program is automatically downloaded to the client device upon receipt of the request for connection. 7. The method according to claim 6, wherein:
the agent program remains in the client device, after the client device disconnects from the network, in preparation for a subsequent connection to the network. 8. The method according to claim 1, wherein the security context comprises one or more factors selected from a group consisting of:
a user login mechanism employed by the client device; a threat level associated with the network; vulnerabilities of an access medium with which the client device connects to the network; and a security level associated with the client device. 9. The method according to claim 1, further comprising:
generating a security token that records the one or more access privileges assigned to the client device; and storing the security token in the client device. 10. The method according to claim 9, further comprising:
detecting the security token in the client device when the client device, after ending a first connection to the network, attempts a second connection to the network; and granting the client device access to the network based on the one or more recorded access privileges if the security token is detected and verified. 11. The method according to claim 10, wherein the first and the second connections to the network are through different ports. 12. The method according to claim 1, further comprising:
generating a security token that records at least a portion of the security context; and storing the security token in the client device. 13. The method according to claim 12, further comprising:
detecting the security token in the client device when the client device, after ending a first connection to the network, attempts a second connection to the network; and granting the client device access to the network based at least in part on the recorded security context if the security token is detected and verified. 14. The method according to claim 13, wherein the recorded security context is updated prior to the client device's attempt of the second connection to the network. 15. The method according to claim 1, further comprising:
configuring a connection between the client device and the network based at least in part on the evaluation of the security context. 16. The method according to claim 15, further comprising:
re-configuring the connection between the client device and the network based at least in part on a security token stored in the client device. 17. At least one signal embodied in at least one carrier wave for transmitting a computer program of instructions configured to be readable by at least one processor for instructing the at least one processor to execute a computer process for performing the method as recited in claim 1. 18. At least one processor readable carrier for storing a computer program of instructions configured to be readable by at least one processor for instructing the at least one processor to execute a computer process for performing the method as recited in claim 1. 19. A system for authenticating network users, the system comprising:
a network interface that facilitates communications between a client device and a network; and at least one processor that
receives, from a client device, a request for connection to the network;
causes a security context associated with the requested connection to be evaluated; and
assigns the client device one or more access privileges based at least in part on the evaluation of the security context. 20. A method for authenticating network users, the method comprising the steps of:
receiving, from a client device, a request for connection to a network; identifying a communication protocol employed by the client device; adopting an authentication scheme that is compatible with the communication protocol, if the compatible authentication scheme is available for use by the network to authenticate the client device; and downloading an agent program to the client device if the compatible authentication scheme is not available, wherein the agent program interacts with the network to authenticate the client device. 21. The method according to claim 20, wherein the compatible authentication scheme is selected from a group consisting of:
authentication schemes associated with IEEE 802.1x standard; authentication schemes based on one or more Media Access Control (MAC) address lists; authentication schemes based on one or more Internet Protocol (IP) address lists; and authentication schemes based on Remote Authentication Dial In User Service (RADIUS) protocol. | A technique for authenticating network users is disclosed. In one particular exemplary embodiment, the technique may be realized as a method for authenticating network users. The method may comprise receiving, from a client device, a request for connection to a network. The method may also comprise evaluating a security context associated with the requested connection. The method may further comprise assigning the client device one or more access privileges based at least in part on the evaluation of the security context. 1. A method for authenticating network users comprising the steps of:
receiving, from a client device, a request for connection to a network; evaluating a security context associated with the requested connection; and assigning the client device one or more access privileges based at least in part on the evaluation of the security context. 2. The method according to claim 1, wherein the security context is evaluated at least in part by an agent program in the client device. 3. The method according to claim 2, wherein the agent program interacts with the network to evaluate the security context. 4. The method according to claim 2, wherein at least a portion of the security context is evaluated prior to the request for connection. 5. The method according to claim 2, wherein the agent program comprises a JAVA applet. 6. The method according to claim 2, wherein the agent program is automatically downloaded to the client device upon receipt of the request for connection. 7. The method according to claim 6, wherein:
the agent program remains in the client device, after the client device disconnects from the network, in preparation for a subsequent connection to the network. 8. The method according to claim 1, wherein the security context comprises one or more factors selected from a group consisting of:
a user login mechanism employed by the client device; a threat level associated with the network; vulnerabilities of an access medium with which the client device connects to the network; and a security level associated with the client device. 9. The method according to claim 1, further comprising:
generating a security token that records the one or more access privileges assigned to the client device; and storing the security token in the client device. 10. The method according to claim 9, further comprising:
detecting the security token in the client device when the client device, after ending a first connection to the network, attempts a second connection to the network; and granting the client device access to the network based on the one or more recorded access privileges if the security token is detected and verified. 11. The method according to claim 10, wherein the first and the second connections to the network are through different ports. 12. The method according to claim 1, further comprising:
generating a security token that records at least a portion of the security context; and storing the security token in the client device. 13. The method according to claim 12, further comprising:
detecting the security token in the client device when the client device, after ending a first connection to the network, attempts a second connection to the network; and granting the client device access to the network based at least in part on the recorded security context if the security token is detected and verified. 14. The method according to claim 13, wherein the recorded security context is updated prior to the client device's attempt of the second connection to the network. 15. The method according to claim 1, further comprising:
configuring a connection between the client device and the network based at least in part on the evaluation of the security context. 16. The method according to claim 15, further comprising:
re-configuring the connection between the client device and the network based at least in part on a security token stored in the client device. 17. At least one signal embodied in at least one carrier wave for transmitting a computer program of instructions configured to be readable by at least one processor for instructing the at least one processor to execute a computer process for performing the method as recited in claim 1. 18. At least one processor readable carrier for storing a computer program of instructions configured to be readable by at least one processor for instructing the at least one processor to execute a computer process for performing the method as recited in claim 1. 19. A system for authenticating network users, the system comprising:
a network interface that facilitates communications between a client device and a network; and at least one processor that
receives, from a client device, a request for connection to the network;
causes a security context associated with the requested connection to be evaluated; and
assigns the client device one or more access privileges based at least in part on the evaluation of the security context. 20. A method for authenticating network users, the method comprising the steps of:
receiving, from a client device, a request for connection to a network; identifying a communication protocol employed by the client device; adopting an authentication scheme that is compatible with the communication protocol, if the compatible authentication scheme is available for use by the network to authenticate the client device; and downloading an agent program to the client device if the compatible authentication scheme is not available, wherein the agent program interacts with the network to authenticate the client device. 21. The method according to claim 20, wherein the compatible authentication scheme is selected from a group consisting of:
authentication schemes associated with IEEE 802.1x standard; authentication schemes based on one or more Media Access Control (MAC) address lists; authentication schemes based on one or more Internet Protocol (IP) address lists; and authentication schemes based on Remote Authentication Dial In User Service (RADIUS) protocol. | 2,400
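The token flow recited in the claims above (assign privileges from an evaluated security context, record them in a security token stored on the client, and honor the token on a later connection if it verifies) can be sketched as follows. This is an illustrative outline only: the scoring policy, field names, and the use of HMAC are assumptions standing in for whatever integrity mechanism a real deployment would use.

```python
# Sketch of the claimed token-based flow: the network assigns access
# privileges from the security context, records them in a signed token
# stored on the client device, and grants access on a later connection
# if the token verifies. Policy and names are invented for illustration.
import hmac, hashlib, json

SECRET = b"network-side-secret"  # example network-side key

def assign_privileges(security_context: dict) -> list:
    # Toy policy over the factors listed in the claims (login mechanism,
    # threat level, access-medium vulnerabilities, device security level).
    privileges = ["basic"]
    if (security_context.get("login") == "strong"
            and security_context.get("threat_level", 10) < 3):
        privileges.append("full")
    return privileges

def issue_token(privileges: list) -> dict:
    """Generate a security token recording the assigned privileges."""
    payload = json.dumps({"privileges": privileges}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}  # stored in the client device

def verify_token(token: dict):
    """Return recorded privileges if the token verifies, else None."""
    expected = hmac.new(SECRET, token["payload"], hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, token["sig"]):
        return json.loads(token["payload"])["privileges"]
    return None  # missing/invalid token: fall back to full re-evaluation

token = issue_token(assign_privileges({"login": "strong", "threat_level": 1}))
print(verify_token(token))  # -> ['basic', 'full']
```

On a second connection (possibly through a different port, per claim 11), the network detects and verifies the stored token and can grant access without re-evaluating the full security context.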
7,505 | 7,505 | 14,563,261 | 2,413 | The disclosure relates to a user equipment for a wireless communications system, and to a related method for identifying a resource to use for a transmission of control information on a physical uplink control channel, PUCCH, format 3. The method comprises receiving ( 610 ) a resource index from a serving radio base station, and identifying ( 620 ) the resource to use for the transmission of the control information in a subframe based on the received resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. | 1. A method in a user equipment of a wireless communication system, for identifying a resource to use for a transmission of control information on a physical uplink control channel, PUCCH, format 3, the method comprising:
receiving a resource index from a serving radio base station, and identifying the resource to use for the transmission of the control information in a subframe based on the received resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 2. The method according to claim 1, wherein identifying the resource comprises identifying a physical resource block based on the received resource index, wherein the identified physical resource block is the same regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 3. The method according to claim 2, wherein the physical resource block is identified based on nPRB given by the following equation:
n_PRB = ⌊n_PUCCH / N_SF,0^PUCCH⌋
where nPUCCH is the received resource index and NSF,0 PUCCH is a number of orthogonal sequences available for a physical resource block in a first time slot of the subframe. 4. The method according to claim 1, wherein identifying the resource comprises identifying an orthogonal sequence based on an orthogonal sequence index noc given by the following equation:
n_oc = n_PUCCH mod N_SF,1^PUCCH
where nPUCCH is the received resource index, and NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe. 5. The method according to claim 1, wherein identifying the resource comprises:
calculating a modified resource index based on the received resource index and a total number of physical resource blocks available for PUCCH format 3, and identifying the resource based on the modified resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 6. The method according to claim 5, wherein the modified resource index is calculated as a modulo operation with the received resource index as the dividend and the total number of physical resource blocks available for PUCCH format 3 as the divisor. 7. The method according to claim 5, wherein identifying the resource based on the modified resource index comprises identifying a physical resource block based on nPRB given by the following equation:
n_PRB = ⌊ñ_PUCCH / N_SF,1^PUCCH⌋ + N_start
where ñPUCCH is the modified resource index, NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe, and Nstart is a starting position of the confined set of physical resource blocks. 8. The method according to claim 5, wherein identifying the resource based on the modified resource index comprises identifying an orthogonal sequence based on an orthogonal sequence index noc given by the following equation:
n_oc = ñ_PUCCH mod N_SF,1^PUCCH
where ñPUCCH is the modified resource index, and NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe. 9. A user equipment for a wireless communication system, configured to identify a resource to use for a transmission of control information on a physical uplink control channel, PUCCH, format 3, the user equipment comprising:
a receiving unit adapted to receive a resource index from a serving radio base station, and an identifying unit adapted to identify the resource to use for the transmission of the control information in a subframe based on the received resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 10. The user equipment according to claim 9, wherein the identifying unit is adapted to identify a physical resource block based on the received resource index, wherein the identified physical resource block is the same regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 11. The user equipment according to claim 10, wherein the identifying unit is adapted to identify the physical resource block based on nPRB given by the following equation:
n_PRB = ⌊n_PUCCH / N_SF,0^PUCCH⌋
where nPUCCH is the received resource index and NSF,0 PUCCH is a number of orthogonal sequences available for a physical resource block in a first time slot of the subframe. 12. The user equipment according to claim 9, wherein the identifying unit is adapted to identify an orthogonal sequence based on an orthogonal sequence index noc given by the following equation:
n_oc = n_PUCCH mod N_SF,1^PUCCH
where nPUCCH is the received resource index, and NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe. 13. The user equipment according to claim 9, wherein the identifying unit is further adapted to calculate a modified resource index based on the received resource index and a total number of physical resource blocks available for PUCCH format 3, and to identify the resource based on the modified resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 14. The user equipment according to claim 13, wherein the identifying unit is further adapted to calculate the modified resource index as a modulo operation with the received resource index as the dividend and the total number of physical resource blocks available for PUCCH format 3 as the divisor. 15. The user equipment according to claim 13, wherein the identifying unit is adapted to identify a physical resource block based on nPRB given by the following equation:
n_PRB = ⌊ñ_PUCCH / N_SF,1^PUCCH⌋ + N_start
where ñPUCCH is the modified resource index, NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe, and Nstart is a starting position of the confined set of physical resource blocks. 16. The user equipment according to claim 13, wherein the identifying unit is adapted to identify an orthogonal sequence based on an orthogonal sequence index noc given by the following equation:
n_oc = ñ_PUCCH mod N_SF,1^PUCCH
where ñPUCCH is the modified resource index, and NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe. | The disclosure relates to a user equipment for a wireless communications system, and to a related method for identifying a resource to use for a transmission of control information on a physical uplink control channel, PUCCH, format 3. The method comprises receiving ( 610 ) a resource index from a serving radio base station, and identifying ( 620 ) the resource to use for the transmission of the control information in a subframe based on the received resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 1. A method in a user equipment of a wireless communication system, for identifying a resource to use for a transmission of control information on a physical uplink control channel, PUCCH, format 3, the method comprising:
receiving a resource index from a serving radio base station, and identifying the resource to use for the transmission of the control information in a subframe based on the received resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 2. The method according to claim 1, wherein identifying the resource comprises identifying a physical resource block based on the received resource index, wherein the identified physical resource block is the same regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 3. The method according to claim 2, wherein the physical resource block is identified based on nPRB given by the following equation:
n_PRB = ⌊n_PUCCH / N_SF,0^PUCCH⌋
where nPUCCH is the received resource index and NSF,0 PUCCH is a number of orthogonal sequences available for a physical resource block in a first time slot of the subframe. 4. The method according to claim 1, wherein identifying the resource comprises identifying an orthogonal sequence based on an orthogonal sequence index noc given by the following equation:
n_oc = n_PUCCH mod N_SF,1^PUCCH
where nPUCCH is the received resource index, and NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe. 5. The method according to claim 1, wherein identifying the resource comprises:
calculating a modified resource index based on the received resource index and a total number of physical resource blocks available for PUCCH format 3, and identifying the resource based on the modified resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 6. The method according to claim 5, wherein the modified resource index is calculated as a modulo operation with the received resource index as the dividend and the total number of physical resource blocks available for PUCCH format 3 as the divisor. 7. The method according to claim 5, wherein identifying the resource based on the modified resource index comprises identifying a physical resource block based on nPRB given by the following equation:
n_PRB = ⌊ñ_PUCCH / N_SF,1^PUCCH⌋ + N_start
where ñPUCCH is the modified resource index, NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe, and Nstart is a starting position of the confined set of physical resource blocks. 8. The method according to claim 5, wherein identifying the resource based on the modified resource index comprises identifying an orthogonal sequence based on an orthogonal sequence index noc given by the following equation:
n_oc = ñ_PUCCH mod N_SF,1^PUCCH
where ñPUCCH is the modified resource index, and NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe. 9. A user equipment for a wireless communication system, configured to identify a resource to use for a transmission of control information on a physical uplink control channel, PUCCH, format 3, the user equipment comprising:
a receiving unit adapted to receive a resource index from a serving radio base station, and an identifying unit adapted to identify the resource to use for the transmission of the control information in a subframe based on the received resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 10. The user equipment according to claim 9, wherein the identifying unit is adapted to identify a physical resource block based on the received resource index, wherein the identified physical resource block is the same regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 11. The user equipment according to claim 10, wherein the identifying unit is adapted to identify the physical resource block based on nPRB given by the following equation:
n_PRB = ⌊n_PUCCH / N_SF,0^PUCCH⌋
where nPUCCH is the received resource index and NSF,0 PUCCH is a number of orthogonal sequences available for a physical resource block in a first time slot of the subframe. 12. The user equipment according to claim 9, wherein the identifying unit is adapted to identify an orthogonal sequence based on an orthogonal sequence index noc given by the following equation:
n_oc = n_PUCCH mod N_SF,1^PUCCH
where nPUCCH is the received resource index, and NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe. 13. The user equipment according to claim 9, wherein the identifying unit is further adapted to calculate a modified resource index based on the received resource index and a total number of physical resource blocks available for PUCCH format 3, and to identify the resource based on the modified resource index, wherein the identified resource is within a same confined set of physical resource blocks regardless of if a normal or a shortened PUCCH format 3 is used in the subframe. 14. The user equipment according to claim 13, wherein the identifying unit is further adapted to calculate the modified resource index as a modulo operation with the received resource index as the dividend and the total number of physical resource blocks available for PUCCH format 3 as the divisor. 15. The user equipment according to claim 13, wherein the identifying unit is adapted to identify a physical resource block based on nPRB given by the following equation:
nPRB = ⌊ñPUCCH/NSF,1 PUCCH⌋ + Nstart
where ñPUCCH is the modified resource index, NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe, and Nstart is a starting position of the confined set of physical resource blocks. 16. The user equipment according to claim 13, wherein the identifying unit is adapted to identify an orthogonal sequence based on an orthogonal sequence index noc given by the following equation:
noc = ñPUCCH mod NSF,1 PUCCH
where ñPUCCH is the modified resource index, and NSF,1 PUCCH is a number of orthogonal sequences available for a physical resource block in a second time slot of the subframe. | 2,400 |
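The resource mappings recited in claims 11-16 above are plain integer arithmetic and can be sketched as follows. The per-slot orthogonal-sequence counts and all concrete numbers are illustrative assumptions; the claims leave them open.

```python
# Sketch of the claimed PUCCH format 3 resource mapping (claims 11-16).
# The per-slot sequence counts below are assumed for illustration only.
N_SF0 = 5  # assumed NSF,0 PUCCH: orthogonal sequences per PRB, first slot
N_SF1 = 5  # assumed NSF,1 PUCCH: orthogonal sequences per PRB, second slot

def prb_from_index(n_pucch: int) -> int:
    """Claim 11: nPRB = floor(nPUCCH / NSF,0 PUCCH)."""
    return n_pucch // N_SF0

def oc_from_index(n_pucch: int) -> int:
    """Claim 12: noc = nPUCCH mod NSF,1 PUCCH."""
    return n_pucch % N_SF1

def modified_index(n_pucch: int, n_prb_format3: int) -> int:
    """Claim 14: received index modulo the total PRBs available for format 3."""
    return n_pucch % n_prb_format3

def prb_from_modified(n_tilde: int, n_start: int) -> int:
    """Claim 15: nPRB = floor(ñPUCCH / NSF,1 PUCCH) + Nstart."""
    return n_tilde // N_SF1 + n_start
```

Because the identifying unit evaluates the same mapping in either case, the resulting physical resource block falls within the same confined set whether a normal or a shortened PUCCH format 3 is used in the subframe.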
7,506 | 7,506 | 14,780,892 | 2,495 | A controller that is separate from a processor of the system verifies controller code for execution on the controller. In response to verifying the controller code, the controller verifies a system boot code. | 1. A method comprising:
during an initialization procedure of a controller in a system, verifying, by the controller that is separate from a processor of the system, controller code for execution on the controller, wherein the verifying is performed before execution of the processor; and after verifying the controller code, verifying, by the controller, system boot code, wherein the system boot code is for execution by the processor. 2. The method of claim 1, further comprising:
the controller code upon execution by the controller verifying the system boot code prior to each instance of the processor restarting execution of the system boot code. 3. The method of claim 1, wherein the controller code upon execution in the controller causes the controller to perform at least one selected from among: power supply control in the system, thermal monitoring in the system, fan control in the system, battery charging and control in the system, and interaction with a user input device. 4. The method of claim 1, wherein the system boot code includes core root of trust for measurement (CRTM) logic executable on the processor to make measurements in the system that are used by the system to determine trustworthiness of the system. 5. The method of claim 1, wherein verifying the controller code comprises verifying embedded controller firmware that is part of basic input/output system (BIOS) code stored in the memory. 6. The method of claim 1, wherein the controller code is retrieved from a private memory to perform the verifying of the controller code, and wherein the private memory is accessible by the controller and inaccessible by the processor, the method further comprising:
in response to detecting compromise of the controller code in the private memory, retrieving, by the controller, controller code from a shared memory that is also accessible by the processor, and verifying the controller code retrieved from the shared memory; and in response to verifying the controller code retrieved from the shared memory, executing the controller code retrieved from the shared memory in the controller to perform the verifying of the system boot code in one or both of the private memory and the shared memory. 7. The method of claim 6, further comprising:
in response to determining that either the system boot code in the private memory or the system boot code in the shared memory is compromised, updating the compromised system boot code in the private memory or the shared memory with a non-compromised system boot code from the private memory or the shared memory. 8. The method of claim 6, further comprising:
determining, by the controller, whether the system boot code in the shared memory is a different version from the system boot code in the private memory; in response to determining that the system boot code in the shared memory is of a different version from the system boot code in the private memory, determining whether a lock policy is set specifying that system boot code is to be locked to a version of the system boot code in the private memory; and in response to determining that the lock policy is set, updating the system boot code in the shared memory to the version of the system boot code in the private memory. 9. The method of claim 1, wherein the controller has read-only memory storing a cryptographic key, and wherein the verifying of the controller code uses the cryptographic key. 10. A system comprising:
a processor; a first memory storing controller firmware and a boot block of Basic Input/Output System (BIOS) code; a second memory storing controller firmware and a boot block of BIOS code; and an embedded controller to:
during initialization of the embedded controller while the processor is off, verify the controller firmware stored in the first memory, wherein the controller firmware is for execution in the controller, and wherein the first memory is accessible by the embedded controller but inaccessible to the processor; and
in response to detecting compromise of the controller firmware in the first memory, retrieve the controller firmware stored in the second memory. 11. The system of claim 10, wherein the embedded controller is to execute the controller firmware of the first memory or second memory to verify the boot blocks in the first memory and the second memory. 12. The system of claim 11, wherein the embedded controller is to replace a compromised one of the boot blocks with a non-compromised one of the boot blocks. 13. The system of claim 11, wherein the verifying of the boot blocks is performed in response to the system transitioning to a state after which the processor will subsequently restart execution from the boot block in the second memory. 14. The system of claim 10, wherein the first memory stores policy information indicating at least one or a combination of the following policies:
a policy specifying whether an aggressive mode of operation is to be used to enable verification of the boot block in the second memory before each instance of the processor restarting execution from the boot block in the second memory; a policy specifying whether a manual or automated recovery mode is to be used, where a manual recovery mode involves a user action before recovery of a compromised boot block is allowed to be performed; and a policy specifying whether a locked or unlocked mode is to be used, where locked mode causes system firmware to be locked to a specific version. 15. An article comprising at least one machine-readable storage medium storing instructions that upon execution cause a system to:
before a processor of the system starts executing a boot block, verify a controller code stored in a first memory, wherein the controller code is for execution in an embedded controller, and wherein the first memory is accessible by the embedded controller but inaccessible to the processor of the system; and in response to detecting compromise of the controller code in the first memory, retrieve controller code stored in a second memory that is accessible by both the embedded controller and the processor; and execute the controller code of the first memory or the second memory to perform verification of the boot block in one or both of the first and second memories. | A controller that is separate from a processor of the system verifies controller code for execution on the controller. In response to verifying the controller code, the controller verifies a system boot code.1. A method comprising:
during an initialization procedure of a controller in a system, verifying, by the controller that is separate from a processor of the system, controller code for execution on the controller, wherein the verifying is performed before execution of the processor; and after verifying the controller code, verifying, by the controller, system boot code, wherein the system boot code is for execution by the processor. 2. The method of claim 1, further comprising:
the controller code upon execution by the controller verifying the system boot code prior to each instance of the processor restarting execution of the system boot code. 3. The method of claim 1, wherein the controller code upon execution in the controller causes the controller to perform at least one selected from among: power supply control in the system, thermal monitoring in the system, fan control in the system, battery charging and control in the system, and interaction with a user input device. 4. The method of claim 1, wherein the system boot code includes core root of trust for measurement (CRTM) logic executable on the processor to make measurements in the system that are used by the system to determine trustworthiness of the system. 5. The method of claim 1, wherein verifying the controller code comprises verifying embedded controller firmware that is part of basic input/output system (BIOS) code stored in the memory. 6. The method of claim 1, wherein the controller code is retrieved from a private memory to perform the verifying of the controller code, and wherein the private memory is accessible by the controller and inaccessible by the processor, the method further comprising:
in response to detecting compromise of the controller code in the private memory, retrieving, by the controller, controller code from a shared memory that is also accessible by the processor, and verifying the controller code retrieved from the shared memory; and in response to verifying the controller code retrieved from the shared memory, executing the controller code retrieved from the shared memory in the controller to perform the verifying of the system boot code in one or both of the private memory and the shared memory. 7. The method of claim 6, further comprising:
in response to determining that either the system boot code in the private memory or the system boot code in the shared memory is compromised, updating the compromised system boot code in the private memory or the shared memory with a non-compromised system boot code from the private memory or the shared memory. 8. The method of claim 6, further comprising:
determining, by the controller, whether the system boot code in the shared memory is a different version from the system boot code in the private memory; in response to determining that the system boot code in the shared memory is of a different version from the system boot code in the private memory, determining whether a lock policy is set specifying that system boot code is to be locked to a version of the system boot code in the private memory; and in response to determining that the lock policy is set, updating the system boot code in the shared memory to the version of the system boot code in the private memory. 9. The method of claim 1, wherein the controller has read-only memory storing a cryptographic key, and wherein the verifying of the controller code uses the cryptographic key. 10. A system comprising:
a processor; a first memory storing controller firmware and a boot block of Basic Input/Output System (BIOS) code; a second memory storing controller firmware and a boot block of BIOS code; and an embedded controller to:
during initialization of the embedded controller while the processor is off, verify the controller firmware stored in the first memory, wherein the controller firmware is for execution in the controller, and wherein the first memory is accessible by the embedded controller but inaccessible to the processor; and
in response to detecting compromise of the controller firmware in the first memory, retrieve the controller firmware stored in the second memory. 11. The system of claim 10, wherein the embedded controller is to execute the controller firmware of the first memory or second memory to verify the boot blocks in the first memory and the second memory. 12. The system of claim 11, wherein the embedded controller is to replace a compromised one of the boot blocks with a non-compromised one of the boot blocks. 13. The system of claim 11, wherein the verifying of the boot blocks is performed in response to the system transitioning to a state after which the processor will subsequently restart execution from the boot block in the second memory. 14. The system of claim 10, wherein the first memory stores policy information indicating at least one or a combination of the following policies:
a policy specifying whether an aggressive mode of operation is to be used to enable verification of the boot block in the second memory before each instance of the processor restarting execution from the boot block in the second memory; a policy specifying whether a manual or automated recovery mode is to be used, where a manual recovery mode involves a user action before recovery of a compromised boot block is allowed to be performed; and a policy specifying whether a locked or unlocked mode is to be used, where locked mode causes system firmware to be locked to a specific version. 15. An article comprising at least one machine-readable storage medium storing instructions that upon execution cause a system to:
before a processor of the system starts executing a boot block, verify a controller code stored in a first memory, wherein the controller code is for execution in an embedded controller, and wherein the first memory is accessible by the embedded controller but inaccessible to the processor of the system; and in response to detecting compromise of the controller code in the first memory, retrieve controller code stored in a second memory that is accessible by both the embedded controller and the processor; and execute the controller code of the first memory or the second memory to perform verification of the boot block in one or both of the first and second memories. | 2,400 |
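The verify-then-fall-back chain described in the claims above can be sketched as follows. This is not the patented implementation: the SHA-256 digest comparison merely stands in for whatever cryptographic check the controller performs with its ROM key (claim 9), and all function names and return values are hypothetical.

```python
import hashlib

def verified(image: bytes, expected: str) -> bool:
    # Stand-in for verification using the key in the controller's ROM (claim 9).
    return hashlib.sha256(image).hexdigest() == expected

def controller_boot(private_fw: bytes, shared_fw: bytes, fw_digest: str,
                    boot_block: bytes, boot_digest: str) -> str:
    # Verify the controller firmware from private memory before the CPU runs.
    if verified(private_fw, fw_digest):
        firmware = private_fw
    elif verified(shared_fw, fw_digest):
        # Private copy compromised: fall back to the shared (CPU-visible) copy.
        firmware = shared_fw
    else:
        return "halt"  # no trusted controller code available at all
    # Now executing trusted firmware, verify the system boot block before
    # the processor is allowed to start executing it.
    return "release-cpu" if verified(boot_block, boot_digest) else "recover-boot-block"
```

The ordering is the essential point of the claims: the controller establishes trust in its own code first, and only then extends that trust to the boot code the processor will run.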
7,507 | 7,507 | 15,391,662 | 2,456 | A method is disclosed for dynamically updating the content of a website or a web service via a text message. A text service may receive, at a text phone number, the text message sent by a user from a text device. The text device may have a user phone number. In preferred embodiments, the text service may have a plurality of text phone numbers that may be called by a plurality of users. The text service may produce an action code based, at least in part, on the text message, the text phone number called by the user, the user phone number or some combination thereof. The text service may transmit the action code to one or more hosting servers to alter a website which may then be published. In another embodiment, the text service may transmit the action code to web server(s) that may alter a web service for the user. | 1. A method, comprising the steps of:
hosting a website configured to be altered by a text message sent by an owner of the website from a text device having a user phone number, wherein a text service determines the website to be altered out of a plurality of websites based on the user phone number; receiving the text message at a text phone number of the text service running on one or more hardware servers sent by the owner of the website from the text device; producing, on the one or more hardware servers, an action code comprising an alpha-numeric string based, at least in part, on the text message; and transmitting, by the one or more hardware servers, the action code to one or more hosting servers, wherein the action code is used to alter the website by changing a template of the website. 2. The method of claim 1, further comprising the step of:
publishing the altered website on the one or more hosting servers. 3. The method of claim 1, further comprising the step of producing, on the one or more hardware servers, the action code based, at least in part, on the text message and the user phone number. 4. The method of claim 1, further comprising the step of producing, on the one or more hardware servers, the action code based, at least in part, on the text message and the text phone number. 5. The method of claim 2, further comprising the steps of:
the one or more hosting servers transmitting the action code to a plugin for the website; and the plugin altering the website based on the action code. 6. The method of claim 1, wherein the received text message is in a Short Message Service format or a Multimedia Messaging Service format. 7. The method of claim 1, wherein the text phone number is included in a plurality of text phone numbers operated on the one or more hardware servers. 8. A method, comprising the steps of:
hosting a first website configured to be altered by a first text message sent by an owner of the first website from a first text device having a first user phone number, wherein a text service determines the first website to be altered out of a plurality of websites based on the first user phone number and hosting a second website configured to be altered by a second text message sent by an owner of the second website from a second text device having a second user phone number, wherein the text service determines the second website to be altered out of a plurality of websites based on the second user phone number; receiving the first text message at the first text phone number of the text service running on one or more hardware servers sent by the owner of the first website from the first text device; receiving the second text message at the first text phone number of the text service running on the one or more hardware servers sent by the owner of the second website from the second text device; producing, on the one or more hardware servers, a first action code comprising a first alpha-numeric string based, at least in part, on the first text message; producing, on the one or more hardware servers, a second action code comprising a second alpha-numeric string based, at least in part, on the second text message; transmitting, by the one or more hardware servers, the first action code to one or more hosting servers, wherein the first action code is used to alter the first website by changing a first price of a first good or service; and transmitting, by the one or more hardware servers, the second action code to the one or more hosting servers, wherein the second action code is used to alter the second website by changing a second price of a second good or service. 9. The method of claim 8, further comprising the steps of:
publishing the altered first website on the one or more hosting servers; and publishing the altered second website on the one or more hosting servers, wherein the first website is different from the second website. 10. The method of claim 8, further comprising the step of producing, on the one or more hardware servers, the first action code based, at least in part, on the first text message and the first user phone number. 11. The method of claim 8, further comprising the step of producing, on the one or more hardware servers, the first action code based, at least in part, on the first text message and the first text phone number, wherein the first text phone number is in a plurality of text phone numbers. 12. The method of claim 8, further comprising the steps of:
the one or more hosting servers transmitting the first action code to a first plugin for the first website; the one or more hosting servers transmitting the second action code to a second plugin for the second website; the first plugin altering the first website based on the first action code; and the second plugin altering the second website based on the second action code. 13. The method of claim 8, wherein the first text phone number is included in a plurality of text phone numbers operated on the one or more hardware servers. 14. A method, comprising the steps of:
providing one or more web services configured to be altered by a text message sent by an owner of the one or more web services from a text device having a user phone number, wherein a text service determines the one or more web services based on the user phone number; receiving the text message at a text phone number of the text service running on one or more hardware servers sent by the owner of the web services from the text device; producing, on the one or more hardware servers, an action code comprising an alpha-numeric string based, at least in part, on the text message; and transmitting, on the one or more hardware servers, the action code to one or more web servers, wherein the one or more web servers alter the one or more web services based on the action code and the one or more web services comprise domain name registration services. 15. The method of claim 14, further comprising the step of producing, on the one or more hardware servers, the action code based, at least in part, on the text message and the user phone number. 16. The method of claim 14, further comprising the step of producing, on the one or more hardware servers, the action code based, at least in part, on the text message and the text phone number. 17. The method of claim 14, further comprising the step of determining, on the one or more hardware servers, an account of the user, in a plurality of users, based on the user phone number. 18. The method of claim 14, wherein the text phone number is included in a plurality of text phone numbers operated on the one or more hardware servers. 19. The method of claim 14, wherein the one or more web services comprise an ability to register a domain name. 20. The method of claim 14, wherein the one or more web services comprise an ability to purchase and install a Secure Sockets Layer (SSL) certificate for a website. | A method is disclosed for dynamically updating the content of a website or a web service via a text message. 
A text service may receive, at a text phone number, the text message sent by a user from a text device. The text device may have a user phone number. In preferred embodiments, the text service may have a plurality of text phone numbers that may be called by a plurality of users. The text service may produce an action code based, at least in part, on the text message, the text phone number called by the user, the user phone number or some combination thereof. The text service may transmit the action code to one or more hosting servers to alter a website which may then be published. In another embodiment, the text service may transmit the action code to web server(s) that may alter a web service for the user.1. A method, comprising the steps of:
hosting a website configured to be altered by a text message sent by an owner of the website from a text device having a user phone number, wherein a text service determines the website to be altered out of a plurality of websites based on the user phone number; receiving the text message at a text phone number of the text service running on one or more hardware servers sent by the owner of the website from the text device; producing, on the one or more hardware servers, an action code comprising an alpha-numeric string based, at least in part, on the text message; and transmitting, by the one or more hardware servers, the action code to one or more hosting servers, wherein the action code is used to alter the website by changing a template of the website. 2. The method of claim 1, further comprising the step of:
publishing the altered website on the one or more hosting servers. 3. The method of claim 1, further comprising the step of producing, on the one or more hardware servers, the action code based, at least in part, on the text message and the user phone number. 4. The method of claim 1, further comprising the step of producing, on the one or more hardware servers, the action code based, at least in part, on the text message and the text phone number. 5. The method of claim 2, further comprising the steps of:
the one or more hosting servers transmitting the action code to a plugin for the website; and the plugin altering the website based on the action code. 6. The method of claim 1, wherein the received text message is in a Short Message Service format or a Multimedia Messaging Service format. 7. The method of claim 1, wherein the text phone number is included in a plurality of text phone numbers operated on the one or more hardware servers. 8. A method, comprising the steps of:
hosting a first website configured to be altered by a first text message sent by an owner of the first website from a first text device having a first user phone number, wherein a text service determines the first website to be altered out of a plurality of websites based on the first user phone number and hosting a second website configured to be altered by a second text message sent by an owner of the second website from a second text device having a second user phone number, wherein the text service determines the second website to be altered out of a plurality of websites based on the second user phone number; receiving the first text message at the first text phone number of the text service running on one or more hardware servers sent by the owner of the first website from the first text device; receiving the second text message at the first text phone number of the text service running on the one or more hardware servers sent by the owner of the second website from the second text device; producing, on the one or more hardware servers, a first action code comprising a first alpha-numeric string based, at least in part, on the first text message; producing, on the one or more hardware servers, a second action code comprising a second alpha-numeric string based, at least in part, on the second text message; transmitting, by the one or more hardware servers, the first action code to one or more hosting servers, wherein the first action code is used to alter the first website by changing a first price of a first good or service; and transmitting, by the one or more hardware servers, the second action code to the one or more hosting servers, wherein the second action code is used to alter the second website by changing a second price of a second good or service. 9. The method of claim 8, further comprising the steps of:
publishing the altered first website on the one or more hosting servers; and publishing the altered second website on the one or more hosting servers, wherein the first website is different from the second website. 10. The method of claim 8, further comprising the step of producing, on the one or more hardware servers, the first action code based, at least in part, on the first text message and the first user phone number. 11. The method of claim 8, further comprising the step of producing, on the one or more hardware servers, the first action code based, at least in part, on the first text message and the first text phone number, wherein the first text phone number is in a plurality of text phone numbers. 12. The method of claim 8, further comprising the steps of:
the one or more hosting servers transmitting the first action code to a first plugin for the first website; the one or more hosting servers transmitting the second action code to a second plugin for the second website; the first plugin altering the first website based on the first action code; and the second plugin altering the second website based on the second action code. 13. The method of claim 8, wherein the first text phone number is included in a plurality of text phone numbers operated on the one or more hardware servers. 14. A method, comprising the steps of:
providing one or more web services configured to be altered by a text message sent by an owner of the one or more web services from a text device having a user phone number, wherein a text service determines the one or more web services based on the user phone number; receiving the text message at a text phone number of the text service running on one or more hardware servers sent by the owner of the web services from the text device; producing, on the one or more hardware servers, an action code comprising an alpha-numeric string based, at least in part, on the text message; and transmitting, on the one or more hardware servers, the action code to one or more web servers, wherein the one or more web servers alter the one or more web services based on the action code and the one or more web services comprise domain name registration services. 15. The method of claim 14, further comprising the step of producing, on the one or more hardware servers, the action code based, at least in part, on the text message and the user phone number. 16. The method of claim 14, further comprising the step of producing, on the one or more hardware servers, the action code based, at least in part, on the text message and the text phone number. 17. The method of claim 14, further comprising the step of determining, on the one or more hardware servers, an account of the user, in a plurality of users, based on the user phone number. 18. The method of claim 14, wherein the text phone number is included in a plurality of text phone numbers operated on the one or more hardware servers. 19. The method of claim 14, wherein the one or more web services comprise an ability to register a domain name. 20. The method of claim 14, wherein the one or more web services comprise an ability to purchase and install a Secure Sockets Layer (SSL) certificate for a website. | 2,400 |
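The SMS-to-action-code flow recited across these claims might look like the sketch below. The routing table, phone numbers, message grammar, and action-code format are all invented for illustration; the claims do not specify them.

```python
# Hypothetical text service: the owner's user phone number selects which
# website is altered, and the SMS body is turned into an alpha-numeric
# action code that the hosting server applies (e.g. change a price).
OWNER_SITES = {
    "+15550000001": "site-alpha",  # invented numbers and site names
    "+15550000002": "site-beta",
}

def produce_action_code(message: str, user_number: str) -> str:
    site = OWNER_SITES[user_number]        # site determined by user phone number
    verb, _, argument = message.partition(" ")
    return f"{site}:{verb.upper()}:{argument}"

def dispatch(message: str, user_number: str) -> tuple:
    code = produce_action_code(message, user_number)
    site, _, _ = code.partition(":")
    return (site, code)                    # forwarded to that site's hosting server
```

Keying the lookup on the sender's number is what lets many owners share one text phone number while each alters only their own site or web service.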
7,508 | 7,508 | 15,135,772 | 2,421 | A system and method of delivering video on demand includes a web site for receiving customer requests for video content, locating the requested content on one of a plurality of distributed video servers, and arranging the located content to be distributed to the customer's set top box via a broadband connection. | 1. A method comprising:
receiving, from a computer of a user, a network address corresponding to a set-top box; receiving information for selecting video content stored in one of a plurality of video servers; determining a particular one of the plurality of video servers storing the selected video content; and forwarding the network address to the particular video server storing the selected video content, wherein the particular video server is configured to directly communicate with the set-top box for delivery of the selected video content. 2. A method according to claim 1, wherein the particular video server is configured to attach a piece of active code to the selected video content, such that, upon execution of the active code, the set-top box deletes the selected video content from the set-top box. 3. A method according to claim 1, wherein the set-top box deletes the selected video content from the set-top box after a set amount of time. 4. A method according to claim 1, wherein the set-top box deletes the video content from the set-top box after viewing the selected video content. 5. A method according to claim 1, wherein the network address corresponds to an internet protocol (IP) address. 6. A method according to claim 1, further comprising:
aggregating a list of available video content associated with the plurality of video servers; and generating the list to the user for selecting the selected video content. 7. An apparatus comprising:
a processor configured to receive, from a computer of a user, a network address corresponding to a set-top box, and to receive information for selecting video content stored in one of a plurality of video servers, wherein the processor is further configured to determine a particular one of the plurality of video servers storing the selected video content, and to forward the network address to the particular video server storing the selected video content, the particular video server being configured to directly communicate with the set-top box for delivery of the selected video content. 8. An apparatus according to claim 7, wherein the particular video server is configured to attach a piece of active code to the selected video content, such that, upon execution of the active code, the set-top box deletes the selected video content from the set-top box. 9. An apparatus according to claim 7, wherein the set-top box deletes the selected video content from the set-top box after a set amount of time. 10. An apparatus according to claim 7, wherein the set-top box deletes the selected video content from the set-top box after viewing the selected video content. 11. An apparatus according to claim 7, wherein the network address corresponds to an internet protocol (IP) address. 12. An apparatus according to claim 7, wherein the processor is further configured to aggregate a list of available video content associated with the plurality of video servers, and to generate the list to the user for selecting the selected video content. 13. A system comprising:
a plurality of video content servers being respectively configured to store video content; and a web server configured to communicate with the plurality of video content servers and to receive, from a computer of a user, a network address corresponding to a set-top box, wherein the web server is further configured to forward the network address to a particular one of the plurality of video servers based on user selection, via the web server, of video content, and wherein the particular video server is further configured to directly establish a connection to the set-top box for delivery of the selected video content. 14. A system according to claim 13, wherein the particular video server is configured to attach a piece of active code to the selected video content, such that, upon execution of the active code, the set-top box deletes the selected video content from the set-top box. 15. A system according to claim 13, wherein the set-top box deletes the selected video content from the set-top box after a set amount of time. 16. A system according to claim 13, wherein the set-top box deletes the selected video content from the set-top box after viewing the selected video content. 17. A system according to claim 13, wherein the network address corresponds to an internet protocol (IP) address. 18. A system according to claim 13, wherein the web server is further configured to aggregate a list of available video content associated with the plurality of video servers, and to provide the list to the user for selecting the selected video content. | A system and method of delivering video on demand includes a web site for receiving customer requests for video content, locating the requested content on one of a plurality of distributed video servers, and arranging the located content to be distributed to the customer's set top box via a broadband connection.1. A method comprising:
receiving, from a computer of a user, a network address corresponding to a set-top box; receiving information for selecting video content stored in one of a plurality of video servers; determining a particular one of the plurality of video servers storing the selected video content; and forwarding the network address to the particular video server storing the selected video content, wherein the particular video server is configured to directly communicate with the set-top box for delivery of the selected video content. 2. A method according to claim 1, wherein the particular video server is configured to attach a piece of active code to the selected video content, such that, upon execution of the active code, the set-top box deletes the selected video content from the set-top box. 3. A method according to claim 1, wherein the set-top box deletes the selected video content from the set-top box after a set amount of time. 4. A method according to claim 1, wherein the set-top box deletes the video content from the set-top box after viewing the selected video content. 5. A method according to claim 1, wherein the network address corresponds to an internet protocol (IP) address. 6. A method according to claim 1, further comprising:
aggregating a list of available video content associated with the plurality of video servers; and providing the list to the user for selecting the selected video content. 7. An apparatus comprising:
a processor configured to receive, from a computer of a user, a network address corresponding to a set-top box, and to receive information for selecting video content stored in one of a plurality of video servers, wherein the processor is further configured to determine a particular one of the plurality of video servers storing the selected video content, and to forward the network address to the particular video server storing the selected video content, the particular video server being configured to directly communicate with the set-top box for delivery of the selected video content. 8. An apparatus according to claim 7, wherein the particular video server is configured to attach a piece of active code to the selected video content, such that, upon execution of the active code, the set-top box deletes the selected video content from the set-top box. 9. An apparatus according to claim 7, wherein the set-top box deletes the selected video content from the set-top box after a set amount of time. 10. An apparatus according to claim 7, wherein the set-top box deletes the selected video content from the set-top box after viewing the selected video content. 11. An apparatus according to claim 7, wherein the network address corresponds to an internet protocol (IP) address. 12. An apparatus according to claim 7, wherein the processor is further configured to aggregate a list of available video content associated with the plurality of video servers, and to provide the list to the user for selecting the selected video content. 13. A system comprising:
a plurality of video content servers being respectively configured to store video content; and a web server configured to communicate with the plurality of video content servers and to receive, from a computer of a user, a network address corresponding to a set-top box, wherein the web server is further configured to forward the network address to a particular one of the plurality of video servers based on user selection, via the web server, of video content, and wherein the particular video server is further configured to directly establish a connection to the set-top box for delivery of the selected video content. 14. A system according to claim 13, wherein the particular video server is configured to attach a piece of active code to the selected video content, such that, upon execution of the active code, the set-top box deletes the selected video content from the set-top box. 15. A system according to claim 13, wherein the set-top box deletes the selected video content from the set-top box after a set amount of time. 16. A system according to claim 13, wherein the set-top box deletes the selected video content from the set-top box after viewing the selected video content. 17. A system according to claim 13, wherein the network address corresponds to an internet protocol (IP) address. 18. A system according to claim 13, wherein the web server is further configured to aggregate a list of available video content associated with the plurality of video servers, and to provide the list to the user for selecting the selected video content. | 2,400 |
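The delivery flow claimed in the video-on-demand record above — a web server receives a set-top box network address and a content selection, determines which of several video servers stores that content, and forwards the address so the chosen server can deliver directly — can be sketched in a few lines. This is an illustrative sketch only, not part of the patent text; the catalog contents and all names (`CATALOG`, `route_request`) are hypothetical.

```python
# Illustrative sketch of the claimed routing step: the web server maps the
# selected content to the video server that stores it, then forwards the
# set-top box's network address to that server for direct delivery.
# Catalog contents and names are hypothetical, not from the patent.

CATALOG = {
    # video id -> network address of the video server storing it
    "video-A": "10.0.0.5",
    "video-B": "10.0.0.7",
}

def route_request(stb_address: str, video_id: str) -> dict:
    """Return the forwarding message the web server would send."""
    server = CATALOG.get(video_id)
    if server is None:
        raise KeyError(f"no video server stores {video_id!r}")
    # The chosen video server receives the set-top box address and is then
    # expected to open a direct connection to deliver the content.
    return {"server": server, "set_top_box": stb_address, "content": video_id}

print(route_request("192.0.2.20", "video-A"))
```

Note the web server never touches the video bytes in this model; it only brokers the address exchange, which is what lets delivery scale across many distributed video servers.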
7,509 | 7,509 | 13,341,865 | 2,432 | An online service may maintain or create data for a user, and a user may be allowed to exert control over how the data are used. In one example, there may be several categories of data, and the user may be able to specify who may use the data, and the purpose for which the data may be used. Additionally, a user may be able to see how many of his “friends” (or other contacts) have extended trust to a particular entity, which may aid the user in making a decision about whether to extend trust to that entity. User interfaces may be provided to allow users to specify how their data are to be used. | 1. A computer-readable medium having executable instructions to control use of data that is maintained on or by a service, the executable instructions, when executed by a computer, causing the computer to perform acts comprising:
presenting a user interface to a user, said user interface indicating a plurality of categories of data and a plurality of purposes for which said data can be used; receiving, from said user, an indication of usage restrictions on data that said service maintains for said user, said indication indicating, for each combination of a category and a purpose, whether an entity is permitted to use data falling into said category for said purpose; applying said usage restrictions to data that said service maintains for said user; and enforcing said usage restrictions. 2. The computer-readable medium of claim 1, said entity being said service. 3. The computer-readable medium of claim 1, data subject to said usage restrictions being maintained at a site operated by an operator of said service. 4. The computer-readable medium of claim 1, data subject to said usage restrictions comprising a cookie that is maintained on said user's computer. 5. The computer-readable medium of claim 1, said categories comprising said user's contact information, said user's demographic information, or said user's activity with the service. 6. The computer-readable medium of claim 1, one of said categories comprising a photo album or other collection of data items defined by said user. 7. The computer-readable medium of claim 1, information on how many of said user's friends or contacts trust said entity being visible on said user interface to assist said user in determining whether to trust said entity. 8. The computer-readable medium of claim 7, information on said user's friends' or contacts' trust of said entity being limited by restrictions that said friends or said contacts have placed on use of trust data. 9. The computer-readable medium of claim 1, said enforcing of said usage restrictions comprising:
receiving a request to use data that said service maintains for said user; and granting or denying said request based on whether said request complies with said usage restrictions. 10. A method of allowing a user to control use of data on an online service, the method comprising:
using a processor to perform acts comprising:
presenting a user interface to a user, said user interface indicating a plurality of entities and including information on how many of said user's friends or contacts trust said entities to assist said user in determining whether to trust said entities;
receiving, from said user, indications of which of said entities said user will allow to use data that said online service manages for said user;
applying, to said data, usage restrictions based on said indications; and
enforcing said usage restrictions. 11. The method of claim 10, said user interface presenting, for each of said plurality of entities, categories of data, said indications indicating which categories of said user's data each of said entities is allowed to use. 12. The method of claim 11, said user interface further presenting a plurality of purposes, said indications indicating, for each entity and for each category, a purpose for which the entity may use the category of said user's data. 13. The method of claim 11, said categories comprising said user's contact information, said user's demographic information, or said user's activity with the service. 14. The method of claim 11, one of said categories comprising a photo album or other collection of data items defined by said user. 15. The method of claim 10, said information on how many of said user's friends or contacts trust said entities being limited by restrictions that said friends or said contacts have placed on use of trust data. 16. The method of claim 10, said enforcing of said usage restrictions comprising:
receiving a request to use data that said online service maintains for said user; and granting or denying said request based on whether said request complies with said usage restrictions. 17. A system that allows a user to control use of data on a service, the system comprising:
a memory; a processor; a display; and a component that is stored in said memory, that executes on said processor, and that displays, on said display, a user interface that shows categories of data and purposes for which data can be used, said user interface allowing a user of said service to provide an indication, for an entity, which categories of data that said service maintains for said user can be used by said entity and for which purposes data in each category can be used, said component applying said indication to said data that said service maintains for said user, said component enforcing, based on said indication, restrictions on how said entity can use said data that said service maintains for said user, said user interface indicating, for said entity, how many friends or contacts of said user trust said entity. 18. The system of claim 17, information on said user's friends' or contacts' trust of said entity being limited by restrictions that said friends or said contacts have placed on use of trust data. 19. The system of claim 17, said categories comprising said user's contact information, said user's demographic information, or said user's activity with the service. 20. The system of claim 17, one of said categories comprising a photo album or other collection of data items defined by said user. | An online service may maintain or create data for a user, and a user may be allowed to exert control over how the data are used. In one example, there may be several categories of data, and the user may be able to specify who may use the data, and the purpose for which the data may be used. Additionally, a user may be able to see how many of his “friends” (or other contacts) have extended trust to a particular entity, which may aid the user in making a decision about whether to extend trust to that entity. User interfaces may be provided to allow users to specify how their data are to be used.1. 
A computer-readable medium having executable instructions to control use of data that is maintained on or by a service, the executable instructions, when executed by a computer, causing the computer to perform acts comprising:
presenting a user interface to a user, said user interface indicating a plurality of categories of data and a plurality of purposes for which said data can be used; receiving, from said user, an indication of usage restrictions on data that said service maintains for said user, said indication indicating, for each combination of a category and a purpose, whether an entity is permitted to use data falling into said category for said purpose; applying said usage restrictions to data that said service maintains for said user; and enforcing said usage restrictions. 2. The computer-readable medium of claim 1, said entity being said service. 3. The computer-readable medium of claim 1, data subject to said usage restrictions being maintained at a site operated by an operator of said service. 4. The computer-readable medium of claim 1, data subject to said usage restrictions comprising a cookie that is maintained on said user's computer. 5. The computer-readable medium of claim 1, said categories comprising said user's contact information, said user's demographic information, or said user's activity with the service. 6. The computer-readable medium of claim 1, one of said categories comprising a photo album or other collection of data items defined by said user. 7. The computer-readable medium of claim 1, information on how many of said user's friends or contacts trust said entity being visible on said user interface to assist said user in determining whether to trust said entity. 8. The computer-readable medium of claim 7, information on said user's friends' or contacts' trust of said entity being limited by restrictions that said friends or said contacts have placed on use of trust data. 9. The computer-readable medium of claim 1, said enforcing of said usage restrictions comprising:
receiving a request to use data that said service maintains for said user; and granting or denying said request based on whether said request complies with said usage restrictions. 10. A method of allowing a user to control use of data on an online service, the method comprising:
using a processor to perform acts comprising:
presenting a user interface to a user, said user interface indicating a plurality of entities and including information on how many of said user's friends or contacts trust said entities to assist said user in determining whether to trust said entities;
receiving, from said user, indications of which of said entities said user will allow to use data that said online service manages for said user;
applying, to said data, usage restrictions based on said indications; and
enforcing said usage restrictions. 11. The method of claim 10, said user interface presenting, for each of said plurality of entities, categories of data, said indications indicating which categories of said user's data each of said entities is allowed to use. 12. The method of claim 11, said user interface further presenting a plurality of purposes, said indications indicating, for each entity and for each category, a purpose for which the entity may use the category of said user's data. 13. The method of claim 11, said categories comprising said user's contact information, said user's demographic information, or said user's activity with the service. 14. The method of claim 11, one of said categories comprising a photo album or other collection of data items defined by said user. 15. The method of claim 10, said information on how many of said user's friends or contacts trust said entities being limited by restrictions that said friends or said contacts have placed on use of trust data. 16. The method of claim 10, said enforcing of said usage restrictions comprising:
receiving a request to use data that said online service maintains for said user; and granting or denying said request based on whether said request complies with said usage restrictions. 17. A system that allows a user to control use of data on a service, the system comprising:
a memory; a processor; a display; and a component that is stored in said memory, that executes on said processor, and that displays, on said display, a user interface that shows categories of data and purposes for which data can be used, said user interface allowing a user of said service to provide an indication, for an entity, which categories of data that said service maintains for said user can be used by said entity and for which purposes data in each category can be used, said component applying said indication to said data that said service maintains for said user, said component enforcing, based on said indication, restrictions on how said entity can use said data that said service maintains for said user, said user interface indicating, for said entity, how many friends or contacts of said user trust said entity. 18. The system of claim 17, information on said user's friends' or contacts' trust of said entity being limited by restrictions that said friends or said contacts have placed on use of trust data. 19. The system of claim 17, said categories comprising said user's contact information, said user's demographic information, or said user's activity with the service. 20. The system of claim 17, one of said categories comprising a photo album or other collection of data items defined by said user. | 2,400 |
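The per-(category, purpose) usage restrictions claimed in the data-control record above (application 13,341,865) amount to a small permission matrix: an entity's request to use the user's data is granted only if the user permitted that combination of data category and purpose. A hypothetical sketch follows — the category and purpose names are invented, not from the patent, and real enforcement would run inside the online service.

```python
# Illustrative sketch of enforcing the claimed usage restrictions: the user
# marks, for each (category, purpose) combination, whether an entity may use
# data in that category for that purpose; unlisted combinations are denied.
# Category/purpose names are hypothetical.

user_restrictions = {
    ("contact_info", "service_operation"): True,
    ("contact_info", "advertising"): False,
    ("activity", "advertising"): True,
}

def request_allowed(category: str, purpose: str) -> bool:
    """Grant or deny a request based on the user's stated restrictions."""
    # Deny by default: combinations the user never permitted are blocked.
    return user_restrictions.get((category, purpose), False)

print(request_allowed("contact_info", "service_operation"))
```

The deny-by-default lookup mirrors the claim's enforcement step ("granting or denying said request based on whether said request complies with said usage restrictions"): a category or purpose the user never addressed is treated as unpermitted rather than open.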
7,510 | 7,510 | 15,345,584 | 2,492 | Improved techniques for managing enterprise applications on mobile devices are described herein. Each enterprise mobile application running on the mobile device has an associated policy through which it interacts with its environment. The policy selectively blocks or allows activities involving the enterprise application in accordance with rules established by the enterprise. Together, the enterprise applications running on the mobile device form a set of managed applications. Managed applications are typically allowed to exchange data with other managed applications, but are blocked from exchanging data with other applications, such as the user's own personal applications. Policies may be defined to manage data sharing, mobile resource management, application specific information, networking and data access solutions, device cloud and transfer, dual mode application software, enterprise app store access, and virtualized application and resources, among other things. | 1. A method of managing applications on a mobile device, comprising:
executing, on the mobile device, a client agent application configured to enforce one or more policy files of a mobile device management system, wherein each policy file defines one or more access controls enforced by the mobile device management system when one or more applications are executing locally on the mobile device, and wherein the client agent application is further configured to wirelessly communicate with one or more applications executing on a remote computing device and presented on a display of the mobile device. 2. The method of claim 1, wherein the client agent is configured to facilitate the one or more remote applications by:
receiving input from a user intended for a particular remote application; passing the user input to the particular remote application; receiving data from the particular remote application responsive to the user input; and presenting the data by the client agent application on the display of the mobile device. 3. The method of claim 2, wherein receiving data from the particular remote application comprises receiving the data via a remote presentation protocol, and wherein the received data comprises output from the remote application to update a graphical user interface presented by the client agent on the display of the mobile device. 4. The method of claim 1, further comprising:
applying, by the client agent application, a first set of one or more policy files when an application is executing locally on the mobile device; and applying, by the client agent application, a second set of one or more policy files when a remote application is presented on the mobile device. 5. The method of claim 1, further comprising automatically determining, by the client agent application, whether to initiate execution of a user-requested application locally or remotely, based on one or more policy files identifying whether or not each of one or more applications comprising the user-requested application is permitted to run locally on the mobile device. 6. The method of claim 1, further comprising:
receiving first user input requesting execution of a first application on the mobile device, wherein the first application is a local application; executing, responsive to the first user input, the first application according to a first set of policy files; receiving second user input requesting execution of a second application on the mobile device, wherein the second application is a local application; executing, responsive to the second user input, the second application according to a second set of policy files; receiving third user input requesting execution of a third application on the mobile device, wherein the third application is a remote application; responsive to the third user input, initiating remote execution of the third application on the remote computing device according to a third set of policy files. 7. The method of claim 1, further comprising:
receiving one or more updated policy files replacing a corresponding one or more existing policy files stored on the mobile device; and updating the access controls enforced by the mobile device management system according to the one or more updated policy files. 8. The method of claim 7, wherein updating the access controls comprises automatically removing from the mobile device a local application. 9. The method of claim 7, wherein updating the access controls comprises automatically deleting user data associated with the removed local application. 10. A mobile device comprising a processor configured to execute, based on instructions stored in a memory, a client agent application configured to enforce one or more policy files of a mobile device management system, wherein each policy file defines one or more access controls enforced by the mobile device management system when one or more applications are executing locally on the mobile device, and wherein the client agent application is further configured to wirelessly communicate with one or more applications executing on a remote computing device and presented on a display of the mobile device. 11. One or more non-transitory computer readable media storing computer executable instructions that, when executed, cause a system to manage applications on a mobile device by:
executing, on the mobile device, a client agent application configured to enforce one or more policy files of a mobile device management system, wherein each policy file defines one or more access controls enforced by the mobile device management system when one or more applications are executing locally on the mobile device, and wherein the client agent application is further configured to wirelessly communicate with one or more applications executing on a remote computing device and presented on a display of the mobile device. 12. The computer readable media of claim 11, wherein the client agent is configured to facilitate the one or more remote applications by:
receiving input from a user intended for a particular remote application; passing the user input to the particular remote application; receiving data from the particular remote application responsive to the user input; and presenting the data for display by the client agent application. 13. The computer readable media of claim 12, wherein receiving data from the particular remote application comprises receiving the data via a remote presentation protocol, and wherein the received data comprises output from the remote application to update a graphical user interface presented by the client agent on the display of the mobile device. 14. The computer readable media of claim 11, wherein the instructions further cause the system to manage applications on a mobile device by:
applying, by the client agent application, a first set of one or more policy files when an application is executing locally on the mobile device; and applying, by the client agent application, a second set of one or more policy files when a remote application is presented on the mobile device. 15. The computer readable media of claim 11, wherein the instructions further cause the system to manage applications on a mobile device by automatically determining, by the client agent application, whether to initiate execution of a user-requested application locally or remotely, on an application-by-application basis, based on one or more policy files identifying whether or not each of one or more applications comprising the user-requested application is permitted to run locally on the mobile device. 16. The computer readable media of claim 11, wherein the instructions further cause the system to manage applications on a mobile device by:
receiving first user input requesting execution of a first application on the mobile device, wherein the first application is a local application; executing, responsive to the first user input, the first application according to a first set of policy files; receiving second user input requesting execution of a second application on the mobile device, wherein the second application is a local application; executing, responsive to the second user input, the second application according to a second set of policy files; receiving third user input requesting execution of a third application on the mobile device, wherein the third application is a remote application; responsive to the third user input, initiating remote execution of the third application on the remote computing device according to a third set of policy files. 17. The computer readable media of claim 11, wherein the instructions further cause the system to manage applications on a mobile device by:
receiving one or more updated policy files replacing a corresponding one or more existing policy files stored on the mobile device; and updating the access controls enforced by the mobile device management system according to the one or more updated policy files. 18. The computer readable media of claim 17, wherein updating the access controls comprises automatically removing from the mobile device a local application. 19. The computer readable media of claim 17, wherein updating the access controls comprises automatically deleting user data associated with the removed local application. 20. The mobile device of claim 10, wherein the instructions further cause the mobile device to manage applications by:
applying, by the client agent application, a first set of one or more policy files when an application is executing locally on the mobile device; and applying, by the client agent application, a second set of one or more policy files when a remote application is presented on the mobile device. | Improved techniques for managing enterprise applications on mobile devices are described herein. Each enterprise mobile application running on the mobile device has an associated policy through which it interacts with its environment. The policy selectively blocks or allows activities involving the enterprise application in accordance with rules established by the enterprise. Together, the enterprise applications running on the mobile device form a set of managed applications. Managed applications are typically allowed to exchange data with other managed applications, but are blocked from exchanging data with other applications, such as the user's own personal applications. Policies may be defined to manage data sharing, mobile resource management, application specific information, networking and data access solutions, device cloud and transfer, dual mode application software, enterprise app store access, and virtualized application and resources, among other things.1. A method of managing applications on a mobile device, comprising:
executing, on the mobile device, a client agent application configured to enforce one or more policy files of a mobile device management system, wherein each policy file defines one or more access controls enforced by the mobile device management system when one or more applications are executing locally on the mobile device, and wherein the client agent application is further configured to wirelessly communicate with one or more applications executing on a remote computing device and presented on a display of the mobile device. 2. The method of claim 1, wherein the client agent is configured to facilitate the one or more remote applications by:
receiving input from a user intended for a particular remote application; passing the user input to the particular remote application; receiving data from the particular remote application responsive to the user input; and presenting the data by the client agent application on the display of the mobile device. 3. The method of claim 2, wherein receiving data from the particular remote application comprises receiving the data via a remote presentation protocol, and wherein the received data comprises output from the remote application to update a graphical user interface presented by the client agent on the display of the mobile device. 4. The method of claim 1, further comprising:
applying, by the client agent application, a first set of one or more policy files when an application is executing locally on the mobile device; and applying, by the client agent application, a second set of one or more policy files when a remote application is presented on the mobile device. 5. The method of claim 1, further comprising automatically determining, by the client agent application, whether to initiate execution of a user-requested application locally or remotely, based on one or more policy files identifying whether or not each of one or more applications comprising the user-requested application is permitted to run locally on the mobile device. 6. The method of claim 1, further comprising:
receiving first user input requesting execution of a first application on the mobile device, wherein the first application is a local application; executing, responsive to the first user input, the first application according to a first set of policy files; receiving second user input requesting execution of a second application on the mobile device, wherein the second application is a local application; executing, responsive to the second user input, the second application according to a second set of policy files; receiving third user input requesting execution of a third application on the mobile device, wherein the third application is a remote application; responsive to the third user input, initiating remote execution of the third application on the remote computing device according to a third set of policy files. 7. The method of claim 1, further comprising:
receiving one or more updated policy files replacing a corresponding one or more existing policy files stored on the mobile device; and updating the access controls enforced by the mobile device management system according to the one or more updated policy files. 8. The method of claim 7, wherein updating the access controls comprises automatically removing from the mobile device a local application. 9. The method of claim 7, wherein updating the access controls comprises automatically deleting user data associated with the removed local application. 10. A mobile device comprising a processor configured to execute, based on instructions stored in a memory, a client agent application configured to enforce one or more policy files of a mobile device management system, wherein each policy file defines one or more access controls enforced by the mobile device management system when one or more applications are executing locally on the mobile device, and wherein the client agent application is further configured to wirelessly communicate with one or more applications executing on a remote computing device and presented on a display of the mobile device. 11. One or more non-transitory computer readable media storing computer executable instructions that, when executed, cause a system to manage applications on a mobile device by:
executing, on the mobile device, a client agent application configured to enforce one or more policy files of a mobile device management system, wherein each policy file defines one or more access controls enforced by the mobile device management system when one or more applications are executing locally on the mobile device, and wherein the client agent application is further configured to wirelessly communicate with one or more applications executing on a remote computing device and presented on a display of the mobile device. 12. The computer readable media of claim 11, wherein the client agent is configured to facilitate the one or more remote applications by:
receiving input from a user intended for a particular remote application; passing the user input to the particular remote application; receiving data from the particular remote application responsive to the user input; and presenting the data for display by the client agent application. 13. The computer readable media of claim 12, wherein receiving data from the particular remote application comprises receiving the data via a remote presentation protocol, and wherein the received data comprises output from the remote application to update a graphical user interface presented by the client agent on the display of the mobile device. 14. The computer readable media of claim 11, wherein the instructions further cause the system to manage applications on a mobile device by:
applying, by the client agent application, a first set of one or more policy files when an application is executing locally on the mobile device; and applying, by the client agent application, a second set of one or more policy files when a remote application is presented on the mobile device. 15. The computer readable media of claim 11, wherein the instructions further cause the system to manage applications on a mobile device by automatically determining, by the client agent application, whether to initiate execution of a user-requested application locally or remotely, on an application by application basis, based on one or more policy files identifying whether or not each of one or more applications comprising the user-requested application is permitted to run locally on the mobile device. 16. The computer readable media of claim 11, wherein the instructions further cause the system to manage applications on a mobile device by:
receiving first user input requesting execution of a first application on the mobile device, wherein the first application is a local application; executing, responsive to the first user input, the first application according to a first set of policy files; receiving second user input requesting execution of a second application on the mobile device, wherein the second application is a local application; executing, responsive to the second user input, the second application according to a second set of policy files; receiving third user input requesting execution of a third application on the mobile device, wherein the third application is a remote application; responsive to the third user input, initiating remote execution of the third application on the remote computing device according to a third set of policy files. 17. The computer readable media of claim 11, wherein the instructions further cause the system to manage applications on a mobile device by:
receiving one or more updated policy files replacing a corresponding one or more existing policy files stored on the mobile device; and updating the access controls enforced by the mobile device management system according to the one or more updated policy files. 18. The computer readable media of claim 17, wherein updating the access controls comprises automatically removing from the mobile device a local application. 19. The computer readable media of claim 17, wherein updating the access controls comprises automatically deleting user data associated with the removed local application. 20. The mobile device of claim 10, wherein the instructions further cause the mobile device to manage applications by:
applying, by the client agent application, a first set of one or more policy files when an application is executing locally on the mobile device; and applying, by the client agent application, a second set of one or more policy files when a remote application is presented on the mobile device. | 2,400 |
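The local/remote execution decision recited in claims 5, 6, 15, and 16 above reduces to a per-application policy lookup by the client agent. A minimal sketch in Python, with the policy-file shape and all names assumed for illustration (none come from the patent):

```python
# Sketch of the policy check in claims 5/6/15/16: the client agent decides,
# per application, whether a requested app may run locally on the mobile
# device or must be launched on a remote computing device.
# Policy files are modeled as dicts; the "local_allowed" key is hypothetical.

def decide_execution(app_name, policy_files):
    """Return 'local' only if every policy file permits local execution;
    otherwise fall back to remote execution (the conservative default)."""
    for policy in policy_files:
        if not policy.get(app_name, {}).get("local_allowed", False):
            return "remote"
    return "local"

policies = [
    {"mail": {"local_allowed": True}, "vault": {"local_allowed": False}},
]
print(decide_execution("mail", policies))   # -> local
print(decide_execution("vault", policies))  # -> remote
```

Claims 7-9 and 17-19 then cover replacing the policy files and re-running this kind of check, removing a local application (and its user data) that is no longer permitted.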
7,511 | 7,511 | 15,015,631 | 2,448 | Social media and data sharing controls may be provided. Upon receiving a request to transmit an element of data to a recipient, a determination may be made as to whether the recipient is appropriate to receive the element of data. In response to determining that the recipient is appropriate to receive the element of data, the element of data may be transmitted. In response to determining that the recipient is not appropriate to receive the element of data, at least one remedial action may be performed. | 1-20. (canceled) 21. A method comprising:
receiving a request to transmit an element of data to a recipient on behalf of a sender, the request identifying a service through which the element of data is requested to be shared; determining whether the element of data is allowed to be shared through the service based upon an analysis of a content of the element of data; identifying a contact record associated with the recipient; determining whether the recipient is appropriate to receive the element of data based upon an identity of the service and an identity of the recipient; in response to determining that the recipient is appropriate to receive the element of data and that the element of data is allowed to be shared, causing the element of data to be transmitted; and in response to determining that the recipient is not appropriate to receive the element of data, performing at least one remedial action. 22. The method of claim 21, further comprising identifying a publicly accessible server through which the element of data is requested to be shared based upon the service identified by the request. 23. The method of claim 22, wherein determining whether the element of data is allowed to be shared through the service is based on an analysis of a network address of the publicly accessible server through which the element of data is requested to be shared. 24. The method of claim 21, wherein determining whether the element of data is allowed to be shared through the service further comprises determining whether the content of the element of data contains a keyword associated with a professional context of the sender. 25. The method of claim 24, wherein determining whether the content of the element of data contains a keyword associated with the professional context of the sender further comprises determining whether the content includes at least one of “Confidential,” or “Privileged.” 26. 
The method of claim 24, wherein determining whether the content of the element of data contains a keyword associated with the professional context of the sender further comprises determining whether the content includes a predetermined term specified in a compliance rule. 27. The method of claim 21, wherein determining whether the element of data is allowed to be shared through the service further comprises determining whether the service is designated as a professional service or a personal service. 28. The method of claim 21, wherein performing the at least one remedial action further comprises at least one of: requesting user confirmation of the request to transmit the element of data to the recipient, logging the request to transmit the element of data, or preventing sharing of the element of data. 29. A system comprising:
a user device; and an agent executed by the user device, the agent, when executed, configured to cause the user device to at least:
receive a request to transmit an element of data to a recipient on behalf of a sender, the request identifying a service through which the element of data is requested to be shared;
determine whether the element of data is allowed to be shared through the service based upon an analysis of a content of the element of data;
identify a contact record associated with the recipient;
determine whether the recipient is appropriate to receive the element of data based upon an identity of the service and an identity of the recipient;
in response to determining that the recipient is appropriate to receive the element of data and that the element of data is allowed to be shared, cause the element of data to be transmitted; and
in response to determining that the recipient is not appropriate to receive the element of data, perform at least one remedial action. 30. The system of claim 29, wherein the agent identifies a publicly accessible server through which the element of data is requested to be shared based upon the service identified by the request. 31. The system of claim 30, wherein the agent determines whether the element of data is allowed to be shared through the service is based on an analysis of a network address of the publicly accessible server through which the element of data is requested to be shared. 32. The system of claim 29, wherein the agent determines whether the element of data is allowed to be shared through the service by determining whether the content of the element of data contains a keyword associated with a professional context of the sender. 33. The system of claim 32, wherein the agent determines whether the content of the element of data contains a keyword associated with the professional context of the sender by determining whether the content includes a predetermined term specified in a compliance rule. 34. The system of claim 29, wherein the agent determines whether the element of data is allowed to be shared through the service by determining whether the service is designated as a professional service or a personal service. 35. The system of claim 29, wherein the agent performs the at least one remedial action by at least one of: requesting user confirmation of the request to transmit the element of data to the recipient, logging the request to transmit the element of data, or preventing sharing of the element of data. 36. A non-transitory computer-readable medium embodying a program that, when executed, causes at least one computing device to at least:
receive a request to transmit an element of data to a recipient on behalf of a sender, the request identifying a service through which the element of data is requested to be shared; determine whether the element of data is allowed to be shared through the service based upon an analysis of a content of the element of data; identify a contact record associated with the recipient; determine whether the recipient is appropriate to receive the element of data based upon an identity of the service and an identity of the recipient; in response to determining that the recipient is appropriate to receive the element of data and that the element of data is allowed to be shared, cause the element of data to be transmitted; and in response to determining that the recipient is not appropriate to receive the element of data, perform at least one remedial action. 37. The non-transitory computer-readable medium of claim 36, wherein the program, when executed, determines whether the content of the element of data contains a keyword associated with a professional context of the sender by determining whether the content includes a predetermined term specified in a compliance rule. 38. The non-transitory computer-readable medium of claim 36, wherein the program, when executed, determines whether the element of data is allowed to be shared through the service by determining whether the service is designated as a professional service or a personal service. 39. The non-transitory computer-readable medium of claim 36, wherein the program, when executed, determines whether the element of data is allowed to be shared through the service by determining whether the content of the element of data contains a keyword associated with a professional context of the sender. 40. 
The non-transitory computer-readable medium of claim 36, wherein the program, when executed, performs the at least one remedial action by at least one of: requesting user confirmation of the request to transmit the element of data to the recipient, logging the request to transmit the element of data, or preventing sharing of the element of data. | 2,400 |
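The sharing control recited in claims 21-28 and 36-40 above chains three checks: a content scan for professional-context keywords (claim 25 names "Confidential" and "Privileged"), the service's professional/personal designation (claim 27), and the recipient's appropriateness, with a remedial action on failure (claim 28). A hedged sketch, with hypothetical names and return values:

```python
# Illustrative sketch of the data-sharing gate in claims 21-28.
# Keyword list follows claim 25; everything else is assumed for illustration.

KEYWORDS = ("Confidential", "Privileged")

def check_share(content, service_designation, recipient_ok):
    """Return ('transmit', None) when sharing is allowed, otherwise a
    (remedial_action, reason) pair as in claim 28."""
    # Claim 24/27: professional-context content may not leave via a personal service.
    if any(word in content for word in KEYWORDS) and service_designation == "personal":
        return ("block", "content contains a professional-context keyword")
    # Claim 21: recipient must be appropriate for this service.
    if not recipient_ok:
        return ("confirm", "recipient not recognized for this service")
    return ("transmit", None)

print(check_share("Quarterly photos", "personal", True))     # ('transmit', None)
print(check_share("Privileged draft", "personal", True)[0])  # block
print(check_share("Team update", "professional", False)[0])  # confirm
```

The three remedial actions returned here (block, confirm, and, implicitly, logging) mirror the alternatives listed in claims 28, 35, and 40.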
7,512 | 7,512 | 13,977,756 | 2,487 | A video encoding device includes encoding control means 11 for controlling an inter-PU partition type of a CU to be encoded, based on the maximum number (PA) of motion vectors allowed for an image block having a predetermined area and the number (PB) of motion vectors of an encoded image block contained in the image block having the predetermined area. A video decoding device includes decoding control means for controlling an inter-PU partition type of a CU to be decoded, based on the maximum number (PA) of motion vectors allowed for an image block having a predetermined area and the number (PB) of motion vectors of a decoded image block contained in the image block having the predetermined area. | 1. A video encoding device for encoding video using inter prediction, comprising:
an encoding control unit which controls an inter-PU partition type of a CU to be encoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of an encoded image block contained in the image block having the predetermined area. 2. The video encoding device according to claim 1, wherein the encoding control unit also controls an inter prediction direction of the inter-PU partition type of the CU to be encoded. 3. The video encoding device according to claim 1, further comprising a multiplexer which multiplexes, into a bitstream, data indicative of the predetermined area and the maximum number of motion vectors. 4. The video encoding device according to claim 1, further comprising an entropy encoder,
wherein the encoding control unit causes the entropy encoder to set an inter-PU partition type syntax at a predetermined inter-PU partition type in a PU header layer of the CU to be encoded, and entropy-encode the inter-PU partition type syntax when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is less than the maximum number of motion vectors. 5. The video encoding device according to claim 1, further comprising an entropy encoder,
wherein the encoding control unit causes the entropy encoder not to entropy-encode an inter-PU partition type syntax in a PU header layer of the CU to be encoded when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, while the encoding control unit causes the entropy encoder to entropy-encode the inter-PU partition type syntax in the PU header layer of the CU to be encoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 6. A video decoding device for decoding video using inter prediction, comprising:
a decoding control unit which controls an inter-PU partition type of a CU to be decoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of a decoded image block contained in the image block having the predetermined area. 7. The video decoding device according to claim 6, wherein the decoding control unit also controls an inter prediction direction of the inter-PU partition type of the CU to be decoded. 8. The video decoding device according to claim 6, further comprising a de-multiplexer which de-multiplexes data indicative of the predetermined area and the maximum number of motion vectors from a bitstream. 9. The video decoding device according to claim 6, further comprising an entropy decoder,
wherein the decoding control unit causes the entropy decoder not to entropy-decode an inter-PU partition type syntax in a PU header layer of the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, while the decoding control unit causes the entropy decoder to entropy-decode the inter-PU partition type syntax in the PU header layer of the CU to be decoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 10. The video decoding device according to claim 6, further comprising an entropy decoder,
wherein the decoding control unit determines that there is an error in an access unit accessing a bitstream including the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than the maximum number of motion vectors. 11. A video encoding method for encoding video using inter prediction, comprising:
controlling an inter-PU partition type of a CU to be encoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of an encoded image block contained in the image block having the predetermined area. 12. The video encoding method according to claim 11, further comprising controlling an inter prediction direction of the inter-PU partition type of the CU to be encoded. 13. The video encoding method according to claim 11, further comprising:
multiplexing data indicative of the predetermined area and the maximum number of motion vectors into a bitstream. 14. The video encoding method according to claim 11, wherein an inter-PU partition type syntax in a PU header layer of the CU to be encoded is set in a predetermined inter-PU partition type and entropy-encoded when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is less than the maximum number of motion vectors. 15. The video encoding method according to claim 11, wherein an inter-PU partition type syntax in a PU header layer of the CU to be encoded is not entropy-encoded when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, while the inter-PU partition type syntax in the PU header layer of the CU to be encoded is entropy-encoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 16. A video decoding method for decoding video using inter prediction, comprising:
controlling an inter-PU partition type of a CU to be decoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of a decoded image block contained in the image block having the predetermined area. 17. The video decoding method according to claim 16,
further comprising controlling an inter prediction direction of the inter-PU partition type of the CU to be decoded. 18. The video decoding method according to claim 16, further comprising:
de-multiplexing data indicative of the predetermined area and the maximum number of motion vectors from a bitstream. 19. The video decoding method according to claim 16, wherein an inter-PU partition type syntax in a PU header layer of the CU to be decoded is not entropy-decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, while the inter-PU partition type syntax in the PU header layer of the CU to be decoded is entropy-decoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 20. The video decoding method according to claim 16, wherein it is determined that there is an error in an access unit accessing a bitstream including the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than the maximum number of motion vectors. 21. A non-transitory computer readable information recording medium storing a video encoding program for encoding video using inter prediction that, when executed by a processor, performs a method for:
controlling an inter-PU partition type of a CU to be encoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of an encoded image block contained in the image block having the predetermined area. 22. The computer readable information recording medium according to claim 21, further comprising: controlling an inter prediction direction of the inter-PU partition type of the CU to be encoded. 23. The computer readable information recording medium according to claim 21, further comprising: multiplexing, into a bitstream, data indicative of the predetermined area and the maximum number of motion vectors. 24. The computer readable information recording medium according to claim 21, further comprising: setting an inter-PU partition type syntax at a predetermined inter-PU partition type in a PU header layer of the CU to be encoded, and entropy-encoding the inter-PU partition type syntax when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is less than the maximum number of motion vectors. 25. The computer readable information recording medium according to claim 21, further comprising: inhibiting entropy-encoding an inter-PU partition type syntax in a PU header layer of the CU to be encoded when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, and executing the process of entropy-encoding the inter-PU partition type syntax in the PU header layer of the CU to be encoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 26. 
A non-transitory computer readable information recording medium storing a video decoding program for decoding video using inter prediction which, when executed by a processor, performs a method for: controlling an inter-PU partition type of a CU to be decoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of a decoded image block contained in the image block having the predetermined area. 27. The computer readable information recording medium according to claim 26, further comprising: controlling an inter prediction direction of the inter-PU partition type of the CU to be decoded. 28. The computer readable information recording medium according to claim 26, further comprising: de-multiplexing data indicative of the predetermined area and the maximum number of motion vectors from a bitstream. 29. The computer readable information recording medium according to claim 26, further comprising: inhibiting entropy-decoding an inter-PU partition type syntax in a PU header layer of the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, and executing the process of entropy-decoding the inter-PU partition type syntax in the PU header layer of the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is less than the number obtained by subtracting one from the maximum number of motion vectors. 30. 
The computer readable information recording medium according to claim 26, further comprising: determining that there is an error in an access unit accessing a bitstream including the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than the maximum number of motion vectors. | A video encoding device includes encoding control means 11 for controlling an inter-PU partition type of a CU to be encoded, based on the maximum number (PA) of motion vectors allowed for an image block having a predetermined area and the number (PB) of motion vectors of an encoded image block contained in the image block having the predetermined area. A video decoding device includes decoding control means for controlling an inter-PU partition type of a CU to be decoded, based on the maximum number (PA) of motion vectors allowed for an image block having a predetermined area and the number (PB) of motion vectors of a decoded image block contained in the image block having the predetermined area. 1. A video encoding device for encoding video using inter prediction, comprising:
an encoding control unit which controls an inter-PU partition type of a CU to be encoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of an encoded image block contained in the image block having the predetermined area. 2. The video encoding device according to claim 1, wherein the encoding control unit also controls an inter prediction direction of the inter-PU partition type of the CU to be encoded. 3. The video encoding device according to claim 1, further comprising a multiplexer which multiplexes, into a bitstream, data indicative of the predetermined area and the maximum number of motion vectors. 4. The video encoding device according to claim 1, further comprising an entropy encoder,
wherein the encoding control unit causes the entropy encoder to set an inter-PU partition type syntax at a predetermined inter-PU partition type in a PU header layer of the CU to be encoded, and entropy-encode the inter-PU partition type syntax when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is less than the maximum number of motion vectors. 5. The video encoding device according to claim 1, further comprising an entropy encoder,
wherein the encoding control unit causes the entropy encoder not to entropy-encode an inter-PU partition type syntax in a PU header layer of the CU to be encoded when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, while the encoding control unit causes the entropy encoder to entropy-encode the inter-PU partition type syntax in the PU header layer of the CU to be encoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 6. A video decoding device for decoding video using inter prediction, comprising:
a decoding control unit which controls an inter-PU partition type of a CU to be decoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of a decoded image block contained in the image block having the predetermined area. 7. The video decoding device according to claim 6, wherein the decoding control unit also controls an inter prediction direction of the inter-PU partition type of the CU to be decoded. 8. The video decoding device according to claim 6, further comprising a de-multiplexer which de-multiplexes data indicative of the predetermined area and the maximum number of motion vectors from a bitstream. 9. The video decoding device according to claim 6, further comprising an entropy decoder,
wherein the decoding control unit causes the entropy decoder not to entropy-decode an inter-PU partition type syntax in a PU header layer of the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, while the decoding control unit causes the entropy decoder to entropy-decode the inter-PU partition type syntax in the PU header layer of the CU to be decoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 10. The video decoding device according to claim 6, further comprising an entropy decoder,
wherein the decoding control unit determines that there is an error in an access unit accessing a bitstream including the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than the maximum number of motion vectors. 11. A video encoding method for encoding video using inter prediction, comprising:
controlling an inter-PU partition type of a CU to be encoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of an encoded image block contained in the image block having the predetermined area. 12. The video encoding method according to claim 11, controlling also an inter prediction direction of the inter-PU partition type of the CU to be encoded. 13. The video encoding method according to claim 11, further comprising:
multiplexing data indicative of the predetermined area and the maximum number of motion vectors into a bitstream. 14. The video encoding method according to claim 11, wherein an inter-PU partition type syntax in a PU header layer of the CU to be encoded is set in a predetermined inter-PU partition type and entropy-encoded when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is less than the maximum number of motion vectors. 15. The video encoding method according to claim 11, wherein an inter-PU partition type syntax in a PU header layer of the CU to be encoded is not entropy-encoded when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, while the inter-PU partition type syntax in the PU header layer of the CU to be encoded is entropy-encoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 16. A video decoding method for decoding video using inter prediction, comprising:
controlling an inter-PU partition type of a CU to be decoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of a decoded image block contained in the image block having the predetermined area. 17. The video decoding method according to claim 16,
controlling also an inter prediction direction of the inter-PU partition type of the CU to be decoded. 18. The video decoding method according to claim 16, further comprising:
de-multiplexing data indicative of the predetermined area and the maximum number of motion vectors from a bitstream. 19. The video decoding method according to claim 16, wherein an inter-PU partition type syntax in a PU header layer of the CU to be decoded is not entropy-decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, while the inter-PU partition type syntax in the PU header layer of the CU to be decoded is entropy-decoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 20. The video decoding method according to claim 16, wherein it is determined that there is an error in an access unit accessing a bitstream including the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than the maximum number of motion vectors. 21. A non-transitory computer readable information recording medium storing a video encoding program for encoding video using inter prediction which, when executed by a processor, performs a method for:
controlling an inter-PU partition type of a CU to be encoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of an encoded image block contained in the image block having the predetermined area. 22. The computer readable information recording medium according to claim 21, further comprising: controlling an inter prediction direction of the inter-PU partition type of the CU to be encoded. 23. The computer readable information recording medium according to claim 21, further comprising: multiplexing, into a bitstream, data indicative of the predetermined area and the maximum number of motion vectors. 24. The computer readable information recording medium according to claim 21, further comprising: setting an inter-PU partition type syntax at a predetermined inter-PU partition type in a PU header layer of the CU to be encoded, and entropy-encoding the inter-PU partition type syntax when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is less than the maximum number of motion vectors. 25. The computer readable information recording medium according to claim 21, further comprising: inhibiting entropy-encoding an inter-PU partition type syntax in a PU header layer of the CU to be encoded when the number of motion vectors of the encoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, and executing the process of entropy-encoding the inter-PU partition type syntax in the PU header layer of the CU to be encoded when the number of motion vectors is less than the number obtained by subtracting one from the maximum number of motion vectors. 26. 
A non-transitory computer readable information recording medium storing a video decoding program for decoding video using inter prediction which, when executed by a processor, performs a method for: controlling an inter-PU partition type of a CU to be decoded, based on the maximum number of motion vectors allowed for an image block having a predetermined area and the number of motion vectors of a decoded image block contained in the image block having the predetermined area. 27. The computer readable information recording medium according to claim 26, further comprising: controlling an inter prediction direction of the inter-PU partition type of the CU to be decoded. 28. The computer readable information recording medium according to claim 26, further comprising: de-multiplexing data indicative of the predetermined area and the maximum number of motion vectors from a bitstream. 29. The computer readable information recording medium according to claim 26, further comprising: inhibiting entropy-decoding an inter-PU partition type syntax in a PU header layer of the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than or equal to a number obtained by subtracting one from the maximum number of motion vectors, and executing the process of entropy-decoding the inter-PU partition type syntax in the PU header layer of the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is less than the number obtained by subtracting one from the maximum number of motion vectors. 30. 
The computer readable information recording medium according to claim 26, further comprising: determining that there is an error in an access unit accessing a bitstream including the CU to be decoded when the number of motion vectors of the decoded image block contained in the image block having the predetermined area is greater than the maximum number of motion vectors. | 2,400 |
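The control rule shared by claims 5, 15, and 25 of the record above (entropy-encode the inter-PU partition type syntax only while the motion vector budget for the predetermined area is not nearly exhausted) can be sketched as follows. This is an illustrative reading of the claim language, not code from the patent; the names PA (maximum allowed motion vectors) and PB (motion vectors already consumed) are assumptions borrowed from the abstract.

```python
# Illustrative sketch of the claimed control rule, not an implementation
# from the patent. PA = maximum number of motion vectors allowed for the
# image block having the predetermined area; PB = number of motion vectors
# of already-encoded blocks contained in that area.

def may_signal_partition_type(pa: int, pb: int) -> bool:
    """Per claims 5/15/25: the inter-PU partition type syntax is
    entropy-encoded only when PB is less than PA - 1; otherwise its
    encoding is inhibited and a predetermined partition type is used."""
    return pb < pa - 1

# Example: with at most 4 motion vectors per area, the syntax may still
# be signalled while 2 vectors are in use, but not once 3 are in use.
print(may_signal_partition_type(4, 2))  # True
print(may_signal_partition_type(4, 3))  # False
```

The decoder-side rule in claims 9, 19, and 29 is the mirror image: the same test decides whether the syntax is entropy-decoded from the bitstream or inferred.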
7,513 | 7,513 | 13,411,323 | 2,421 | An exemplary method and apparatus to deliver rich media to wireless hand-held devices. | 1. (canceled) 2. A set-top box (STB) operable to receive a transport stream containing regular programming and co-cast programming and to deliver the co-cast programming to a hand-held device, the STB comprising:
a receiver for receiving the transport stream; a demultiplexer for de-multiplexing the received transport stream into a first portion containing regular programming and a second portion containing co-cast programming; a channel for delivery of the de-multiplexed first portion to a television set; and a wireless transmitter for transmitting the de-multiplexed second portion to a hand-held device. 3. The STB of claim 2, wherein the set top box is operable to deliver co-cast index information to the hand-held device, the transmitted second portion being a function of a selection from the co-cast index information that is made at the hand-held device. 4. The STB of claim 2, wherein the wirelessly-transmitted second portion includes multiple co-cast programs. 5. The STB of claim 2, wherein the wirelessly-transmitted second portion includes regular programming and co-cast programs. 6. A system for delivery of co-cast programming from a content provider to a hand-held device, comprising:
a multiplexer for multiplexing a plurality of programs, including at least one co-cast program, into a transport stream; a set-top box (STB) operable to receive the transport stream, the STB including: a receiver for receiving the transport stream; a demultiplexer for de-multiplexing the received transport stream into a first portion containing regular programming and a second portion containing co-cast programming; a channel for delivery of the de-multiplexed first portion to a television set; and a wireless transmitter for transmitting the de-multiplexed second portion. 7. The system of claim 6, wherein the set top box is operable to deliver co-cast index information, the transmitted second portion being a function of a selection from the co-cast index information that is made at the hand-held device. 8. The system of claim 6, wherein the wirelessly-transmitted second portion includes multiple co-cast programs. 9. The system of claim 6, wherein the wirelessly-transmitted second portion includes regular programs. 10. The system of claim 6, further including a hand-held device to which the second portion is wirelessly delivered. 11. The system of claim 8, further including a hand-held device to which the second portion is wirelessly delivered, the hand-held device including a de-multiplexer to de-multiplex the multiple co-cast programs. 12. The system of claim 9, further including a hand-held device to which the second portion is wirelessly delivered, the hand-held device including a de-multiplexer to de-multiplex the co-cast program. | An exemplary method and apparatus to deliver rich media to wireless hand-held devices. 1. (canceled) 2. A set-top box (STB) operable to receive a transport stream containing regular programming and co-cast programming and to deliver the co-cast programming to a hand-held device, the STB comprising:
a receiver for receiving the transport stream; a demultiplexer for de-multiplexing the received transport stream into a first portion containing regular programming and a second portion containing co-cast programming; a channel for delivery of the de-multiplexed first portion to a television set; and a wireless transmitter for transmitting the de-multiplexed second portion to a hand-held device. 3. The STB of claim 2, wherein the set top box is operable to deliver co-cast index information to the hand-held device, the transmitted second portion being a function of a selection from the co-cast index information that is made at the hand-held device. 4. The STB of claim 2, wherein the wirelessly-transmitted second portion includes multiple co-cast programs. 5. The STB of claim 2, wherein the wirelessly-transmitted second portion includes regular programming and co-cast programs. 6. A system for delivery of co-cast programming from a content provider to a hand-held device, comprising:
a multiplexer for multiplexing a plurality of programs, including at least one co-cast program, into a transport stream; a set-top box (STB) operable to receive the transport stream, the STB including: a receiver for receiving the transport stream; a demultiplexer for de-multiplexing the received transport stream into a first portion containing regular programming and a second portion containing co-cast programming; a channel for delivery of the de-multiplexed first portion to a television set; and a wireless transmitter for transmitting the de-multiplexed second portion. 7. The system of claim 6, wherein the set top box is operable to deliver co-cast index information, the transmitted second portion being a function of a selection from the co-cast index information that is made at the hand-held device. 8. The system of claim 6, wherein the wirelessly-transmitted second portion includes multiple co-cast programs. 9. The system of claim 6, wherein the wirelessly-transmitted second portion includes regular programs. 10. The system of claim 6, further including a hand-held device to which the second portion is wirelessly delivered. 11. The system of claim 8, further including a hand-held device to which the second portion is wirelessly delivered, the hand-held device including a de-multiplexer to de-multiplex the multiple co-cast programs. 12. The system of claim 9, further including a hand-held device to which the second portion is wirelessly delivered, the hand-held device including a de-multiplexer to de-multiplex the co-cast program. | 2,400 |
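As a rough sketch, the set-top box behaviour claimed in the record above (claims 2 and 6) amounts to splitting one received transport stream into two delivery paths: regular programming to the television channel, co-cast programming to the wireless transmitter. The dict-based "co_cast" tag below is purely an assumption for illustration; a real transport stream would be de-multiplexed by program identifiers.

```python
# Hypothetical sketch of the claimed STB routing: split a received
# transport stream into regular programming (first portion, for the
# television channel) and co-cast programming (second portion, for the
# wireless transmitter to a hand-held device).

def route_transport_stream(packets):
    to_television, to_wireless = [], []
    for packet in packets:
        if packet.get("co_cast"):
            to_wireless.append(packet)    # second portion: co-cast programming
        else:
            to_television.append(packet)  # first portion: regular programming
    return to_television, to_wireless

stream = [{"id": 1, "co_cast": False}, {"id": 2, "co_cast": True}]
tv, handheld = route_transport_stream(stream)
print([p["id"] for p in tv], [p["id"] for p in handheld])  # [1] [2]
```

Per claims 11 and 12, the hand-held device itself would then further de-multiplex the wirelessly delivered second portion when it carries multiple co-cast programs.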
7,514 | 7,514 | 13,997,562 | 2,447 | The present invention provides a communications network comprising a plurality of routers and a plurality of communications links which interconnect the routers and a global network management module. The global network management module is in communication with a number of diverse information sources, such as television listings, social networking sites, user preferences, historical data relating to the accessing of content, etc., such that the global network management module can make a prediction as to the likelihood that a particular network traffic event will occur and the demands that such a traffic event will place on the network. If the global network management module decides that the traffic event will occur then it will pre-configure the communications network such that the traffic generated by the event can be carried across the network within pre-defined quality thresholds and without unnecessarily affecting other traffic being carried on the network. | 1. A communications network comprising a plurality of routers; a plurality of communications links, the communications links interconnecting each of the plurality of routers to one or more other routers; a network management database, the network management database storing operational data regarding each of the plurality of routers; and a global network management module, the global network management module being in communication with the network management database and with a plurality of information sources wherein, in operation, the global network management module:
a) analyses information received from one or more of the plurality of information sources; b) determines the probability that a traffic event will occur, based on the information received in step a); and c) reconfigures the communications network if it decides that the traffic event will occur, the network reconfiguration occurring before the occurrence of the traffic event. 2. A communications network according to claim 1, wherein the global network management module reconfigures the communications network in accordance with the characteristics of the traffic that will be generated by the traffic event. 3. A communications network according to claim 1, wherein the global network management module reconfigures the communications network in accordance with quality of service measures that relate to the traffic that will be generated by the traffic event. 4. A communications network according to claim 1, wherein the reconfiguration of the communications network comprises re-routing existing network traffic to other communications links in order to leave one or more communications links that are substantially unloaded in order to be able to carry the traffic that will be generated by the traffic event. 5. A communications network according to claim 1, wherein the reconfiguration of the communications network comprises re-routing existing network traffic to other communications links such that the traffic carried by each of the plurality of communications links is substantially equal. 6. A communications network according to claim 1, wherein the communications network further comprises a plurality of segments, each network segment comprising one or more routers, one or more communications links and a network segment management module. 7. 
A communications network according to claim 6, wherein the communications network further comprises one or more supervisory management modules, wherein each of the network segment management modules is uniquely associated with one of the supervisory management modules. 8. A method of operating a communications network, wherein the communications network comprises: a plurality of routers; a plurality of communications links, the communications links interconnecting each of the plurality of routers to one or more other routers; a network management database, the network management database storing operational data regarding each of the plurality of routers; and a global network management module, the global network management module being in communication with the network management database and with a plurality of information sources, the method comprising the steps of:
i) the global network management module analysing information received from one or more of the plurality of information sources; ii) the global network management module determining the probability that a traffic event will occur, based on the information received in step i); and iii) the global network management module reconfiguring the communications network if it decides that the traffic event will occur, the network reconfiguration occurring before the occurrence of the traffic event. 9. A data carrier device comprising computer executable code for performing a method according to claim 8. | The present invention provides a communications network comprising a plurality of routers and a plurality of communications links which interconnect the routers and a global network management module. The global network management module is in communication with a number of diverse information sources, such as television listings, social networking sites, user preferences, historical data relating to the accessing of content, etc., such that the global network management module can make a prediction as to the likelihood that a particular network traffic event will occur and the demands that such a traffic event will place on the network. If the global network management module decides that the traffic event will occur then it will pre-configure the communications network such that the traffic generated by the event can be carried across the network within pre-defined quality thresholds and without unnecessarily affecting other traffic being carried on the network. 1. 
A communications network comprising a plurality of routers; a plurality of communications links, the communications links interconnecting each of the plurality of routers to one or more other routers; a network management database, the network management database storing operational data regarding each of the plurality of routers; and a global network management module, the global network management module being in communication with the network management database and with a plurality of information sources wherein, in operation, the global network management module:
a) analyses information received from one or more of the plurality of information sources; b) determines the probability that a traffic event will occur, based on the information received in step a); and c) reconfigures the communications network if it decides that the traffic event will occur, the network reconfiguration occurring before the occurrence of the traffic event. 2. A communications network according to claim 1, wherein the global network management module reconfigures the communications network in accordance with the characteristics of the traffic that will be generated by the traffic event. 3. A communications network according to claim 1, wherein the global network management module reconfigures the communications network in accordance with quality of service measures that relate to the traffic that will be generated by the traffic event. 4. A communications network according to claim 1, wherein the reconfiguration of the communications network comprises re-routing existing network traffic to other communications links in order to leave one or more communications links that are substantially unloaded in order to be able to carry the traffic that will be generated by the traffic event. 5. A communications network according to claim 1, wherein the reconfiguration of the communications network comprises re-routing existing network traffic to other communications links such that the traffic carried by each of the plurality of communications links is substantially equal. 6. A communications network according to claim 1, wherein the communications network further comprises a plurality of segments, each network segment comprising one or more routers, one or more communications links and a network segment management module. 7. 
A communications network according to claim 6, wherein the communications network further comprises one or more supervisory management modules, wherein each of the network segment management modules is uniquely associated with one of the supervisory management modules. 8. A method of operating a communications network, wherein the communications network comprises: a plurality of routers; a plurality of communications links, the communications links interconnecting each of the plurality of routers to one or more other routers; a network management database, the network management database storing operational data regarding each of the plurality of routers; and a global network management module, the global network management module being in communication with the network management database and with a plurality of information sources, the method comprising the steps of:
i) the global network management module analysing information received from one or more of the plurality of information sources; ii) the global network management module determining the probability that a traffic event will occur, based on the information received in step i); and iii) the global network management module reconfiguring the communications network if it decides that the traffic event will occur, the network reconfiguration occurring before the occurrence of the traffic event. 9. A data carrier device comprising computer executable code for performing a method according to claim 8. | 2,400 |
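The decision loop in claims 1 and 8 of the record above (analyse external information, estimate the probability of a traffic event, reconfigure the network ahead of the event) might be sketched like this. The averaging of per-source scores and the 0.8 threshold are illustrative assumptions; the claims do not fix how the probability is computed or what threshold triggers reconfiguration.

```python
# Hypothetical sketch of the global network management module's decision:
# combine probability estimates derived from external information sources
# (TV listings, social networks, historical access data, ...) and decide
# whether to pre-configure the network before the predicted traffic event.
# The mean-of-scores model and the 0.8 threshold are assumptions.

def should_preconfigure(source_probabilities, threshold=0.8):
    if not source_probabilities:
        return False
    estimate = sum(source_probabilities) / len(source_probabilities)
    return estimate >= threshold

# e.g. TV listings and social media both suggest a likely event:
print(should_preconfigure([0.9, 0.85]))  # True
print(should_preconfigure([0.2, 0.3]))   # False
```

When the decision is positive, the reconfiguration itself would follow claims 4 and 5: re-route existing flows either to free up lightly loaded links for the expected traffic or to balance load across all links.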
7,515 | 7,515 | 13,997,445 | 2,447 | The present invention provides a communications network which is divided into a plurality of segments, with each segment comprising one or more routers and one or more communications links that connect the routers. Quality of service (QoS) thresholds can be defined for each of the segments and if it is predicted that one of these thresholds is to be breached in one of the segments, for example due to a communications link or a router being overloaded, then a segment management module associated with that segment will re-route the traffic. | 1. A communications network comprising a plurality of network segments, each of the plurality of the network segments comprising:
a) a segment management module; b) a plurality of network elements; and c) a plurality of communications links, wherein the plurality of network elements are interconnected by the plurality of communications links,
the communications network being configured such that, in operation:
i) each of the segment management modules receives operational data from the plurality of network elements in its respective network segment;
ii) on the basis of operational data received from the plurality of network elements, each segment management module determines the future performance of the plurality of network elements in the respective network segment; and
iii) if a segment management module determines that the future performance of one or more of the plurality of network elements in the respective network segment will be less than a threshold value, re-routing one or more data flows. 2. A network according to claim 1, wherein in step iii), the segment management module, in operation, determines that future performance of one of the plurality of communications links will be less than a threshold value. 3. A network according to claim 2, wherein the segment management module, in operation, polls a router preceding the communications link which was determined to have an inadequate performance to determine one or more alternative communications links over which one or more data flows may be routed. 4. A network according to claim 1, wherein in step iii), the segment management module, in operation, re-routes one or more data flows within the network segment associated with that segment management module. 5. A network according to claim 1, wherein in step iii), the segment management module, in operation, re-routes one or more data flows to one or more further network segments. 6. A network according to claim 1, wherein the threshold value for the predicted performance of a network element is derived from one or more quality of service values. 7. A method of operating a communications network, wherein the communications network comprises a plurality of network segments, each of the plurality of the network segments comprising: a segment management module; a plurality of network elements; and a plurality of communications links, wherein the plurality of network elements are interconnected by the plurality of communications links, the method comprising the steps of:
i) each of the segment management modules receiving operational data from the plurality of network elements in its respective network segment; ii) each segment management module determining, on the basis of operational data received from the plurality of network elements, the future performance of the plurality of network elements in the respective network segment; and iii) if a segment management module determines that the future performance of one or more of the plurality of network elements in the respective network segment will be less than a threshold value, re-routing one or more data flows. 8. A data carrier device comprising computer executable code for performing a method according to claim 7. | 2,400 |
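As a loose illustration (not part of the claims, and not the patented implementation), the monitoring loop recited in steps i)-iii) above can be sketched as follows. The function names, the per-element data layout, and the naive linear-extrapolation forecast are all illustrative assumptions.

```python
# Sketch of method steps i)-iii): a segment management module receives
# operational data per network element, predicts each element's future
# performance, and flags elements whose flows should be re-routed.

def predict_performance(samples):
    """Step ii): forecast future performance by simple linear
    extrapolation of the last two operational-data samples
    (an assumed, deliberately naive prediction rule)."""
    if len(samples) < 2:
        return samples[-1]
    return samples[-1] + (samples[-1] - samples[-2])

def manage_segment(element_samples, threshold):
    """Return the ids of elements whose predicted performance falls
    below the threshold, i.e. whose data flows need re-routing
    (step iii)). element_samples maps element id -> list of samples
    received from that element (step i))."""
    to_reroute = set()
    for element_id, samples in element_samples.items():
        if predict_performance(samples) < threshold:
            to_reroute.add(element_id)
    return to_reroute
```

For example, an element trending downward (0.9 then 0.8) extrapolates to roughly 0.7 and would be flagged against a 0.75 threshold, while one trending upward would not.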
7,516 | 7,516 | 13,997,485 | 2,447 | The present invention provides a communications network which is divided into a plurality of segments, with each segment comprising one or more routers and one or more communications links that connect the routers. Each of the segments also comprises a segment management module. Each of the segment management modules reports to a supervisory management module and the communications network may comprise one or more supervisory management modules. If a segment management module predicts that a QoS threshold will be breached then it may re-route a data flow within that segment. If such a re-route is not possible then it will send a request to its supervisory management module to initiate a re-routing to a further segment. | 1. A communications network comprising: a plurality of network segments, each of the plurality of network segments comprising a segment management module, one or more routers and a plurality of communications links, the communications links connecting each router to one or more other routers; and a supervisory management module, wherein, in use, the network is configured such that each segment management module predicts the performance of the or each router in its respective segment that carries a data flow based on operational data reported by the or each router and if the predicted performance exceeds a threshold value, that segment management module will
a) identify an alternative routing for the data flow within that network segment; or b) if an alternative routing for the data flow cannot be found within that network segment, send a report to the supervisory management module, the supervisory management module being configured to, in use, take action in response to the report. 2. A communications network according to claim 1, wherein the supervisory management module will increase the threshold value for the network segment which generated the report. 3. A communications network according to claim 2, wherein the supervisory management module will decrease the threshold value for one or more other network segments which carry the data flow. 4. A communications network according to claim 3, wherein the total of the threshold values for all the network segments carrying the data flow remains constant. 5. A communications network according to claim 1, wherein the supervisory management module permits the segment management module which generated the report to re-route the data flow to a further network segment. 6. A communications network according to claim 5, wherein the supervisory management module identifies a router within a further network segment for the data flow to be re-routed to. 7. A communications network according to claim 6, wherein the supervisory management module further identifies additional re-routings such that the data flow can be routed to its original destination. 8. A communications network according to claim 1, wherein the supervisory management module instructs the segment management module which generated the report to ignore that the predicted performance will exceed the threshold value. 9. A communications network according to claim 8, wherein the supervisory management module will send the instruction to the segment management module if the predicted performance will exceed the threshold value for a limited period of time. 10. 
A communications network according to claim 8, wherein the supervisory management module will send the instruction to the segment management module if the predicted performance will only exceed the threshold value by a limited amount. 11. A method of managing a communications network, the communications network comprising: a plurality of network segments, each of the plurality of network segments comprising a segment management module, one or more routers and a plurality of communications links, the communications links connecting each router to one or more other routers; and a supervisory management module,
the method comprising the steps of: i) each segment management module predicting the performance of the or each router in its respective segment that carries a data flow based on operational data reported by the or each router and if the predicted performance exceeds a threshold value, that segment management module will ii) identify an alternative routing for the data flow within that network segment; or iii) if an alternative routing for the data flow cannot be found within that network segment, send a report to the supervisory management module, the supervisory management module being configured to, in use, take action in response to the report. 12. A data carrier device comprising computer executable code for performing a method according to claim 11. | 2,400 |
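As a loose illustration (not part of the claims), the escalation in claim 1 and the threshold rebalancing of claims 2-4 might be sketched as below. All names, the data shapes, and the fixed rebalancing step `delta` are illustrative assumptions; in particular, claim 4's constraint that the total of the thresholds stays constant is modeled by spreading the increase for one segment as an equal decrease over the others.

```python
# Sketch: a segment first tries an intra-segment re-route (claim 1a);
# failing that, it escalates to the supervisory module (claim 1b),
# which raises that segment's threshold and lowers the others so the
# total of all thresholds remains constant (claims 2-4).

def handle_overload(segment, alternatives, thresholds, delta=0.1):
    """alternatives maps segment -> list of alternative routings;
    thresholds maps segment -> current threshold value.
    Returns (action, routing_or_None, updated_thresholds)."""
    if alternatives.get(segment):
        # a) an alternative routing exists within the segment
        return ("reroute", alternatives[segment][0], thresholds)
    # b) escalate: supervisor rebalances the thresholds
    others = [s for s in thresholds if s != segment]
    updated = dict(thresholds)
    updated[segment] += delta
    for s in others:
        updated[s] -= delta / len(others)  # keep the total constant
    return ("escalated", None, updated)
```

Note that the supervisory module's other possible responses (permitting a re-route to a further segment, or instructing the segment to tolerate a brief or small excursion, claims 5-10) are omitted from this sketch.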
7,517 | 7,517 | 13,725,041 | 2,483 | A robust system and method for estimating camera rotation in image sequences. A rotation-based reconstruction technique is described that is directed to performing reconstruction for image sequences with a zero or near-zero translation component. The technique may estimate only the rotation component of the camera motion in an image sequence, and may also estimate the camera intrinsic parameters if not known. Input to the technique may include an image sequence, and output may include the camera intrinsic parameters and the rotation parameters for all the images in the sequence. By only estimating a rotation component of camera motion, the assumption is made that the camera is not moving throughout the entire sequence. However, the camera is allowed to rotate and zoom arbitrarily. The technique may support both the case where the camera intrinsic parameters are known and the case where the camera intrinsic parameters are not known. | 1. A method, comprising:
reconstructing, by one or more computing devices, a rotation component of camera motion for a plurality of frames in an image sequence, wherein said reconstructing comprises:
obtaining a plurality of point trajectories for the image sequence, each point trajectory tracking a feature across two or more of the frames;
determining a subset of the plurality of frames as keyframes in the image sequence;
generating, according to the point trajectories, an initial reconstruction including two or more of the keyframes and covering a subset of the plurality of frames, wherein said generating estimates the rotation component of camera motion for each of the two or more keyframes in the initial reconstruction;
determining and reconstructing additional keyframes to cover the image sequence, wherein the additional keyframes are added to each end of the initial reconstruction, and wherein the rotation component of camera motion is estimated for each of the additional keyframes;
reconstructing one or more frames from the plurality of frames that have not yet been included in the reconstruction, wherein the one or more frames are added to the current reconstruction to complete the reconstruction, and wherein the rotation component of camera motion is estimated for each of the one or more reconstructed frames. 2. The method as recited in claim 1, wherein camera motion of the image sequence has a zero or near-zero translation component. 3. The method as recited in claim 1, wherein the keyframes are temporally spaced frames in the plurality of frames of the image sequence. 4. The method as recited in claim 1, wherein said generating an initial reconstruction comprises:
selecting a pair of the keyframes as initial keyframes for the image sequence according to the set of point trajectories; and generating the initial reconstruction according to the pair of initial keyframes and at least a portion of the set of point trajectories. 5. The method as recited in claim 1, wherein said determining and reconstructing additional keyframes comprises:
selecting a next keyframe to be added to the current reconstruction from a set of the keyframes that are not covered by the current reconstruction; reconstructing the rotation component of camera motion for the selected frame according to the current reconstruction and at least a portion of a set of point trajectories for the image sequence, wherein said reconstructing adds the selected frame to the current reconstruction; and repeating said selecting and said reconstructing until the current reconstruction covers the image sequence. 6. The method as recited in claim 1, further comprising globally optimizing the reconstruction after adding one or more additional frames to the reconstruction, wherein said globally optimizing the reconstruction comprises refining the reconstruction according to a nonlinear optimization technique applied globally to the reconstruction. 7. The method as recited in claim 1, further comprising:
determining one or more outlier points in the reconstruction and removing the determined outlier points from the reconstruction, wherein the outlier points are added to a set of current outlier points; and determining one or more inlier points from the set of outlier points and adding the determined inlier points to the current reconstruction. 8. The method as recited in claim 1, wherein camera intrinsic parameters are known for the input image sequence. 9. The method as recited in claim 1, wherein one or more camera intrinsic parameters are not known for the input image sequence, the method further comprising estimating the one or more camera intrinsic parameters for each frame added to the reconstruction. 10. A system, comprising:
one or more processors; and a memory comprising program instructions, wherein the program instructions are executable by at least one of the one or more processors to reconstruct a rotation component of camera motion for a plurality of frames in an image sequence, wherein, to reconstruct the rotation component for the plurality of frames, the program instructions are executable by at least one of the one or more processors to:
obtain a plurality of point trajectories for the image sequence, each point trajectory tracking a feature across two or more of the frames;
determine a subset of the plurality of frames as keyframes in the image sequence;
generate, according to the point trajectories, an initial reconstruction including two or more of the keyframes and covering a subset of the plurality of frames, wherein said generating estimates the rotation component of camera motion for each of the two or more keyframes in the initial reconstruction;
determine and reconstruct additional keyframes to cover the image sequence, wherein the additional keyframes are added to each end of the initial reconstruction, and wherein the rotation component of camera motion is estimated for each of the additional keyframes;
reconstruct one or more frames from the plurality of frames that have not yet been included in the reconstruction, wherein the one or more frames are added to the current reconstruction to complete the reconstruction, and wherein the rotation component of camera motion is estimated for each of the one or more reconstructed frames. 11. The system as recited in claim 10, wherein camera motion of the image sequence has a zero or near-zero translation component. 12. The system as recited in claim 10, wherein, to generate an initial reconstruction, the program instructions are executable by at least one of the one or more processors to:
select a pair of the keyframes as initial keyframes for the image sequence according to the set of point trajectories; and generate the initial reconstruction according to the pair of initial keyframes and at least a portion of the set of point trajectories. 13. The system as recited in claim 10, wherein, to determine and reconstruct additional keyframes, the program instructions are executable by at least one of the one or more processors to:
select a next keyframe to be added to the current reconstruction from a set of the keyframes that are not covered by the current reconstruction; reconstruct the rotation component of camera motion for the selected frame according to the current reconstruction and at least a portion of a set of point trajectories for the image sequence, wherein said reconstructing adds the selected frame to the current reconstruction; and repeat said selecting and said reconstructing until the current reconstruction covers the image sequence. 14. The system as recited in claim 10, wherein the program instructions are further executable by at least one of the one or more processors to globally optimize the reconstruction after adding one or more additional frames to the reconstruction by refining the reconstruction according to a nonlinear optimization technique applied globally to the reconstruction. 15. The system as recited in claim 10, wherein one or more camera intrinsic parameters are not known for the input image sequence, wherein the program instructions are further executable by at least one of the one or more processors to estimate the one or more camera intrinsic parameters for each frame added to the reconstruction. 16. A non-transitory computer-readable storage medium storing program instructions, wherein the program instructions are computer-executable to implement:
reconstructing a rotation component of camera motion for a plurality of frames in an image sequence, wherein, in said reconstructing, the program instructions are computer-executable to implement:
obtaining a plurality of point trajectories for the image sequence, each point trajectory tracking a feature across two or more of the frames;
determining a subset of the plurality of frames as keyframes in the image sequence;
generating, according to the point trajectories, an initial reconstruction including two or more of the keyframes and covering a subset of the plurality of frames, wherein said generating estimates the rotation component of camera motion for each of the two or more keyframes in the initial reconstruction;
determining and reconstructing additional keyframes to cover the image sequence, wherein the additional keyframes are added to each end of the initial reconstruction, and wherein the rotation component of camera motion is estimated for each of the additional keyframes;
reconstructing one or more frames from the plurality of frames that have not yet been included in the reconstruction, wherein the one or more frames are added to the current reconstruction to complete the reconstruction, and wherein the rotation component of camera motion is estimated for each of the one or more reconstructed frames. 17. The non-transitory computer-readable storage medium as recited in claim 16, wherein camera motion of the image sequence has a zero or near-zero translation component. 18. The non-transitory computer-readable storage medium as recited in claim 16, wherein, in said determining and reconstructing additional keyframes, the program instructions are computer-executable to implement:
selecting a next keyframe to be added to the current reconstruction from a set of the keyframes that are not covered by the current reconstruction; reconstructing the rotation component of camera motion for the selected frame according to the current reconstruction and at least a portion of a set of point trajectories for the image sequence, wherein said reconstructing adds the selected frame to the current reconstruction; and repeating said selecting and said reconstructing until the current reconstruction covers the image sequence. 19. The non-transitory computer-readable storage medium as recited in claim 16, wherein the program instructions are further computer-executable to implement globally optimizing the reconstruction after adding one or more additional frames to the reconstruction, wherein said globally optimizing the reconstruction refines the reconstruction according to a nonlinear optimization technique applied globally to the reconstruction. 20. The non-transitory computer-readable storage medium as recited in claim 16, wherein the program instructions are further computer-executable to implement:
determining one or more outlier points in the reconstruction and removing the determined outlier points from the reconstruction, wherein the outlier points are added to a set of current outlier points; and determining one or more inlier points from the set of outlier points and adding the determined inlier points to the current reconstruction. 21. The non-transitory computer-readable storage medium as recited in claim 16, wherein one or more camera intrinsic parameters are not known for the input image sequence, wherein the program instructions are further computer-executable to implement estimating the one or more camera intrinsic parameters for each frame added to the reconstruction.
obtaining a plurality of point trajectories for the image sequence, each point trajectory tracking a feature across two or more of the frames;
determining a subset of the plurality of frames as keyframes in the image sequence;
generating, according to the point trajectories, an initial reconstruction including two or more of the keyframes and covering a subset of the plurality of frames, wherein said generating estimates the rotation component of camera motion for each of the two or more keyframes in the initial reconstruction;
determining and reconstructing additional keyframes to cover the image sequence, wherein the additional keyframes are added to each end of the initial reconstruction, and wherein the rotation component of camera motion is estimated for each of the additional keyframes;
reconstructing one or more frames from the plurality of frames that have not yet been included in the reconstruction, wherein the one or more frames are added to the current reconstruction to complete the reconstruction, and wherein the rotation component of camera motion is estimated for each of the one or more reconstructed frames. 17. The non-transitory computer-readable storage medium as recited in claim 16, wherein camera motion of the image sequence has a zero or near-zero translation component. 18. The non-transitory computer-readable storage medium as recited in claim 16, wherein, in said determining and reconstructing additional keyframes, the program instructions are computer-executable to implement:
selecting a next keyframe to be added to the current reconstruction from a set of the keyframes that are not covered by the current reconstruction; reconstructing the rotation component of camera motion for the selected frame according to the current reconstruction and at least a portion of a set of point trajectories for the image sequence, wherein said reconstructing adds the selected frame to the current reconstruction; and repeating said selecting and said reconstructing until the current reconstruction covers the image sequence. 19. The non-transitory computer-readable storage medium as recited in claim 16, wherein the program instructions are further computer-executable to implement globally optimizing the reconstruction after adding one or more additional frames to the reconstruction, wherein said globally optimizing the reconstruction refines the reconstruction according to a nonlinear optimization technique applied globally to the reconstruction. 20. The non-transitory computer-readable storage medium as recited in claim 16, wherein the program instructions are further computer-executable to implement:
determining one or more outlier points in the reconstruction and removing the determined outlier points from the reconstruction, wherein the outlier points are added to a set of current outlier points; and determining one or more inlier points from the set of outlier points and adding the determined inlier points to the current reconstruction. 21. The non-transitory computer-readable storage medium as recited in claim 16, wherein one or more camera intrinsic parameters are not known for the input image sequence, wherein the program instructions are further computer-executable to implement estimating the one or more camera intrinsic parameters for each frame added to the reconstruction. | 2,400 |
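The claims above recite estimating the rotation component of camera motion from point trajectories but do not spell out an estimator. A minimal sketch of one standard approach — assuming the camera intrinsics are known (claim 8) so matched image points lift to unit bearing vectors, and translation is zero or near-zero (claims 11 and 17) so two frames' bearings are related by a pure rotation — is the SVD-based orthogonal Procrustes (Kabsch/Wahba) solution. The function name and array layout here are illustrative, not from the patent.

```python
import numpy as np

def estimate_relative_rotation(bearings_a, bearings_b):
    """Estimate R such that bearings_b[i] ≈ R @ bearings_a[i].

    bearings_a, bearings_b: (N, 3) arrays of unit bearing vectors for the
    same tracked features observed in two frames. Valid when the camera
    motion has a zero or near-zero translation component.
    """
    # Kabsch/Wahba: build the 3x3 cross-correlation matrix and take its SVD.
    H = bearings_a.T @ bearings_b
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (determinant -1) in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R
```

An incremental reconstruction as claimed would apply this pairwise estimate when adding each keyframe, then refine all rotations with a global nonlinear optimization (claims 14 and 19).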
7,518 | 7,518 | 14,864,646 | 2,424 | In one aspect, an example method includes: receiving an instruction to apply a first particular DVE to a temporal portion of a video segment; making a determination that no temporal portion of the video segment satisfies each condition in a condition set; and based, at least in part, on the received instruction and the determination, transmitting to a DVE system a first instruction that causes the DVE system to apply a second particular DVE to a particular temporal portion of the video segment, thereby causing first content within a region of the particular temporal portion of the video segment to be removed, and a second instruction that causes the DVE system to apply the first particular DVE to at least part of the particular temporal portion of the video segment, thereby causing second content to be overlaid within the region of at least part of the particular temporal portion. | 1. A method for use in a video-broadcast system having a digital video-effect (DVE) system, the method comprising:
receiving an instruction to apply a first particular DVE of a particular overlay-DVE type to a temporal portion of a video segment based, at least in part, on the temporal portion of the video segment being suitable for having a DVE of the particular overlay-DVE type applied thereto, wherein the first particular DVE corresponds to a duration; making a determination that no temporal portion of the video segment satisfies each condition in a condition set, wherein the condition set comprises (i) a first condition that the temporal portion of the video segment has been identified as being suitable for having a DVE of the particular overlay-DVE type applied thereto, and (ii) a second condition that the temporal portion of the video segment is of at least the duration; and based, at least in part, on the received instruction and the determination, transmitting to the DVE system (i) a first instruction that causes the DVE system to apply a second particular DVE to a particular temporal portion of the video segment, thereby causing first content within a region of the particular temporal portion of the video segment to be removed from the region, and (ii) a second instruction that causes the DVE system to apply the first particular DVE to at least part of the particular temporal portion of the video segment, thereby causing second content to be overlaid within the region of at least part of the particular temporal portion. 2. The method of claim 1, wherein receiving the instruction comprises (i) receiving a portion of a traffic schedule and (ii) extracting the instruction from the received portion of the traffic schedule. 3. The method of claim 1, wherein the first particular DVE comprises a particular ticker DVE. 4. The method of claim 1, wherein the condition set further comprises a third condition that a start time of the temporal portion of the video is within a period of time associated with the received instruction. 5. 
The method of claim 1, wherein the determination is a first determination, the method further comprising:
making a second determination that the second content has a particular property, wherein based, at least in part, on the received instruction and the first determination comprises based, at least in part, on the received instruction, the first determination, and the second determination. 6. The method of claim 1, wherein the DVE system comprises a stunt switcher. 7. The method of claim 1, wherein the second particular DVE comprises a particular pull-back DVE. 8. A non-transitory computer-readable medium having stored thereon program instructions that when executed cause performance of a set of acts comprising:
receiving an instruction to apply a first particular DVE of a particular overlay-DVE type to a temporal portion of a video segment based, at least in part, on the temporal portion of the video segment being suitable for having a DVE of the particular overlay-DVE type applied thereto, wherein the first particular DVE corresponds to a duration; making a determination that no temporal portion of the video segment satisfies each condition in a condition set, wherein the condition set comprises (i) a first condition that the temporal portion of the video segment has been identified as being suitable for having a DVE of the particular overlay-DVE type applied thereto, and (ii) a second condition that the temporal portion of the video segment is of at least the duration; and based, at least in part, on the received instruction and the determination, transmitting to a DVE system (i) a first instruction that causes the DVE system to apply a second particular DVE to a particular temporal portion of the video segment, thereby causing first content within a region of the particular temporal portion of the video segment to be removed from the region, and (ii) a second instruction that causes the DVE system to apply the first particular DVE to at least part of the particular temporal portion of the video segment, thereby causing second content to be overlaid within the region of at least part of the particular temporal portion. 9. The non-transitory computer-readable medium of claim 8, wherein receiving the instruction comprises (i) receiving a portion of a traffic schedule and (ii) extracting the instruction from the received portion of the traffic schedule. 10. The non-transitory computer-readable medium of claim 8, wherein the first particular DVE comprises a particular ticker DVE. 11. 
The non-transitory computer-readable medium of claim 8, wherein the condition set further comprises a third condition that a start time of the temporal portion of the video is within a period of time associated with the received instruction. 12. The non-transitory computer-readable medium of claim 8, wherein the determination is a first determination, the set of acts further comprising:
making a second determination that the second content has a particular property, wherein based, at least in part, on the received instruction and the first determination comprises based, at least in part, on the received instruction, the first determination, and the second determination. 13. The non-transitory computer-readable medium of claim 8, wherein the DVE system comprises a stunt switcher. 14. The non-transitory computer-readable medium of claim 8, wherein the second particular DVE comprises a particular pull-back DVE. 15. A video-broadcast system comprising:
an automation system; a communication network; and a digital video-effect (DVE) system connected to the automation system via the communication network, wherein the automation system is configured for performing a set of acts comprising:
receiving an instruction to apply a first particular DVE of a particular overlay-DVE type to a temporal portion of a video segment based, at least in part, on the temporal portion of the video segment being suitable for having a DVE of the particular overlay-DVE type applied thereto, wherein the first particular DVE corresponds to a duration;
making a determination that no temporal portion of the video segment satisfies each condition in a condition set, wherein the condition set comprises (i) a first condition that the temporal portion of the video segment has been identified as being suitable for having a DVE of the particular overlay-DVE type applied thereto, and (ii) a second condition that the temporal portion of the video segment is of at least the duration; and
based, at least in part, on the received instruction and the determination, transmitting to the DVE system via the communication network (i) a first instruction that causes the DVE system to apply a second particular DVE to a particular temporal portion of the video segment, thereby causing first content within a region of the particular temporal portion of the video segment to be removed from the region, and (ii) a second instruction that causes the DVE system to apply the first particular DVE to at least part of the particular temporal portion of the video segment, thereby causing second content to be overlaid within the region of at least part of the particular temporal portion. 16. The video-broadcast system of claim 15, wherein receiving the instruction comprises (i) receiving a portion of a traffic schedule and (ii) extracting the instruction from the received portion of the traffic schedule. 17. The video-broadcast system of claim 15, wherein the first particular DVE comprises a particular ticker DVE. 18. The video-broadcast system of claim 15, wherein the determination is a first determination, the set of acts further comprising:
making a second determination that the second content has a particular property, wherein based, at least in part, on the received instruction and the first determination comprises based, at least in part, on the received instruction, the first determination, and the second determination. 19. The video-broadcast system of claim 15, wherein the DVE system comprises a stunt switcher. 20. The video-broadcast system of claim 15, wherein the second particular DVE comprises a particular pull-back DVE. | 2,400 |
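The decision logic of claim 1 — apply the overlay DVE directly when some temporal portion satisfies the condition set (suitable for the overlay-DVE type and at least the required duration), otherwise transmit a two-step fallback that first removes content from the region (e.g., a pull-back DVE, claim 7) and then overlays within it — can be sketched as below. The `Portion` data model and the choice of the longest portion as the fallback target are assumptions for illustration; the claims do not specify how the particular temporal portion is selected.

```python
from dataclasses import dataclass

@dataclass
class Portion:
    start: float            # seconds into the video segment
    end: float
    overlay_suitable: bool  # identified as suitable for the overlay-DVE type

    @property
    def duration(self) -> float:
        return self.end - self.start

def plan_instructions(portions, overlay_dve, pullback_dve, required_duration):
    """Return the ordered DVE instructions to transmit to the DVE system."""
    qualifying = [p for p in portions
                  if p.overlay_suitable and p.duration >= required_duration]
    if qualifying:
        # Some portion satisfies every condition in the condition set:
        # the overlay DVE can be applied to it directly.
        return [(overlay_dve, qualifying[0])]
    # No portion qualifies: first clear the region with a second DVE,
    # then apply the overlay DVE within the cleared region.
    target = max(portions, key=lambda p: p.duration)
    return [(pullback_dve, target), (overlay_dve, target)]
```

For example, with no suitable portion of at least 10 seconds, the planner emits the pull-back instruction followed by the overlay instruction, matching the two-instruction transmission recited in the claims.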
7,519 | 7,519 | 15,396,512 | 2,468 | A method for weighted data traffic routing can include receiving a data packet at a data switch, where the data switch includes a plurality of egress ports. The method can also include, for each of the egress ports, generating an independent hash value based on one or more fields of the data packet and generating a weighted hash value by scaling the hash value using a scaling factor. The scaling factor can be based on at least two traffic routing weights of a plurality of respective traffic routing weights associated with the plurality of egress ports. The method can further include selecting an egress port of the plurality of egress ports based on the weighted hash value for each of the egress ports and transmitting the data packet using the selected egress port. | 1. A method comprising:
receiving, at a data switch, a data packet, the data switch including a plurality of egress ports; for a given received data packet:
for each of the egress ports:
generating, by the data switch, a hash value based on one or more fields of the given received data packet using a hash function associated with the egress port, wherein the hash function associated with each egress port is independent and different from the hash functions associated with the other egress ports;
selecting, for the given received data packet, an egress port of the plurality of egress ports based on the respective hash values for each of the egress ports generated for the given received data packet; and
transmitting, by the data switch, the given received data packet using the selected egress port. 2. The method of claim 1, wherein the one or more fields of the given received data packet include one or more fields of a header of the given received data packet, the one or more fields of the header of the given received data packet having fixed values for each received data packet of a data flow associated with the given received data packet. 3. The method of claim 1, wherein generating a hash value includes generating a weighted hash value based on a plurality of respective traffic routing weights associated with the plurality of egress ports. 4. The method of claim 3, wherein generating a weighted hash value for a given egress port of the plurality of egress ports is based on a ratio of a probability of the given egress port being selected in a joint probability distribution for the plurality of egress ports with a probability of an egress port with a lowest routing weight of the respective traffic routing weights being selected in the joint probability distribution, the probability of the given egress port being selected in the joint probability distribution being proportional with a routing weight associated with the given egress port. 5. The method of claim 3, wherein the plurality of routing weights are normalized such that a smallest routing weight of the plurality of routing weights has a normalized value of 1. 6. The method of claim 3, wherein generating, for the given received data packet, the hash value for each of the egress ports includes normalizing a plurality of respective hash values for the plurality of egress ports to respective values in a range of 0 to 1. 7. The method of claim 1, wherein selecting, for the given received data packet, the egress port of the plurality of egress ports includes selecting an egress port of the plurality of egress ports having a highest respective hash value. 8. 
A data switch including a plurality of egress ports, the data switch comprising:
at least one memory that is configured to store instructions; and at least one processor that is operably coupled to the at least one memory and that is configured to process the instructions to cause the data switch to:
receive a data packet;
for a given received data packet:
for each of the egress ports:
generate a hash value based on one or more fields of the given received data packet using a hash function associated with the egress port, wherein the hash function associated with each egress port is independent and different from the hash functions associated with the other egress ports;
select, for the given received data packet, an egress port of the plurality of egress ports based on the respective hash values for each of the egress ports generated for the given received data packet; and
transmit the given received data packet using the selected egress port. 9. The data switch of claim 8, wherein the one or more fields of the given received data packet include one or more fields of a header of the given received data packet, the one or more fields of the header of the given received data packet having fixed values for each data packet of a data flow associated with the given received data packet. 10. The data switch of claim 8, wherein the generated hash value includes a weighted hash value generated based on a plurality of respective traffic routing weights associated with the plurality of egress ports. 11. The data switch of claim 10, wherein a weighted hash value for a given egress port of the plurality of egress ports is based on a ratio of a probability of the given egress port being selected in a joint probability distribution for the plurality of egress ports with a probability of an egress port with a lowest routing weight of the respective traffic routing weights being selected in the joint probability distribution, the probability of the given egress port being selected in the joint probability distribution being proportional with a routing weight associated with the given egress port. 12. The data switch of claim 10, wherein the plurality of routing weights are normalized such that a smallest routing weight of the plurality of routing weights has a normalized value of 1. 13. The data switch of claim 10, wherein generating, for the given received data packet, the hash value for each of the egress ports includes normalizing a plurality of respective hash values for the plurality of egress ports to respective values in a range of 0 to 1. 14. The data switch of claim 8, wherein selecting, for the given received data packet, the egress port of the plurality of egress ports includes selecting an egress port of the plurality of egress ports having a highest respective hash value. 
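The routing scheme claimed above — an independent hash per egress port for each packet, with the highest resulting value selecting the port (claims 7 and 14) and header fields keeping a flow pinned to one port (claims 2 and 9) — resembles weighted rendezvous (highest-random-weight) hashing. The sketch below uses the classic -w/ln(h) scaling, which makes each port's selection probability proportional to its weight; note this is an illustrative stand-in, not the probability-ratio scaling factor recited in claim 4. The function name and key format are assumptions.

```python
import hashlib
import math

def select_egress_port(flow_key: str, port_weights: dict) -> str:
    """Pick an egress port via weighted rendezvous hashing.

    An independent hash per port is obtained by folding the port id into
    the hash input; using only flow-invariant header fields as flow_key
    keeps all packets of a flow on the same port.
    """
    best_port, best_score = None, float("-inf")
    for port, weight in port_weights.items():
        digest = hashlib.sha256(f"{flow_key}|{port}".encode()).digest()
        # Normalize the 64-bit hash prefix into the open interval (0, 1)
        # (cf. the 0-to-1 normalization of claims 6 and 13).
        h = (int.from_bytes(digest[:8], "big") + 0.5) / 2.0**64
        # Weighted rendezvous scaling: selection probability ∝ weight.
        score = -weight / math.log(h)
        if score > best_score:
            best_port, best_score = port, score
    return best_port
```

Because the hash depends only on the flow key and port id, the selection is deterministic per flow, while across many flows a port with twice the weight attracts roughly twice the traffic.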
| A method for weighted data traffic routing can include receiving a data packet at data switch, where the data switch includes a plurality of egress ports. The method can also include, for each of the egress ports, generating an independent hash value based on one or more fields of the data packet and generating a weighted hash value by scaling the hash value using a scaling factor. The scaling factor can be based on at least two traffic routing weights of a plurality of respective traffic routing weights associated with the plurality of egress ports. The method can further include selecting an egress port of the plurality of egress ports based on the weighted hash value for each of the egress ports and transmitting the data packet using the selected egress port.1. A method comprising:
receiving, at a data switch, a data packet, the data switch including a plurality of egress ports; for a given received data packet:
for each of the egress ports:
generating, by the data switch, a hash value based on one or more fields of the given received data packet using a hash function associated with the egress port, wherein the hash function associated with each egress port is independent and different from the hash functions associated with the other egress ports;
selecting, for the given received data packet, an egress port of the plurality of egress ports based on the respective hash values for each of the egress ports generated for the given received data packet; and
transmitting, by the data switch, the given received data packet using the selected egress port. 2. The method of claim 1, wherein the one or more fields of the given received data packet include one or more fields of a header of the given received data packet, the one or more fields of the header of the given received data packet having fixed values for each received data packet of a data flow associated with the given received data packet. 3. The method of claim 1, wherein generating a hash value includes generating a weighted hash value based on a plurality of respective traffic routing weights associated with the plurality of egress ports. 4. The method of claim 3, wherein generating a weighted hash value for a given egress port of the plurality of egress ports is based on a ratio of a probability of the given egress port being selected in a joint probability distribution for the plurality of egress ports with a probability of an egress port with a lowest routing weight of the respective traffic routing weights being selected in the joint probability distribution, the probability of the given egress port being selected in the joint probability distribution being proportional with a routing weight associated with the given egress port. 5. The method of claim 3, wherein the plurality of routing weights are normalized such that a smallest routing weight of the plurality of routing weights has a normalized value of 1. 6. The method of claim 3, wherein generating, for the given received data packet the hash value for each of egress ports includes normalizing a plurality of respective hash values for the plurality of egress ports to respective values in a range of 0 to 1. 7. The method of claim 1, wherein selecting, for the given received data packet, the egress port of the plurality of egress ports includes selecting an egress port of the plurality of egress ports having a highest respective hash value. 8. 
A data switch including a plurality of egress ports, the data switch comprising:
at least one memory that is configured to store instructions; and at least one processor that is operably coupled to the at least one memory and that is configured to process the instructions to cause the data switch to:
receive a data packet;
for a given received data packet:
for each of the egress ports:
generate a hash value based on one or more fields of the given received data packet using a hash function associated with the egress port, wherein the hash function associated with each egress port is independent and different from the hash functions associated with the other egress ports;
select, for the given received data packet, an egress port of the plurality of egress ports based on the respective hash values for each of the egress ports generated for the given received data packet; and
transmit the given received data packet using the selected egress port. 9. The data switch of claim 8, wherein the one or more fields of the given received data packet include one or more fields of a header of the given received data packet, the one or more fields of the header of the given received data packet having fixed values for each data packet of a data flow associated with the given received data packet. 10. The data switch of claim 8, wherein the generated hash value includes a weighted hash value generated based on a plurality of respective traffic routing weights associated with the plurality of egress ports. 11. The data switch of claim 10, wherein a weighted hash value for a given egress port of the plurality of egress ports is based on a ratio of a probability of the given egress port being selected in a joint probability distribution for the plurality of egress ports with a probability of an egress port with a lowest routing weight of the respective traffic routing weights being selected in the joint probability distribution, the probability of the given egress port being selected in the joint probability distribution being proportional with a routing weight associated with the given egress port. 12. The data switch of claim 10, wherein the plurality of routing weights are normalized such that a smallest routing weight of the plurality of routing weights has a normalized value of 1. 13. The data switch of claim 10, wherein generating, for the given received data packet, the hash value for each of the egress ports includes normalizing a plurality of respective hash values for the plurality of egress ports to respective values in a range of 0 to 1. 14. The data switch of claim 8, wherein selecting, for the given received data packet, the egress port of the plurality of egress ports includes selecting an egress port of the plurality of egress ports having a highest respective hash value. | 2,400 |
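The data-switch claims above describe a weighted, per-port independent hashing scheme: each egress port gets its own hash of the packet's flow-identifying header fields, hashes are normalized into (0, 1], weighted by the port's routing weight, and the port with the highest value wins. A minimal sketch of this idea is weighted highest-random-weight (rendezvous) hashing; the exponent trick `h**(1/w)` is one well-known way to make selection probability proportional to weight, and is an assumption here, not taken from the patent text. All names below are illustrative.

```python
import hashlib

def port_hash(flow_key: bytes, port_id: int) -> float:
    """Independent hash per egress port, normalized into (0, 1]."""
    digest = hashlib.sha256(flow_key + port_id.to_bytes(4, "big")).digest()
    value = int.from_bytes(digest[:8], "big")
    return (value + 1) / 2**64  # shift by 1 so the hash is never exactly 0

def select_egress_port(flow_key: bytes, weights: dict) -> int:
    """Pick the egress port with the highest weighted hash.

    Raising the normalized hash to the power 1/w makes a port's selection
    probability proportional to its routing weight w, while packets with the
    same header fields (same flow) always map to the same port.  Weights are
    normalized so the smallest routing weight is 1, mirroring claim 5.
    """
    w_min = min(weights.values())
    return max(weights,
               key=lambda p: port_hash(flow_key, p) ** (w_min / weights[p]))
```

Because each port's hash is independent, removing or adding a port only remaps the flows that hashed highest to that port, which is the usual operational argument for rendezvous-style selection over modulo hashing.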
7,520 | 7,520 | 14,770,838 | 2,424 | A host device ( 300 ) provides wireless docking to a dockee device ( 250 ). The host has a remote client unit ( 210 ) for providing at least one audio/video (AV) rendering function to an application ( 252 ) via remote server unit ( 251 ) in a dockee device. The host has transfer units ( 211,212,213,214 ) arranged for enabling transmitting a downstream of first AV data to the dockee device, and receiving an upstream of second AV data to be rendered from the dockee device. The host device has an AV router ( 310 ) for processing the downstream and the upstream so as to replace first AV data in the downstream by a predetermined pattern before transmitting the downstream to the dockee device, and replace at least part of second AV data in the upstream by at least part of the first AV data after receiving the upstream from the dockee device and before rendering the AV data. Advantageously required bandwidth is reduced in the wireless communication. | 1. A wireless docking system comprising a host device and a dockee device, the host device being configured for rendering audio or video (AV) data, the host device comprising:
a host communication unit for accommodating wireless communication, a remote client unit for providing at least one AV rendering function to a remote client for enabling rendering the audio data via a sound device or the video data via a screen, at least one transfer unit configured for enabling transmitting a downstream of first AV data to the dockee device, and receiving an upstream of second AV data to be rendered from the dockee device, the dockee device comprising: a dockee communication unit for accommodating the wireless communication, a remote server unit for cooperating with the remote client unit for enabling said AV rendering function, an application unit for receiving of the downstream and for generating of the upstream, and the host device further comprising an AV router configured for processing the downstream and the upstream so as to replace the first AV data by a predetermined pattern before transmitting the downstream to the dockee device, and replace at least part of the second AV data by at least part of the first AV data after receiving the upstream from the dockee device and before rendering via said AV rendering function, wherein the dockee device is configured for including, as part of the upstream, the predetermined pattern into the second AV data, and the AV router is configured for recognizing the predetermined pattern in the second AV data, and replacing the second AV data corresponding to said recognized pattern by at least part of the first AV data. 2. A host device for use in a wireless docking system, the host device being configured for rendering audio or video (AV) data, the host device comprising:
a host communication unit for accommodating wireless communication, a remote client unit for providing at least one AV rendering function to a remote client for enabling rendering the audio data via a sound device or the video data via a screen, at least one transfer unit configured for enabling transmitting a downstream of first AV data to the dockee device, and receiving an upstream of second AV data to be rendered from the dockee device, and an AV router configured for processing the downstream and the upstream so as to replace the first AV data by a predetermined pattern before transmitting the downstream to the dockee device, and replace at least part of the second AV data by at least part of the first AV data after receiving the upstream from the dockee device and before rendering via said AV rendering function, wherein the AV router is configured for recognizing the predetermined pattern in the second AV data, and replacing the second AV data corresponding to said recognized pattern by at least part of the first AV data. 3. The device as claimed in claim 2, wherein the AV router is configured for receiving rendering commands from the dockee device, the rendering commands being indicative of said replacing at least part of the second AV data by at least part of the first AV data. 4. The device as claimed in claim 3, wherein the rendering commands comprise video rendering commands, comprising at least one of:
a command indicative of an area of the screen for rendering the first AV data; a command indicative of an area of the first AV data to be rendered on the screen; a command indicative of an AV pattern in the second AV data indicative of the rendering area; a command indicative of a reference point for rendering the first AV data; a command indicative of a visual marker to be detected in the second AV data for positioning the first AV data; a command indicative of an indicator for selecting a predefined location for rendering the first AV data. 5. The device as claimed in claim 3, wherein the rendering commands comprise graphical rendering commands, comprising at least one of:
a command indicative of a graphical element to be rendered; a command indicative of a location of rendering a graphical element; a command indicative of a time indication for starting, stopping or temporarily displaying a graphical element; a command indicative of a graphical user interface for enabling interaction with a user; a command indicative of graphical control elements for enabling control via a user action. 6. The device as claimed in claim 3, wherein the rendering commands comprise audio rendering commands, comprising at least one of:
a command indicative of a gain factor for rendering audio data of the first AV data; a command indicative of a mixing ratio for rendering a combination of audio data of the first AV data and audio data of the second AV data. 7. The device as claimed in claim 1, wherein the AV router is configured for
recognizing the predetermined pattern in a patterned area of display output data in the second AV data, replacing the second AV data in the patterned area by a corresponding amount of the first AV data. 8. The device as claimed in claim 7, wherein the AV router is configured for:
recognizing in the patterned area, a scaling and/or cropping of the predetermined pattern, processing the first AV data corresponding to the scaling and/or cropping, and replacing the second AV data in the patterned area by a corresponding amount of the processed first AV data. 9. The device as claimed in claim 1, wherein the predetermined pattern includes at least one of:
a video pattern having a single background color; a video pattern having one or more geometrical objects; a video pattern having a single foreground color; an audio pattern of silence; an audio pattern of a sequence of predetermined sounds; temporal information, and the AV router is configured for using the temporal information to determine a delay between sending the first data in the downstream and receiving the second AV data in the upstream, and delaying AV content from the AV input device according to the determined delay before said replacing so as to synchronize the AV data to be rendered. 10. A dockee device for use in a wireless docking system, the dockee device comprising:
a dockee communication unit for accommodating said wireless communication, a remote server unit for cooperating with the remote client unit for enabling said AV rendering function, an application unit for receiving of the downstream and for generating of the upstream, wherein the dockee device is configured for including, as part of the upstream, the predetermined pattern into the second AV data. 11. The dockee device as claimed in claim 10, wherein the dockee device further comprises a dockee router for routing and processing the upstream and the downstream in the dockee device, and for communicating with the AV router so as to exchange routing control commands so as to determine the AV routing as supported by the host device. 12. The dockee device as claimed in claim 11, wherein the dockee router comprises:
a virtual webcam driver to provide to a dockee application a first predetermined pattern; or a virtual external AV driver to provide to a dockee application a second predetermined pattern. 13. A method of wireless docking for a host device in a wireless docking system, the method comprising:
providing at least one AV rendering function to a remote client for enabling rendering the audio data via a sound device or the video data via a screen, processing the downstream and the upstream so as to replace the first AV data by a predetermined pattern before transmitting the downstream to the dockee device; and after receiving the upstream from the dockee device and before rendering via said AV rendering function; recognizing the predetermined pattern in the second AV data, and replacing the second AV data corresponding to said recognized pattern by at least part of the first AV data. 14. The method of claim 13, the method comprising:
cooperating with the remote client unit for enabling said AV rendering function, generating of the upstream, and including, as part of the upstream, the predetermined pattern into the second AV data. 15. (canceled) 16. A non-transitory computer-readable medium having one or more executable instructions stored thereon, which when executed by a processor, cause the processor to perform a method for carrying out a wireless docking for a host device in a wireless docking system, the method comprising:
providing at least one AV rendering function to a remote client for enabling rendering the audio data via a sound device or the video data via a screen; processing the downstream and the upstream so as to replace the first AV data by a predetermined pattern before transmitting the downstream to the dockee device; and
after receiving the upstream from the dockee device and before rendering via said AV rendering function;
recognizing the predetermined pattern in the second AV data; and
replacing the second AV data corresponding to said recognized pattern by at least part of the first AV data. | A host device ( 300 ) provides wireless docking to a dockee device ( 250 ). The host has a remote client unit ( 210 ) for providing at least one audio/video (AV) rendering function to an application ( 252 ) via remote server unit ( 251 ) in a dockee device. The host has transfer units ( 211,212,213,214 ) arranged for enabling transmitting a downstream of first AV data to the dockee device, and receiving an upstream of second AV data to be rendered from the dockee device. The host device has an AV router ( 310 ) for processing the downstream and the upstream so as to replace first AV data in the downstream by a predetermined pattern before transmitting the downstream to the dockee device, and replace at least part of second AV data in the upstream by at least part of the first AV data after receiving the upstream from the dockee device and before rendering the AV data. Advantageously required bandwidth is reduced in the wireless communication.1. A wireless docking system comprising a host device and a dockee device, the host device being configured for rendering audio or video (AV) data, the host device comprising:
a host communication unit for accommodating wireless communication, a remote client unit for providing at least one AV rendering function to a remote client for enabling rendering the audio data via a sound device or the video data via a screen, at least one transfer unit configured for enabling transmitting a downstream of first AV data to the dockee device, and receiving an upstream of second AV data to be rendered from the dockee device, the dockee device comprising: a dockee communication unit for accommodating the wireless communication, a remote server unit for cooperating with the remote client unit for enabling said AV rendering function, an application unit for receiving of the downstream and for generating of the upstream, and the host device further comprising an AV router configured for processing the downstream and the upstream so as to replace the first AV data by a predetermined pattern before transmitting the downstream to the dockee device, and replace at least part of the second AV data by at least part of the first AV data after receiving the upstream from the dockee device and before rendering via said AV rendering function, wherein the dockee device is configured for including, as part of the upstream, the predetermined pattern into the second AV data, and the AV router is configured for recognizing the predetermined pattern in the second AV data, and replacing the second AV data corresponding to said recognized pattern by at least part of the first AV data. 2. A host device for use in a wireless docking system, the host device being configured for rendering audio or video (AV) data, the host device comprising:
a host communication unit for accommodating wireless communication, a remote client unit for providing at least one AV rendering function to a remote client for enabling rendering the audio data via a sound device or the video data via a screen, at least one transfer unit configured for enabling transmitting a downstream of first AV data to the dockee device, and receiving an upstream of second AV data to be rendered from the dockee device, and an AV router configured for processing the downstream and the upstream so as to replace the first AV data by a predetermined pattern before transmitting the downstream to the dockee device, and replace at least part of the second AV data by at least part of the first AV data after receiving the upstream from the dockee device and before rendering via said AV rendering function, wherein the AV router is configured for recognizing the predetermined pattern in the second AV data, and replacing the second AV data corresponding to said recognized pattern by at least part of the first AV data. 3. The device as claimed in claim 2, wherein the AV router is configured for receiving rendering commands from the dockee device, the rendering commands being indicative of said replacing at least part of the second AV data by at least part of the first AV data. 4. The device as claimed in claim 3, wherein the rendering commands comprise video rendering commands, comprising at least one of:
a command indicative of an area of the screen for rendering the first AV data; a command indicative of an area of the first AV data to be rendered on the screen; a command indicative of an AV pattern in the second AV data indicative of the rendering area; a command indicative of a reference point for rendering the first AV data; a command indicative of a visual marker to be detected in the second AV data for positioning the first AV data; a command indicative of an indicator for selecting a predefined location for rendering the first AV data. 5. The device as claimed in claim 3, wherein the rendering commands comprise graphical rendering commands, comprising at least one of:
a command indicative of a graphical element to be rendered; a command indicative of a location of rendering a graphical element; a command indicative of a time indication for starting, stopping or temporarily displaying a graphical element; a command indicative of a graphical user interface for enabling interaction with a user; a command indicative of graphical control elements for enabling control via a user action. 6. The device as claimed in claim 3, wherein the rendering commands comprise audio rendering commands, comprising at least one of:
a command indicative of a gain factor for rendering audio data of the first AV data; a command indicative of a mixing ratio for rendering a combination of audio data of the first AV data and audio data of the second AV data. 7. The device as claimed in claim 1, wherein the AV router is configured for
recognizing the predetermined pattern in a patterned area of display output data in the second AV data, replacing the second AV data in the patterned area by a corresponding amount of the first AV data. 8. The device as claimed in claim 7, wherein the AV router is configured for:
recognizing in the patterned area, a scaling and/or cropping of the predetermined pattern, processing the first AV data corresponding to the scaling and/or cropping, and replacing the second AV data in the patterned area by a corresponding amount of the processed first AV data. 9. The device as claimed in claim 1, wherein the predetermined pattern includes at least one of:
a video pattern having a single background color; a video pattern having one or more geometrical objects; a video pattern having a single foreground color; an audio pattern of silence; an audio pattern of a sequence of predetermined sounds; temporal information, and the AV router is configured for using the temporal information to determine a delay between sending the first data in the downstream and receiving the second AV data in the upstream, and delaying AV content from the AV input device according to the determined delay before said replacing so as to synchronize the AV data to be rendered. 10. A dockee device for use in a wireless docking system, the dockee device comprising:
a dockee communication unit for accommodating said wireless communication, a remote server unit for cooperating with the remote client unit for enabling said AV rendering function, an application unit for receiving of the downstream and for generating of the upstream, wherein the dockee device is configured for including, as part of the upstream, the predetermined pattern into the second AV data. 11. The dockee device as claimed in claim 10, wherein the dockee device further comprises a dockee router for routing and processing the upstream and the downstream in the dockee device, and for communicating with the AV router so as to exchange routing control commands so as to determine the AV routing as supported by the host device. 12. The dockee device as claimed in claim 11, wherein the dockee router comprises:
a virtual webcam driver to provide to a dockee application a first predetermined pattern; or a virtual external AV driver to provide to a dockee application a second predetermined pattern. 13. A method of wireless docking for a host device in a wireless docking system, the method comprising:
providing at least one AV rendering function to a remote client for enabling rendering the audio data via a sound device or the video data via a screen, processing the downstream and the upstream so as to replace the first AV data by a predetermined pattern before transmitting the downstream to the dockee device; and after receiving the upstream from the dockee device and before rendering via said AV rendering function; recognizing the predetermined pattern in the second AV data, and replacing the second AV data corresponding to said recognized pattern by at least part of the first AV data. 14. The method of claim 13, the method comprising:
cooperating with the remote client unit for enabling said AV rendering function, generating of the upstream, and including, as part of the upstream, the predetermined pattern into the second AV data. 15. (canceled) 16. A non-transitory computer-readable medium having one or more executable instructions stored thereon, which when executed by a processor, cause the processor to perform a method for carrying out a wireless docking for a host device in a wireless docking system, the method comprising:
providing at least one AV rendering function to a remote client for enabling rendering the audio data via a sound device or the video data via a screen; processing the downstream and the upstream so as to replace the first AV data by a predetermined pattern before transmitting the downstream to the dockee device; and
after receiving the upstream from the dockee device and before rendering via said AV rendering function;
recognizing the predetermined pattern in the second AV data; and
replacing the second AV data corresponding to said recognized pattern by at least part of the first AV data. | 2,400 |
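The wireless-docking claims above describe a chroma-key-like bandwidth optimization: the host withholds the first AV data and sends only a predetermined pattern downstream; the dockee composites that pattern into its output; the host's AV router then recognizes the pattern in the upstream and substitutes the withheld pixels before rendering. A toy sketch, treating a frame as a list of rows of RGB tuples; the single-color key pattern is one of the options named in claim 9, and all function names are illustrative.

```python
# Hypothetical key color standing in for the "video pattern having a
# single background color" of claim 9.
PATTERN_COLOR = (0, 255, 0)

def substitute_downstream(frame):
    """Host side: replace the first AV data by the predetermined pattern
    before transmitting the downstream, so the real pixels never cross
    the wireless link."""
    return [[PATTERN_COLOR for _ in row] for row in frame]

def compose_upstream(upstream_frame, first_av_frame):
    """AV router: recognize the predetermined pattern in the upstream
    second AV data and replace the matching pixels by the withheld first
    AV data before rendering (mirroring claims 2 and 7)."""
    return [
        [first_av_frame[y][x] if pixel == PATTERN_COLOR else pixel
         for x, pixel in enumerate(row)]
        for y, row in enumerate(upstream_frame)]
```

The dockee is free to scale or overlay its own UI on top of the pattern; only pixels still matching the key are swapped back, so dockee-drawn graphics survive the substitution.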
7,521 | 7,521 | 14,116,955 | 2,413 | A method in a first radio communication node ( 110, 310, 710, 1010 ) and a first radio communication node ( 110, 310, 710, 1010 ) for scheduling a data transmission in a first time frame using one of a plurality of modulation and coding schemes are provided. The data transmission is to be transmitted between the first radio communication node ( 110, 310, 710, 1010 ) and a second radio communication node ( 120, 320, 720, 1020 ). The first radio communication node ( 110, 310, 710, 1010 ) obtains ( 301, 701, 1001, 1401 ) a first indication about channel quality for the first time frame. The first radio communication node ( 110, 310, 710, 1010 ) obtains ( 302, 702, 1002, 1402 ) second indication about a possible upcoming transmission failure. The possible upcoming transmission failure relates to a feedback information to be transmitted in a second time frame. The feedback information is associated with the data transmission in the first time frame. The first radio communication node ( 110, 310, 710, 1010 ) selects ( 303, 703, 1003, 1403 ) a modulation and coding scheme out of said plurality of modulation and coding schemes based on the first indication and the second indication. Next, the first radio communication node ( 110, 310, 710, 1010 ) schedules ( 304, 704, 1004, 1404 ) the data transmission using the selected modulation and coding scheme. | 1-14. (canceled) 15. A method in a first radio communication node for scheduling a data transmission in a first time frame, using one of a plurality of modulation and coding schemes, wherein the data transmission is to be transmitted between the first radio communication node and a second radio communication node, wherein the first radio communication node and the second radio communication node are comprised in a radio communication system, the method comprising:
obtaining a first indication about channel quality for the first time frame; obtaining a second indication about a possible upcoming transmission failure in a second time frame, wherein the possible upcoming transmission failure relates to feedback information to be transmitted in the second time frame, wherein the feedback information is associated with the data transmission in the first time frame; selecting a modulation and coding scheme out of said plurality of modulation and coding schemes, based on the first indication and the second indication; and scheduling the data transmission in the first time frame using the selected modulation and coding scheme. 16. The method of claim 15, wherein the second radio communication node is a user equipment, and wherein the obtaining of the first indication about channel quality comprises receiving the first indication from the user equipment, wherein the first indication comprises a channel quality index (CQI) information or signal-to-interference-ratio (SIR) information. 17. The method of claim 16, wherein the obtaining of the second indication is performed by:
predicting that the second time frame for transmission of the feedback information occurs during transmission of a further data transmission to a third radio communication node, and wherein the action of selecting the modulation and coding scheme is performed when the transmission of the feedback information is predicted to occur during the transmission of the further data transmission. 18. The method of claim 17, wherein the first radio communication node is a relay node, the second radio communication node is a user equipment, the third radio communication node is a radio network node comprised in the radio communication system, and the feedback information is to be transmitted by the user equipment. 19. The method of claim 18, wherein the first radio communication node is a user equipment, wherein the third radio communication node is a radio network node, wherein the feedback information is to be sent by the second radio communication node, and wherein the third radio communication node is capable of communicating with the user equipment using a different radio communication technology than the radio communication technology used by the radio communication system. 20. The method of claim 15, wherein a first block-error-rate (BLER) is smaller than a second block-error-rate (BLER) and wherein the selecting comprises:
selecting, if the second indication is below a channel quality threshold value, the modulation and coding scheme to target the first BLER for the data transmission, or selecting, if the second indication is above the channel quality threshold value, the modulation and coding scheme to target the second BLER for the data transmission. 21. The method of claim 20, wherein the obtaining of the second indication about a possible upcoming transmission failure is performed by:
predicting that the second time frame for transmission of the feedback information occurs during a measurement gap period, wherein the user equipment is capable of measuring channel quality towards neighboring radio network nodes during the measurement gap period, wherein the neighboring radio network nodes are neighbors to the first radio communication node; and wherein the selecting the modulation and coding scheme is performed when the transmission of the feedback information is predicted to occur during the measurement gap period. 22. The method of claim 21, wherein the obtaining of the second indication is performed by:
predicting that the second time frame for transmission of the feedback information occurs during transmission of a further data transmission to a third radio communication node, and wherein the action of selecting the modulation and coding scheme is performed when the transmission of the feedback information is predicted to occur during the transmission of the further data transmission. 23. The method of claim 22, wherein the first radio communication node is a relay node, the second radio communication node is a user equipment, the third radio communication node is a radio network node comprised in the radio communication system, and the feedback information is to be transmitted by the user equipment. 24. The method of claim 22, wherein the first radio communication node is a user equipment, wherein the third radio communication node is a radio network node, wherein the feedback information is to be sent by the second radio communication node, and wherein the third radio communication node is capable of communicating with the user equipment using a different radio communication technology than the radio communication technology used by the radio communication system. 25. A first radio communication node for scheduling a data transmission in a first time frame using one of a plurality of modulation and coding schemes, wherein the data transmission is to be transmitted between the first radio communication node and a second radio communication node, wherein the first and second radio communication nodes are comprised in a radio communication system, and wherein the first radio communication node comprises a processing circuit configured to:
obtain a first indication about channel quality for the first time frame; obtain a second indication about a possible upcoming transmission failure in a second time frame, wherein the possible upcoming transmission failure relates to feedback information to be transmitted in the second time frame, wherein the feedback information is associated with the data transmission in the first time frame; select a modulation and coding scheme out of said plurality of modulation and coding schemes based on the first indication and the second indication; and schedule the data transmission in the first time frame using the selected modulation and coding scheme. 26. The first radio communication node of claim 25, wherein the second radio communication node is a user equipment, wherein the first radio communication node comprises a receiver configured to receive the first indication from the user equipment, and wherein the first indication comprises a channel quality index (CQI) information, or a signal-to-interference-ratio (SIR) information. 27. The first radio communication node of claim 25, wherein a first block-error-rate (BLER) is smaller than a second block-error-rate (BLER) and wherein the processing circuit is configured to:
select, if the second indication is below a channel quality threshold value, the modulation and coding scheme to target the first BLER for the data transmission; or select, if the second indication is above the channel quality threshold value, the modulation and coding scheme to target the second BLER for the data transmission. 28. The first radio communication node of claim 25, wherein the processing circuit further is configured to predict when the second time frame for transmission of the feedback information occurs during a measurement gap period, wherein the user equipment is capable of measuring channel quality towards neighboring radio network nodes during the measurement gap period, wherein the neighboring radio network nodes are neighbors to the first radio communication node; and to perform the selection of the modulation and coding scheme when the transmission of the feedback information occurs during the measurement gap period. 29. The first radio communication node of claim 25, wherein the processing circuit further is configured to predict that the second time frame for transmission of the feedback information occurs during transmission of a further data transmission to a third radio communication node; and to perform the selection of the modulation and coding scheme when the transmission of the feedback information is predicted to occur during the transmission of the further data transmission. 30. The first radio communication node of claim 29, wherein the first radio communication node is a relay node, the second radio communication node is a user equipment, the third radio communication node is a radio network node comprised in the radio communication system, and the user equipment is configured to send the feedback information. 31. 
The first radio communication node of claim 29, wherein the first radio communication node is a user equipment, wherein the third radio communication node is a radio network node, wherein the second radio communication node is configured to send the feedback information, and wherein the third radio communication node is configured for communicating with the user equipment using a different radio communication technology than the radio communication technology used by the radio communication system. | A method in a first radio communication node ( 110, 310, 710, 1010 ) and a first radio communication node ( 110, 310, 710, 1010 ) for scheduling a data transmission in a first time frame using one of a plurality of modulation and coding schemes are provided. The data transmission is to be transmitted between the first radio communication node ( 110, 310, 710, 1010 ) and a second radio communication node ( 120, 320, 720, 1020 ). The first radio communication node ( 110, 310, 710, 1010 ) obtains ( 301, 701, 1001, 1401 ) a first indication about channel quality for the first time frame. The first radio communication node ( 110, 310, 710, 1010 ) obtains ( 302, 702, 1002, 1402 ) a second indication about a possible upcoming transmission failure. The possible upcoming transmission failure relates to feedback information to be transmitted in a second time frame. The feedback information is associated with the data transmission in the first time frame. The first radio communication node ( 110, 310, 710, 1010 ) selects ( 303, 703, 1003, 1403 ) a modulation and coding scheme out of said plurality of modulation and coding schemes based on the first indication and the second indication. Next, the first radio communication node ( 110, 310, 710, 1010 ) schedules ( 304, 704, 1004, 1404 ) the data transmission using the selected modulation and coding scheme. 1-14. (canceled) 15. 
A method in a first radio communication node for scheduling a data transmission in a first time frame, using one of a plurality of modulation and coding schemes, wherein the data transmission is to be transmitted between the first radio communication node and a second radio communication node, wherein the first radio communication node and the second radio communication node are comprised in a radio communication system, the method comprising:
obtaining a first indication about channel quality for the first time frame; obtaining a second indication about a possible upcoming transmission failure in a second time frame, wherein the possible upcoming transmission failure relates to feedback information to be transmitted in the second time frame, wherein the feedback information is associated with the data transmission in the first time frame; selecting a modulation and coding scheme out of said plurality of modulation and coding schemes, based on the first indication and the second indication; and scheduling the data transmission in the first time frame using the selected modulation and coding scheme. 16. The method of claim 15, wherein the second radio communication node is a user equipment, and wherein the obtaining of the first indication about channel quality comprises receiving the first indication from the user equipment, wherein the first indication comprises a channel quality index (CQI) information or signal-to-interference-ratio (SIR) information. 17. The method of claim 16, wherein the obtaining of the second indication is performed by:
predicting that the second time frame for transmission of the feedback information occurs during transmission of a further data transmission to a third radio communication node, and wherein the action of selecting the modulation and coding scheme is performed when the transmission of the feedback information is predicted to occur during the transmission of the further data transmission. 18. The method of claim 17, wherein the first radio communication node is a relay node, the second radio communication node is a user equipment, the third radio communication node is a radio network node comprised in the radio communication system, and the feedback information is to be transmitted by the user equipment. 19. The method of claim 18, wherein the first radio communication node is a user equipment, wherein the third radio communication node is a radio network node, wherein the feedback information is to be sent by the second radio communication node, and wherein the third radio communication node is capable of communicating with the user equipment using a different radio communication technology than the radio communication technology used by the radio communication system. 20. The method of claim 15, wherein a first block-error-rate (BLER) is smaller than a second block-error-rate (BLER) and wherein the selecting comprises:
selecting, if the second indication is below a channel quality threshold value, the modulation and coding scheme to target the first BLER for the data transmission, or selecting, if the second indication is above the channel quality threshold value, the modulation and coding scheme to target the second BLER for the data transmission. 21. The method of claim 20, wherein the obtaining of the second indication about a possible upcoming transmission failure is performed by:
predicting that the second time frame for transmission of the feedback information occurs during a measurement gap period, wherein the user equipment is capable of measuring channel quality towards neighboring radio network nodes during the measurement gap period, wherein the neighboring radio network nodes are neighbors to the first radio communication node; and wherein the selecting the modulation and coding scheme is performed when the transmission of the feedback information is predicted to occur during the measurement gap period. 22. The method of claim 21, wherein the obtaining of the second indication is performed by:
predicting that the second time frame for transmission of the feedback information occurs during transmission of a further data transmission to a third radio communication node, and wherein the action of selecting the modulation and coding scheme is performed when the transmission of the feedback information is predicted to occur during the transmission of the further data transmission. 23. The method of claim 22, wherein the first radio communication node is a relay node, the second radio communication node is a user equipment, the third radio communication node is a radio network node comprised in the radio communication system, and the feedback information is to be transmitted by the user equipment. 24. The method of claim 22, wherein the first radio communication node is a user equipment, wherein the third radio communication node is a radio network node, wherein the feedback information is to be sent by the second radio communication node, and wherein the third radio communication node is capable of communicating with the user equipment using a different radio communication technology than the radio communication technology used by the radio communication system. 25. A first radio communication node for scheduling a data transmission in a first time frame using one of a plurality of modulation and coding schemes, wherein the data transmission is to be transmitted between the first radio communication node and a second radio communication node, wherein the first and second radio communication nodes are comprised in a radio communication system, and wherein the first radio communication node comprises a processing circuit configured to:
obtain a first indication about channel quality for the first time frame; obtain a second indication about a possible upcoming transmission failure in a second time frame, wherein the possible upcoming transmission failure relates to feedback information to be transmitted in the second time frame, wherein the feedback information is associated with the data transmission in the first time frame; select a modulation and coding scheme out of said plurality of modulation and coding schemes based on the first indication and the second indication; and schedule the data transmission in the first time frame using the selected modulation and coding scheme. 26. The first radio communication node of claim 25, wherein the second radio communication node is a user equipment, wherein the first radio communication node comprises a receiver configured to receive the first indication from the user equipment, and wherein the first indication comprises channel quality index (CQI) information or signal-to-interference-ratio (SIR) information. 27. The first radio communication node of claim 25, wherein a first block-error-rate (BLER) is smaller than a second block-error-rate (BLER) and wherein the processing circuit is configured to:
select, if the second indication is below a channel quality threshold value, the modulation and coding scheme to target the first BLER for the data transmission; or select, if the second indication is above the channel quality threshold value, the modulation and coding scheme to target the second BLER for the data transmission. 28. The first radio communication node of claim 25, wherein the processing circuit further is configured to predict when the second time frame for transmission of the feedback information occurs during a measurement gap period, wherein the user equipment is capable of measuring channel quality towards neighboring radio network nodes during the measurement gap period, wherein the neighboring radio network nodes are neighbors to the first radio communication node; and to perform the selection of the modulation and coding scheme when the transmission of the feedback information occurs during the measurement gap period. 29. The first radio communication node of claim 25, wherein the processing circuit further is configured to predict that the second time frame for transmission of the feedback information occurs during transmission of a further data transmission to a third radio communication node; and to perform the selection of the modulation and coding scheme when the transmission of the feedback information is predicted to occur during the transmission of the further data transmission. 30. The first radio communication node of claim 29, wherein the first radio communication node is a relay node, the second radio communication node is a user equipment, the third radio communication node is a radio network node comprised in the radio communication system, and the user equipment is configured to send the feedback information. 31. 
The first radio communication node of claim 29, wherein the first radio communication node is a user equipment, wherein the third radio communication node is a radio network node, wherein the second radio communication node is configured to send the feedback information, and wherein the third radio communication node is configured for communicating with the user equipment using a different radio communication technology than the radio communication technology used by the radio communication system. | 2,400 |
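The MCS-selection rule in claims 20 and 27 above can be sketched in code: two BLER targets exist, the first smaller than the second, and when the second indication (the risk that the feedback for this transmission will fail) falls below the threshold, the scheduler targets the smaller BLER, i.e. picks a more robust modulation and coding scheme. This is a minimal illustrative sketch, not the patented implementation; all names, the BLER values, and the CQI-to-MCS back-off are hypothetical assumptions.

```python
# Hypothetical sketch of the selection rule in claims 20/27.
# BLER targets: the first is smaller (more conservative) than the second.
BLER_FIRST = 0.01   # targeted when the feedback transmission is at risk
BLER_SECOND = 0.10  # targeted in normal operation

def select_mcs(cqi: int, second_indication: float, threshold: float) -> int:
    """Select an MCS index from the first indication (a CQI) and the
    second indication (a score for a possible upcoming feedback failure)."""
    if second_indication < threshold:
        bler_target = BLER_FIRST   # feedback at risk: target the smaller BLER
    else:
        bler_target = BLER_SECOND
    # Toy CQI-to-MCS mapping: a smaller BLER target backs off to a more
    # robust (lower-index) MCS; the back-off of 2 steps is illustrative.
    backoff = 2 if bler_target == BLER_FIRST else 0
    return max(0, cqi - backoff)
```

Targeting the smaller BLER when the ACK/NACK may be lost (e.g. during a measurement gap, claims 21 and 28) trades throughput for a lower chance that an unreported decoding failure goes uncorrected.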
7,522 | 7,522 | 14,348,107 | 2,488 | Said method comprises using a processing means for quantizing a transform of a residual of an intra-prediction of the image block using a quantization parameter value and encoding the quantized transform, further using a processing means for determining that an already used quantization parameter value and corresponding coding cost are available in a storage means, the already used quantization parameter value being already used for quantizing of a further image block close resembling the image block or having a same complexity as the image block and using the already used quantization parameter value, the corresponding coding cost and a target bit rate for determining the quantization parameter value. Thus, the quantization parameter value can be determined such that flickering artifacts are avoided. | 1-10. (canceled) 11. Method for intra-encoding of an image block, comprising
quantizing a transform of a residual of an intra-prediction of the image block using a quantization parameter value and encoding the quantized transform, and using an already used quantization parameter value, the corresponding coding cost and a target bit rate for determining the quantization parameter value, said already used quantization parameter value and corresponding costs being already used for quantizing of a further image block close resembling the image block to be encoded or having a same complexity as the image block to be encoded. 12. Method of claim 11 further comprising determining whether the corresponding coding cost exceeds the target bit rate and determining the quantization parameter value not larger than the already used quantization parameter value in case the corresponding coding cost does not exceed the target bit rate and determining the quantization parameter value larger than the already used quantization parameter value in case the corresponding coding cost exceeds the target bit rate. 13. Method of claim 11 wherein a further already used quantization parameter value and corresponding further coding cost are further available and wherein, of said already used quantization parameter values available, one is corresponding to coding cost exceeding the target bit rate and the other is corresponding to coding cost not exceeding the target bit rate, the method further comprising determining the quantization parameter value larger than the one already used quantization parameter value and not larger than the other already used quantization parameter value. 14. Method of claim 13 wherein the already used quantization parameter value and the further already used quantization parameter value differ by one. 15. Method of claim 11, further comprising storing the encoded quantized transform on a non-transitory storage medium. 16. Device for intra-encoding of an image block, comprising
quantizing means for quantization of a transform of a residual of an intra-prediction of the image block using a quantization parameter value and encoding means for encoding the quantized transform, further comprising storage means for storing an already used quantization parameter value and corresponding coding cost, the already used quantization parameter value being already used for quantizing of at least one further image block close resembling the image block or having a same complexity as the image block, and determining means for determining that the already used quantization parameter value and corresponding coding cost are stored, wherein the device comprises processing means adapted for using the already used quantization parameter value, the corresponding coding cost and a target bit rate for determining the quantization parameter value. 17. Device of claim 16 wherein the storage means are adapted for storing a further already used quantization parameter value and corresponding further coding cost, the processing means being adapted for further using the further already used quantization parameter value and the further corresponding coding cost for determining the quantization parameter value. 18. Non-transitory storage medium carrying instructions of program code for executing steps of the method according to claim 15, when said program is executed on a computing device. 
| Said method comprises using a processing means for quantizing a transform of a residual of an intra-prediction of the image block using a quantization parameter value and encoding the quantized transform, further using a processing means for determining that an already used quantization parameter value and corresponding coding cost are available in a storage means, the already used quantization parameter value being already used for quantizing of a further image block close resembling the image block or having a same complexity as the image block and using the already used quantization parameter value, the corresponding coding cost and a target bit rate for determining the quantization parameter value. Thus, the quantization parameter value can be determined such that flickering artifacts are avoided. 1-10. (canceled) 11. Method for intra-encoding of an image block, comprising
quantizing a transform of a residual of an intra-prediction of the image block using a quantization parameter value and encoding the quantized transform, and using an already used quantization parameter value, the corresponding coding cost and a target bit rate for determining the quantization parameter value, said already used quantization parameter value and corresponding costs being already used for quantizing of a further image block close resembling the image block to be encoded or having a same complexity as the image block to be encoded. 12. Method of claim 11 further comprising determining whether the corresponding coding cost exceeds the target bit rate and determining the quantization parameter value not larger than the already used quantization parameter value in case the corresponding coding cost does not exceed the target bit rate and determining the quantization parameter value larger than the already used quantization parameter value in case the corresponding coding cost exceeds the target bit rate. 13. Method of claim 11 wherein a further already used quantization parameter value and corresponding further coding cost are further available and wherein, of said already used quantization parameter values available, one is corresponding to coding cost exceeding the target bit rate and the other is corresponding to coding cost not exceeding the target bit rate, the method further comprising determining the quantization parameter value larger than the one already used quantization parameter value and not larger than the other already used quantization parameter value. 14. Method of claim 13 wherein the already used quantization parameter value and the further already used quantization parameter value differ by one. 15. Method of claim 11, further comprising storing the encoded quantized transform on a non-transitory storage medium. 16. Device for intra-encoding of an image block, comprising
quantizing means for quantization of a transform of a residual of an intra-prediction of the image block using a quantization parameter value and encoding means for encoding the quantized transform, further comprising storage means for storing an already used quantization parameter value and corresponding coding cost, the already used quantization parameter value being already used for quantizing of at least one further image block close resembling the image block or having a same complexity as the image block, and determining means for determining that the already used quantization parameter value and corresponding coding cost are stored, wherein the device comprises processing means adapted for using the already used quantization parameter value, the corresponding coding cost and a target bit rate for determining the quantization parameter value. 17. Device of claim 16 wherein the storage means are adapted for storing a further already used quantization parameter value and corresponding further coding cost, the processing means being adapted for further using the further already used quantization parameter value and the further corresponding coding cost for determining the quantization parameter value. 18. Non-transitory storage medium carrying instructions of program code for executing steps of the method according to claim 15, when said program is executed on a computing device. | 2,400 |
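The quantization-parameter rule in claims 11-14 above can be sketched as follows: the QP and coding cost recorded for a closely resembling (or equally complex) block are reused, and the new QP is stepped against the target bit rate; with two recorded QPs whose costs bracket the target and which differ by one, the bracketing pins the choice. This is a minimal sketch under stated assumptions, not the patented implementation; all names and the single-step adjustment are hypothetical.

```python
# Hypothetical sketch of the QP-selection rule in claims 11-14.

def determine_qp(used_qp: int, used_cost: float, target_bits: float) -> int:
    """Claim 12: choose a QP larger than the already used one if its
    recorded coding cost exceeded the target bit rate, otherwise a QP
    not larger than it (a one-step adjustment is assumed here)."""
    if used_cost > target_bits:
        return used_qp + 1   # coarser quantization -> fewer bits
    return used_qp           # "not larger than" the already used QP

def determine_qp_bracketed(qp_over: int, cost_over: float,
                           qp_under: int, cost_under: float,
                           target_bits: float) -> int:
    """Claims 13-14: given two already used QPs differing by one, where
    cost_over exceeds the target and cost_under does not, the new QP must
    be larger than qp_over and not larger than qp_under."""
    assert qp_under == qp_over + 1          # claim 14: QPs differ by one
    assert cost_over > target_bits >= cost_under
    return qp_under
```

Reusing the QP of a resembling block keeps the quantization of successive intra frames consistent, which is how the abstract's claimed avoidance of flickering artifacts arises.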
7,523 | 7,523 | 13,104,932 | 2,441 | Embodiments include systems and methods comprising a gateway located at a premise forming at least one network on the premise that includes a plurality of premise devices. A sensor user interface (SUI) is coupled to the gateway and presented to a user via a remote device. The SUI includes at least one display element. The at least one display element includes a floor plan display that represents at least one floor of the premise. The floor plan display visually and separately indicates a location and a current state of each premise device of the plurality of premise devices. | 1. A system comprising:
a gateway located at a premise and forming at least one network on the premise that includes a plurality of premise devices; and a sensor user interface (SUI) coupled to the gateway and presented to a user via a remote device, wherein the SUI includes at least one display element, wherein the at least one display element includes a floor plan display that represents at least one floor of the premise, wherein the floor plan display visually and separately indicates a location and a current state of each premise device of the plurality of premise devices. 2. A system comprising:
a security network comprising a gateway at a premise coupled to a security system that includes security system components located at the premise, wherein the gateway is coupled to a remote network; a subnetwork coupled to the gateway, wherein the subnetwork comprises a plurality of premise devices located at the premise; and a sensor user interface (SUI) coupled to the gateway and presented to a user via a plurality of remote client devices, wherein the SUI includes at least one display element, wherein the at least one display element includes a floor plan display that represents at least one floor of the premise, wherein the at least one display element includes a plurality of system icons displayed on the floor plan display and representing location and current status of the security system components and the plurality of premise devices. 3. The system of claim 2, wherein the floor plan display comprises a color that visually indicates a state of the security system. 4. The system of claim 3, wherein the color is a background color. 5. The system of claim 3, wherein the color is a color of a border of the at least one floor displayed on the floor plan display. 6. The system of claim 2, wherein each color of a plurality of colors represents a respective state of a plurality of states of the security system. 7. The system of claim 2, wherein the SUI comprises text presented with the floor plan display. 8. The system of claim 7, wherein the text comprises a text description of the state of the security system. 9. The system of claim 7, wherein the text comprises a plurality of phrases. 10. The system of claim 9, wherein the text comprises a first phrase describing an arm state of the security system. 11. The system of claim 10, wherein the text comprises a second phrase describing a sensor status of at least one sensor of the security system components. 12. The system of claim 2, wherein the SUI includes a security button that enables control of the security system. 13. 
The system of claim 12, wherein the security button enables arming of the security system. 14. The system of claim 12, wherein the security button enables disarming of the security system. 15. The system of claim 2, wherein the floor plan display comprises a plurality of tiles. 16. The system of claim 15, wherein at least one rendered state of the security system components and the plurality of premise devices comprises the plurality of tiles. 17. The system of claim 16, wherein the at least one rendered state comprises an alarmed state. 18. The system of claim 16, wherein the at least one rendered state comprises an offline state. 19. The system of claim 2, wherein the at least one floor comprises a plurality of floors. 20. The system of claim 19, wherein the SUI comprises a plurality of floor icons, wherein each floor icon corresponds to one of the plurality of floors. 21. The system of claim 19, wherein the plurality of floors corresponds to a number of floors of the premise. 22. The system of claim 19, wherein the plurality of floors corresponds to at least one floor of the premise and at least one floor of an outbuilding corresponding to the premise. 23. The system of claim 2, wherein the plurality of system icons includes a plurality of sensor icons, wherein each sensor icon represents a location and a state of a security system component corresponding to the sensor icon. 24. The system of claim 23, wherein the security system component is a door sensor. 25. The system of claim 23, wherein the security system component is a window sensor. 26. The system of claim 23, wherein the security system component is a motion sensor. 27. The system of claim 23, wherein the security system component is a fire sensor. 28. The system of claim 23, wherein the security system component is a smoke sensor. 29. The system of claim 23, wherein the security system component is a glass-break sensor. 30. 
The system of claim 23, wherein the security system component is a flood sensor. 31. The system of claim 23, wherein the state comprises an alarmed state. 32. The system of claim 23, wherein the state comprises a tripped state. 33. The system of claim 23, wherein the state comprises a tampered state. 34. The system of claim 23, wherein the state comprises a low-battery state. 35. The system of claim 23, wherein the state comprises an offline state. 36. The system of claim 23, wherein the state comprises an unknown state. 37. The system of claim 23, wherein the state comprises an installing state. 38. The system of claim 23, wherein the state comprises an open door state. 39. The system of claim 23, wherein the state comprises an open window state. 40. The system of claim 23, wherein the state comprises a motion sensor active state. 41. The system of claim 23, wherein the state comprises a quiet state. 42. The system of claim 41, wherein the quiet state comprises an inactive state. 43. The system of claim 41, wherein the quiet state comprises a closed state. 44. The system of claim 41, wherein the quiet state comprises an untriggered state. 45. The system of claim 41, wherein the quiet state comprises an untripped state. 46. The system of claim 23, wherein the plurality of system icons includes a plurality of device icons, wherein each device icon represents a location and a state of a premise device corresponding to the device icon. 47. The system of claim 46, wherein the premise device is a light. 48. The system of claim 46, wherein the premise device is a thermostat. 49. The system of claim 46, wherein the premise device is a camera. 50. The system of claim 46, wherein the premise device is a lock. 51. The system of claim 46, wherein the premise device is an energy device. 52. The system of claim 46, wherein the state comprises an installing state. 53. The system of claim 46, wherein the state comprises an active state. 54. 
The system of claim 46, wherein the state comprises a quiet state. 55. The system of claim 46, comprising a popup display that is displayed in response to a touch of a system icon of the plurality of system icons. 56. The system of claim 55, wherein the popup display includes a name of the security system component corresponding to the sensor icon that was touched. 57. The system of claim 56, wherein the popup display includes detailed information of the security system component. 58. The system of claim 57, wherein the detailed information comprises text describing a status of the security system component. 59. The system of claim 57, wherein the detailed information comprises data of a last event of the security system component. 60. The system of claim 55, wherein the popup display includes a name of the premise device corresponding to the device icon that was touched. 61. The system of claim 60, wherein the popup display includes a link to information of the premise device. 62. The system of claim 61, wherein the link activates presentation of live video of the premise device when the premise device is a camera. 63. The system of claim 61, wherein the link activates presentation of a control screen comprising controls for the premise device. 64. The system of claim 2, comprising an edit mode, wherein the SUI presents the edit mode for use in generating the floor plan display and placing the plurality of system icons on the floor plan display. 65. The system of claim 64, wherein the edit mode comprises a plurality of floor plans, wherein each floor plan of the plurality of floor plans defines a perimeter shape of a floor and corresponds to a floor plan icon that is selectable by a user for the floor plan display. 66. The system of claim 64, wherein the edit mode presents a grid comprising a plurality of tiles on the floor plan display. 67. The system of claim 66, wherein the edit mode comprises at least one of adding walls and deleting walls. 68. 
The system of claim 67, wherein the edit mode comprises adding a wall on the floor plan display, wherein the adding of the wall comprises forming the wall to have a length and placing the wall at a location on the floor plan display. 69. The system of claim 67, wherein the edit mode comprises deleting at least a portion of a wall from the floor plan display. 70. The system of claim 67, wherein the edit mode comprises placing the plurality of system icons on the floor plan display. 71. The system of claim 70, wherein the edit mode comprises a dock region that includes the plurality of system icons. 72. The system of claim 71, wherein each system icon is dragged from the dock region to a tile of the floor plan display representative of a location in the premise of the security system components and the plurality of premise devices. 73. The system of claim 67, wherein the edit mode differentiates premise exteriors from premise interiors based on a location of a tile. 74. The system of claim 73, wherein the edit mode automatically identifies interior tiles as tiles on a first side of a perimeter wall of the floor plan display and exterior tiles as tiles on a second side of the perimeter wall. 75. The system of claim 74, wherein the edit mode comprises a fill option that renders tiles on the first side of the perimeter wall as filled and renders tiles on the second side of the perimeter wall as transparent. 76. The system of claim 64, wherein the at least one floor comprises a plurality of floors, wherein the edit mode comprises at least one control that controls addition and deletion of floors. 77. The system of claim 76, wherein the at least one control comprises an add selector that controls addition of a floor. 78. The system of claim 76, wherein the at least one control comprises a delete selector that controls deletion of a floor. 79. 
The system of claim 76, wherein the at least one control comprises an add above selector that controls adding a new floor above an existing floor. 80. The system of claim 79, wherein the at least one control comprises an add below selector that controls adding a new floor below an existing floor. 81. The system of claim 2, wherein the at least one display element includes an orb icon that visually indicates an arm state of the security system. 82. The system of claim 81, wherein the orb icon visually indicates a sensor status of at least one sensor of a plurality of sensors, wherein the security system components comprise the plurality of sensors. 83. The system of claim 81, wherein the at least one display element includes orb text presented with the orb icon, wherein the orb text comprises a text description of the arm state, wherein the orb text comprises a first phrase describing the arm state and a second phrase describing a sensor status of at least one sensor of a plurality of sensors, wherein the security system components comprise the plurality of sensors. 84. The system of claim 2, wherein the at least one display element includes at least one system warning that is an informational warning of the security system. 85. The system of claim 84, wherein the at least one display element includes at least one device warning that is an informational warning of at least one of a security system component and a premise device. 86. The system of claim 85, wherein the at least one device warning corresponds to at least one of a camera device, a lighting device, and a thermostat device. 87. The system of claim 85, wherein the at least one system warning and the at least one device warning are disassociated from at least one sensor of a plurality of sensors, wherein the security system components comprise the plurality of sensors. 88. 
The system of claim 85, wherein a plurality of system warnings comprise the at least one system warning and the at least one device warning, wherein the plurality of system warnings are cumulative. 89. The system of claim 2, wherein the at least one display element includes a list of interesting sensors that identifies at least one security system component corresponding to an interesting state. 90. The system of claim 89, wherein the at least one display element comprises a plurality of interesting state icons corresponding to a plurality of interesting states, wherein the interesting state comprises at least one of a triggered state, a tampered state, a tripped state, an offline state, an installing state, a low-battery state, and a bypassed state. 91. The system of claim 89, wherein the at least one display element includes a list of quiet sensors that identifies at least one security system component corresponding to a quiet state, wherein the quiet state comprises at least one of an inactive state, a closed state, an untriggered state, and an untripped state. 92. The system of claim 2, wherein the SUI comprises a summary page, the summary page including an orb icon that visually indicates an arm state of the security system and a security button that enables control of the security system. 93. The system of claim 92, wherein the summary page includes orb text presented with the orb icon, wherein the orb text comprises a text description of an arm state of the security system and a sensor status of at least one sensor of a plurality of sensors, wherein the security system components comprise the plurality of sensors. 94. The system of claim 92, wherein the security button enables arming and disarming of the security system. 95. The system of claim 92, wherein the summary page includes at least one icon representing at least one device warning, wherein the device warning is an information warning of at least one of a premise device. 96. 
The system of claim 92, wherein the summary page comprises at least one icon enabling a transfer of content to and from the remote network, wherein the content includes interactive content in the form of internet widgets. 97. The system of claim 92, wherein the summary page comprises at least one icon enabling communication and control of the premise devices coupled to the subnetwork, and access to live video from a camera, wherein the camera is an Internet Protocol (IP) camera. 98. The system of claim 92, wherein the SUI comprises a sensor status page, wherein the sensor status page includes a set of display elements of the at least one display element, wherein the set of display elements includes a list of interesting sensors that identifies at least one security system component corresponding to an interesting state. 99. The system of claim 98, wherein the set of display elements comprises a plurality of interesting state icons corresponding to a plurality of interesting states. 100. The system of claim 99, wherein the set of display elements includes a list of quiet sensors that identifies at least one security system component corresponding to a quiet state. 101. The system of claim 100, wherein the sensor status page includes at least one system warning that is an informational warning of the security system. 102. The system of claim 2, wherein the subnetwork is formed by the gateway and is external to the gateway. 103. The system of claim 2, wherein the gateway electronically integrates communications and functions of the plurality of premise devices and the security system components into the security network and controls communications between the security system, the subnetwork and the remote network. 104. The system of claim 2, wherein the SUI includes at least one display element for managing and receiving data of the premise devices agnostically across the plurality of remote client devices. 105. 
The system of claim 2, wherein the plurality of remote client devices include at least one of a touchscreen device, a mobile telephone, a cellular telephone, a client device coupled to the gateway via a mobile portal, and a client device coupled to the gateway via a web portal. 106. A system comprising:
a gateway at a premise, the gateway forming a security network with a security system that includes security system components located at a premise and forming a subnetwork that includes a plurality of premise devices located at the premise; and a sensor user interface (SUI) coupled to the gateway and presented to a user via a remote device, wherein the SUI includes at least one display element, wherein the at least one display element includes a floor plan display that represents at least one floor of the premise, wherein the floor plan display visually indicates a state of the security system and location and current status of the security system components and the plurality of premise devices. 107. A system comprising:
a gateway at a premise, the gateway forming a security network with a security system that includes security system components located at a premise and forming a subnetwork that includes a plurality of premise devices located at the premise; and a sensor user interface (SUI) coupled to the gateway and presented to a user via a remote device, wherein the SUI includes at least one display element, wherein the at least one display element includes a floor plan display that represents at least one floor of the premise, wherein the at least one display element includes a plurality of system icons displayed on the floor plan display, wherein the plurality of system icons includes a plurality of sensor icons that each represent a location and a state of a security system component corresponding to the sensor icon, wherein the plurality of system icons includes a plurality of device icons that each represent a location and a state of a premise device corresponding to the device icon. 108. A system comprising:
a gateway at a premise, the gateway forming a security network with a security system that includes security system components located at a premise and forming a subnetwork that includes a plurality of premise devices located at the premise; and a sensor user interface (SUI) coupled to the gateway and presented to a user via a remote device, wherein the SUI includes at least one display element, wherein the at least one display element includes at least one of an orb icon and a floor plan display that represents at least one floor of the premise, wherein the at least one display element visually indicates a state of the security system and location and current status of the security system components and the plurality of premise devices. | Embodiments include systems and methods comprising a gateway located at a premise forming at least one network on the premise that includes a plurality of premise devices. A sensor user interface (SUI) is coupled to the gateway and presented to a user via a remote device. The SUI includes at least one display element. The at least one display element includes a floor plan display that represents at least one floor of the premise. The floor plan display visually and separately indicates a location and a current state of each premise device of the plurality of premise devices.1. A system comprising:
a gateway located at a premise and forming at least one network on the premise that includes a plurality of premise devices; and a sensor user interface (SUI) coupled to the gateway and presented to a user via a remote device, wherein the SUI includes at least one display element, wherein the at least one display element includes a floor plan display that represents at least one floor of the premise, wherein the floor plan display visually and separately indicates a location and a current state of each premise device of the plurality of premise devices. 2. A system comprising:
a security network comprising a gateway at a premise coupled to a security system that includes security system components located at the premise, wherein the gateway is coupled to a remote network; a subnetwork coupled to the gateway, wherein the subnetwork comprises a plurality of premise devices located at the premise; and a sensor user interface (SUI) coupled to the gateway and presented to a user via a plurality of remote client devices, wherein the SUI includes at least one display element, wherein the at least one display element includes a floor plan display that represents at least one floor of the premise, wherein the at least one display element includes a plurality of system icons displayed on the floor plan display and representing location and current status of the security system components and the plurality of premise devices. 3. The system of claim 2, wherein the floor plan display comprises a color that visually indicates a state of the security system. 4. The system of claim 3, wherein the color is a background color. 5. The system of claim 3, wherein the color is a color of a border of the at least one floor displayed on the floor plan display. 6. The system of claim 2, wherein each color of a plurality of colors represents each state of a plurality of states of the security system. 7. The system of claim 2, wherein the SUI comprises text presented with the floor plan display. 8. The system of claim 7, wherein the text comprises a text description of the state of the security system. 9. The system of claim 7, wherein the text comprises a plurality of phrases. 10. The system of claim 9, wherein the text comprises a first phrase describing an arm state of the security system. 11. The system of claim 10, wherein the text comprises a second phrase describing a sensor status of at least one sensor of the security system components. 12. The system of claim 2, wherein the SUI includes a security button that enables control of the security system. 13. 
The system of claim 12, wherein the security button enables arming of the security system. 14. The system of claim 12, wherein the security button enables disarming of the security system. 15. The system of claim 2, wherein the floor plan display comprises a plurality of tiles. 16. The system of claim 15, wherein at least one rendered state of the security system components and the plurality of premise devices comprises the plurality of tiles. 17. The system of claim 16, wherein the at least one rendered state comprises an alarmed state. 18. The system of claim 16, wherein the at least one rendered state comprises an offline state. 19. The system of claim 2, wherein the at least one floor comprises a plurality of floors. 20. The system of claim 19, wherein the SUI comprises a plurality of floor icons, wherein each floor icon corresponds to one of the plurality of floors. 21. The system of claim 19, wherein the plurality of floors corresponds to a number of floors of the premise. 22. The system of claim 19, wherein the plurality of floors corresponds to at least one floor of the premise and at least one floor of an outbuilding corresponding to the premise. 23. The system of claim 2, wherein the plurality of system icons includes a plurality of sensor icons, wherein each sensor icon represents a location and a state of a security system component corresponding to the sensor icon. 24. The system of claim 23, wherein the security system component is a door sensor. 25. The system of claim 23, wherein the security system component is a window sensor. 26. The system of claim 23, wherein the security system component is a motion sensor. 27. The system of claim 23, wherein the security system component is a fire sensor. 28. The system of claim 23, wherein the security system component is a smoke sensor. 29. The system of claim 23, wherein the security system component is a glass-break sensor. 30. 
The system of claim 23, wherein the security system component is a flood sensor. 31. The system of claim 23, wherein the state comprises an alarmed state. 32. The system of claim 23, wherein the state comprises a tripped state. 33. The system of claim 23, wherein the state comprises a tampered state. 34. The system of claim 23, wherein the state comprises a low-battery state. 35. The system of claim 23, wherein the state comprises an offline state. 36. The system of claim 23, wherein the state comprises an unknown state. 37. The system of claim 23, wherein the state comprises an installing state. 38. The system of claim 23, wherein the state comprises an open door state. 39. The system of claim 23, wherein the state comprises an open window state. 40. The system of claim 23, wherein the state comprises a motion sensor active state. 41. The system of claim 23, wherein the state comprises a quiet state. 42. The system of claim 41, wherein the quiet state comprises an inactive state. 43. The system of claim 41, wherein the quiet state comprises a closed state. 44. The system of claim 41, wherein the quiet state comprises an untriggered state. 45. The system of claim 41, wherein the quiet state comprises an untripped state. 46. The system of claim 23, wherein the plurality of system icons includes a plurality of device icons, wherein each device icon represents a location and a state of a premise device corresponding to the device icon. 47. The system of claim 46, wherein the premise device is a light. 48. The system of claim 46, wherein the premise device is a thermostat. 49. The system of claim 46, wherein the premise device is a camera. 50. The system of claim 46, wherein the premise device is a lock. 51. The system of claim 46, wherein the premise device is an energy device. 52. The system of claim 46, wherein the state comprises an installing state. 53. The system of claim 46, wherein the state comprises an active state. 54. 
The system of claim 46, wherein the state comprises a quiet state. 55. The system of claim 46, comprising a popup display that is displayed in response to a touch of a system icon of the plurality of system icons. 56. The system of claim 55, wherein the popup display includes a name of the security system component corresponding to the sensor icon that was touched. 57. The system of claim 56, wherein the popup display includes detailed information of the security system component. 58. The system of claim 57, wherein the detailed information comprises text describing a status of the security system component. 59. The system of claim 57, wherein the detailed information comprises data of a last event of the security system component. 60. The system of claim 55, wherein the popup display includes a name of the premise device corresponding to the device icon that was touched. 61. The system of claim 60, wherein the popup display includes a link to information of the premise device. 62. The system of claim 61, wherein the link activates presentation of live video of the premise device when the premise device is a camera. 63. The system of claim 61, wherein the link activates presentation of a control screen comprising controls for the premise device. 64. The system of claim 2, comprising an edit mode, wherein the SUI presents the edit mode for use in generating the floor plan display and placing the plurality of system icons on the floor plan display. 65. The system of claim 64, wherein the edit mode comprises a plurality of floor plans, wherein each floor plan of the plurality of floor plans defines a perimeter shape of a floor and corresponds to a floor plan icon that is selectable by a user for the floor plan display. 66. The system of claim 64, wherein the edit mode presents a grid comprising a plurality of tiles on the floor plan display. 67. The system of claim 66, wherein the edit mode comprises at least one of adding walls and deleting walls. 68. 
| 2,400 |
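The edit-mode claims above (claims 66 through 75) describe a floor-plan grid whose tiles are automatically classified as interior or exterior relative to a perimeter wall. As an illustrative sketch only — the claims do not specify an algorithm, and the grid representation, function name, and flood-fill approach here are all assumptions — one plausible implementation treats every open tile reachable from the grid border without crossing a wall as exterior:

```python
from collections import deque

def classify_tiles(walls, rows, cols):
    """Return the set of 'exterior' tiles of a floor-plan grid.

    walls is a set of (row, col) tiles occupied by wall segments.
    Tiles reachable from the grid border without crossing a wall are
    exterior; all remaining open tiles are interior (claim 74's "first
    side" of the perimeter wall).
    """
    exterior = set()
    queue = deque()
    # Seed the flood fill with every open tile on the grid border.
    for r in range(rows):
        for c in range(cols):
            if (r in (0, rows - 1) or c in (0, cols - 1)) and (r, c) not in walls:
                exterior.add((r, c))
                queue.append((r, c))
    # Breadth-first spread to 4-connected open neighbors.
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in walls and (nr, nc) not in exterior:
                exterior.add((nr, nc))
                queue.append((nr, nc))
    return exterior

# A 5x5 grid with a closed 3x3 ring of wall tiles: the single open tile
# at (2, 2) is interior; the 16 border tiles are exterior.
walls = {(r, c) for r in range(1, 4) for c in range(1, 4)} - {(2, 2)}
ext = classify_tiles(walls, 5, 5)
```

Claim 75's fill option could then render tiles in the complement of `ext` as filled and tiles in `ext` as transparent.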
7,524 | 7,524 | 13,948,035 | 2,486 | A correlative drift correction system can include a sample stage for supporting a sample and a cover slip. The system can include an infrared light source for emitting infrared light to be reflected at the cover slip and an optical sensor for detecting the reflected infrared light. The system can detect drift of the sample using reflected infrared light data from the optical sensor and can determine a drift correction to apply to image data of the sample. | 1. A correlative drift correction system, comprising:
a sample stage configured to support a sample and a cover slip; an infrared light source configured to emit infrared light to be reflected at the cover slip; an optical sensor for detecting the reflected infrared light; a drift detection module configured to detect drift of the sample using reflected infrared light data from the optical sensor; and a drift correction module configured to determine a drift correction to apply to image data associated with the sample. 2. The system of claim 1, further comprising:
an optical observation system for use in observing the sample on the sample stage; a visible light source configured to illuminate the sample with visible light for observation; and a second optical sensor for detecting the light from the sample as the image data. 3. The system of claim 1, further comprising:
an optical observation system for use in observing the sample on the sample stage; and a visible light source configured to illuminate the sample with visible light for observation; wherein the optical sensor is further positioned and configured to detect the light from the sample as the image data. 4. The system of claim 1, further comprising a second optical sensor configured to capture the image data, the drift correction module being further configured to apply the drift correction to the image data. 5. The system of claim 1, wherein the sample stage is moveable in three dimensions. 6. The system of claim 5, whereby the sample stage is moveable in a z direction during imaging to acquire a data stack. 7. The system of claim 1, further comprising a focusing module configured to adjust an optical focus on the sample based on the detected drift. 8. The system of claim 7, further comprising an optical observation system for use in observing the sample on the sample stage, and wherein the optical focus is adjusted by physically moving the sample stage. 9. The system of claim 1, further comprising an objective and a medium between the objective and the cover slip, and wherein the infrared light is reflected at an interface between the cover slip and the medium. 10. A correlative drift correction system, comprising:
a sample stage configured to support a sample and a cover slip; an optical observation system for use in observing the sample on the sample stage; a visible light source configured to illuminate the sample with visible light for observation; an infrared light source configured to emit infrared light to be reflected at the cover slip; a first optical sensor for detecting reflected infrared light data; a second optical sensor for capturing image data of the sample by detecting the visible light; a drift detection module configured to detect drift of the sample using the reflected infrared light data; and a drift correction module configured to determine a drift correction to apply to the image data. 11. The system of claim 10, wherein the infrared light source and the visible light source are positioned to originate two original different beam paths, the system further comprising a beam manipulation device for combining the infrared light and the visible light into a single beam path and for subsequently splitting the infrared light and the visible light into multiple different beam paths. 12. The system of claim 11, wherein the multiple different beam paths are respectively directed parallel to the two original different beam paths. 13. The system of claim 11, wherein the multiple different beam paths respectively include a visible light filter and an infrared filter to filter visible light from an infrared light beam path and to filter infrared light from a visible light beam path. 14. A method for correlative drift correction, comprising:
directing infrared light from an infrared light source toward a sample stage supporting a sample and a cover slip; detecting the infrared light reflected at the cover slip using an optical sensor; directing visible light from a visible light source toward the sample stage; capturing visible light image data of the sample; detecting drift of the sample using reflected infrared light data from the optical sensor; and applying a drift correction to the visible light image data based on the drift. 15. The method of claim 14, further comprising correcting an optical focus on the sample based on the drift. 16. The method of claim 14, wherein detecting the drift comprises detecting the drift in three dimensions. 17. The method of claim 14, wherein the steps of detecting the drift and applying the drift correction are post-processing steps completed after completion of capturing the visible light image data of the sample. 18. The method of claim 14, wherein capturing the visible light image data of the sample comprises capturing the visible light image data of the sample using the optical sensor. 19. The method of claim 14, wherein capturing the visible light image data of the sample comprises capturing the visible light image data of the sample using a second optical sensor. 20. The method of claim 14, further comprising moving the sample stage in a z direction while capturing the visible light image data of the sample to acquire a data stack.
a sample stage configured to support a sample and a cover slip; an optical observation system for use in observing the sample on the sample stage; a visible light source configured to illuminate the sample with visible light for observation; an infrared light source configured to emit infrared light to be reflected at the cover slip; a first optical sensor for detecting reflected infrared light data; a second optical sensor for capturing image data of the sample by detecting the visible light; a drift detection module configured to detect drift of the sample using the reflected infrared light data; and a drift correction module configured to determine a drift correction to apply to the image data. 11. The system of claim 10, wherein the infrared light source and the visible light source are positioned to originate two original different beam paths, the system further comprising a beam manipulation device for combining the infrared light and the visible light into a single beam path and for subsequently splitting the infrared light and the visible light into multiple different beam paths. 12. The system of claim 11, wherein the multiple different beam paths are respectively directed parallel to the two original different beam paths. 13. The system of claim 11, wherein the multiple different beam paths respectively include a visible light filter and an infrared filter to filter visible light from an infrared light beam path and to filter infrared light from a visible light beam path. 14. A method for correlative drift correction, comprising:
directing infrared light from an infrared light source toward a sample stage supporting a sample and a cover slip; detecting the infrared light reflected at the cover slip using an optical sensor; directing visible light from a visible light source toward the sample stage; capturing visible light image data of the sample; detecting drift of the sample using reflected infrared light data from the optical sensor; and applying a drift correction to the visible light image data based on the drift. 15. The method of claim 14, further comprising correcting an optical focus on the sample based on the drift. 16. The method of claim 14, wherein detecting the drift comprises detecting the drift in three dimensions. 17. The method of claim 14, wherein the steps of detecting the drift and applying the drift correction are post-processing steps completed after completion of capturing the visible light image data of the sample. 18. The method of claim 14, wherein capturing the visible light image data of the sample comprises capturing the visible light image data of the sample using the optical sensor. 19. The method of claim 14, wherein capturing the visible light image data of the sample comprises capturing the visible light image data of the sample using a second optical sensor. 20. The method of claim 14, further comprising moving the sample stage in a z direction while capturing the visible light image data of the sample to acquire a data stack. | 2,400 |
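The drift-correction method of claim 14 can be illustrated in code. The sketch below is not the patented implementation: it assumes, purely for illustration, that drift is estimated by tracking the brightest reflected-IR spot on the optical sensor (the claims do not fix an estimation method), and it corrects only integer-pixel drift by counter-shifting the visible-light image. All function names are hypothetical.

```python
import numpy as np

def estimate_drift(ir_ref, ir_now):
    """Estimate (dy, dx) drift as the displacement of the brightest
    reflected-IR spot between a reference frame and the current frame.
    Hypothetical peak-tracking approach, not mandated by the claims."""
    y0, x0 = np.unravel_index(np.argmax(ir_ref), ir_ref.shape)
    y1, x1 = np.unravel_index(np.argmax(ir_now), ir_now.shape)
    return y1 - y0, x1 - x0

def correct_drift(image, drift):
    """Apply a drift correction to the visible-light image data by
    counter-shifting it (integer pixels only in this sketch)."""
    dy, dx = drift
    return np.roll(image, (-dy, -dx), axis=(0, 1))
```

Per claim 17, the same two functions could equally be run as post-processing steps over a recorded stack after acquisition completes.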
7,525 | 7,525 | 15,175,962 | 2,425 | Embodiments of the present invention relate to methods and systems for ordering, communicating and applying pixel intra-prediction modes. | 1. An image decoding system for decoding a digital image comprising:
a) means for decoding each of the blocks into which an image is divided; b) intra-prediction means for predicting values of pixels located along a specified direction in a target block to be decoded; and c) prediction mode estimating means for estimating a prediction mode for the target block; wherein d) the intra-prediction means uses at least a DC prediction mode using a prediction value being an average of the pixel values of a first block located adjacent to and above the target block and a second block located adjacent to and to the left side of the target block, a Diagonal Down/Left prediction mode using the specified direction being diagonally downward to the left at approximately a 45 degree angle, a Diagonal Down/Right prediction mode using the specified direction being diagonally downward to the right at approximately a 45 degree angle as the prediction mode, e) the prediction modes are numbered with increasingly larger numbers, in order of the DC prediction mode, the Diagonal Down/Left prediction mode and the Diagonal Down/Right prediction mode, and f) the prediction mode estimating means determines a prediction mode to have the lower mode number among the prediction mode of a first block located adjacent to and above the target block and the prediction mode of a second block located adjacent to the left side of the target block as the prediction mode for the target block.
a) means for decoding each of the blocks into which an image is divided; b) intra-prediction means for predicting values of pixels located along a specified direction in a target block to be decoded; and c) prediction mode estimating means for estimating a prediction mode for the target block; wherein d) the intra-prediction means uses at least a DC prediction mode using a prediction value being an average of the pixel values of a first block located adjacent to and above the target block and a second block located adjacent to and to the left side of the target block, a Diagonal Down/Left prediction mode using the specified direction being diagonally downward to the left at approximately a 45 degree angle, a Diagonal Down/Right prediction mode using the specified direction being diagonally downward to the right at approximately a 45 degree angle as the prediction mode, e) the prediction modes are numbered with increasingly larger numbers, in order of the DC prediction mode, the Diagonal Down/Left prediction mode and the Diagonal Down/Right prediction mode, and f) the prediction mode estimating means determines a prediction mode to have the lower mode number among the prediction mode of a first block located adjacent to and above the target block and the prediction mode of a second block located adjacent to the left side of the target block as the prediction mode for the target block.
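Elements (d)-(f) of the decoding claim above reduce to a small rule: modes are numbered in increasing order (DC lowest, then Diagonal Down/Left, then Diagonal Down/Right), and the estimated mode for the target block is the lower mode number of the above and left neighbours. The sketch below uses hypothetical mode numbers 0/1/2; the claim fixes only their relative order, not their values.

```python
# Hypothetical mode numbers; claim element (e) fixes only the ordering.
DC = 0
DIAG_DOWN_LEFT = 1
DIAG_DOWN_RIGHT = 2

def estimate_prediction_mode(mode_above, mode_left):
    """Element (f): estimate the target block's mode as the lower mode
    number among the above-neighbour and left-neighbour modes."""
    return min(mode_above, mode_left)

def dc_predict(above_pixels, left_pixels):
    """Element (d), DC mode: predict each pixel as the average of the
    pixel values in the blocks above and to the left of the target."""
    vals = list(above_pixels) + list(left_pixels)
    return sum(vals) / len(vals)
```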
7,526 | 7,526 | 14,137,921 | 2,457 | Embodiments of the present invention provide an improvement over known approaches for monitoring of and taking action on observations associated with distributed applications. Application event reporting and application resource monitoring is unified in a manner that significantly reduces storage and aggregation overhead. For example, embodiments of the present invention can employ hardware and/or software support that reduces storage and aggregation overhead. In addition to providing for fine-grained, continuous, decentralized monitoring of application activity and resource consumption, embodiments of the present invention can also provide for decentralized filtering, statistical analysis, and derived data streaming. Furthermore, embodiments of the present invention are securely implemented (e.g., for use solely under the control of an operator) and can use a separate security domain for network traffic. | 1. A method of monitoring application-driven activity in an application central processing unit of a data processing node, comprising:
receiving at least one resource monitor command at an application monitoring services module of a data processing node, wherein a management processor unit of the data processing node comprises the application monitoring services module and is coupled to an application central processing unit of the data processing node; in response to receiving the at least one monitor command, the application monitoring services module configuring an assessment protocol thereof dependent upon a resource assessment specification provided in the at least one monitor command; in accordance with the assessment protocol, the application monitoring services module assessing activity of the application central processing unit that arises from execution of an application running thereon; and the application monitoring services module outputting information derived from said activity to a recipient. 2. The method of claim 1 wherein:
the at least one monitor command includes a threshold value for a particular system resource utilized by the application central processing unit; and
configuring the assessment protocol includes configuring an assessment parameter using the threshold value. 3. The method of claim 1 wherein:
outputting the information includes applying a time stamp to each one of a plurality of events that arise from execution of the application running;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 4. The method of claim 1 wherein assessing activity of the application central processing unit is performed out-of-band of processes of the application. 5. The method of claim 4 wherein:
outputting the information includes applying a time stamp to each one of a plurality of events that arise from execution of the application running;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 6. The method of claim 1 wherein:
the at least one monitor command includes event filter information;
configuring the assessment protocol includes configuring a filter function using the filter information;
assessing activity of the application central processing unit includes using the filter function to manipulate execution of the application running thereon for causing events that arise from execution of the application running thereon to be generated; and
outputting the information derived from said activity includes transmitting the events for reception by a target. 7. The method of claim 6 wherein assessing activity of the application central processing unit is performed out-of-band of processes of the application. 8. The method of claim 7 wherein:
outputting the information includes applying a time stamp to each one of the events;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 9. The method of claim 8 wherein:
the at least one monitor command includes a threshold value for a particular system resource utilized by the application central processing unit; and
configuring the assessment protocol includes configuring an assessment parameter using the threshold value. 10. A data processing node, comprising:
a plurality of application central processing units each having a respective application running thereon; and a management processor unit coupled to each one of the application central processing units, wherein the management processor unit comprises an application monitoring services module including a resource assessor and an event reporter, wherein the management processor unit comprises dedicated system resources with respect to the application central processing units such that processes implemented by the application monitoring services module are out-of-band of application processes carried out on each one of the application central processing units, wherein the application monitoring services module is configured to selectively implement one or more processes for assessing activity of a particular one of the application central processing units that arises from execution of the respective application running thereon and is configured to selectively implement one or more processes for outputting events generated by a particular one of the application central processing units that arise from execution of the respective application running thereon. 11. The data processing node of claim 10 wherein:
outputting the events includes applying a time stamp to each one of the events;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 12. The data processing node of claim 10 wherein outputting the events includes transmitting the events for reception by a remote security domain thereby enabling the events to be monitored by an entity not having access permission to interact with the respective application by which the events were generated. 13. The data processing node of claim 10 wherein outputting the events includes outputting the events using a messaging functionality of an application level context. 14. The data processing node of claim 10 wherein:
the application monitoring services module performs processes for assessing activity of the particular one of the application central processing units that arises from execution of the respective application running thereon; and
assessing activity of the particular one of the application central processing units includes using a filter function to manipulate execution of the respective application running thereon to influence a manner in which the events are generated. 15. The data processing node of claim 14 wherein:
outputting the events includes applying a time stamp to each one of the events;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 16. A data processing system, comprising:
a plurality of data processing nodes coupled to each other through an interconnect fabric, wherein each one of the data processing nodes comprises an application central processing unit and a management processor unit coupled to the application central processing unit, wherein the application central processing unit of each one of the data processing nodes has an instance of a particular application running thereon, wherein the management processor unit of each one of the data processing nodes comprises an application monitoring services module, and wherein the application monitoring services module of each one of the data processing nodes outputs a respective stream of time-stamped events that arise from execution of the instance of the particular application running on the application central processing unit thereof; and a target node that receives the respective stream of time-stamped events from each one of the data processing nodes and that generates a composite stream of events from the time-stamped events of at least a portion of the respective streams thereof, wherein the composite stream of events is time-sequenced dependent upon global time-stamp information of each one of the time-stamped events. 17. The data processing system of claim 16 wherein the management processor unit of each one of the data processing nodes comprises dedicated system resources with respect to the application central processing units such that processes implemented by the application monitoring services module are out-of-band of application processes carried out on the application central processing unit coupled thereto. 18. The data processing system of claim 16 wherein the target node is one of the data processing nodes. 19. The data processing node of claim 16 wherein:
the application monitoring services module of each one of the data processing nodes performs processes for assessing activity of the application central processing unit thereof that arises from execution of the instance of the particular application running thereon; and
assessing activity of the application central processing unit thereof includes using a filter function to manipulate execution of the application running thereon to influence a manner in which the events are generated. 20. The data processing system of claim 19 wherein the management processor unit of each one of the data processing nodes comprises dedicated system resources with respect to the application central processing units such that processes implemented by the application monitoring services module are out-of-band of application processes carried out on the application central processing unit coupled thereto. | Embodiments of the present invention provide an improvement over known approaches for monitoring of and taking action on observations associated with distributed applications. Application event reporting and application resource monitoring is unified in a manner that significantly reduces storage and aggregation overhead. For example, embodiments of the present invention can employ hardware and/or software support that reduces storage and aggregation overhead. In addition to providing for fine-grained, continuous, decentralized monitoring of application activity and resource consumption, embodiments of the present invention can also provide for decentralized filtering, statistical analysis, and derived data streaming. Furthermore, embodiments of the present invention are securely implemented (e.g., for use solely under the control of an operator) and can use a separate security domain for network traffic.1. A method of monitoring application-driven activity in an application central processing unit of a data processing node, comprising:
receiving at least one resource monitor command at an application monitoring services module of a data processing node, wherein a management processor unit of the data processing node comprises the application monitoring services module and is coupled to an application central processing unit of the data processing node; in response to receiving the at least one monitor command, the application monitoring services module configuring an assessment protocol thereof dependent upon a resource assessment specification provided in the at least one monitor command; in accordance with the assessment protocol, the application monitoring services module assessing activity of the application central processing unit that arises from execution of an application running thereon; and the application monitoring services module outputting information derived from said activity to a recipient. 2. The method of claim 1 wherein:
the at least one monitor command includes a threshold value for a particular system resource utilized by the application central processing unit; and
configuring the assessment protocol includes configuring an assessment parameter using the threshold value. 3. The method of claim 1 wherein:
outputting the information includes applying a time stamp to each one of a plurality of events that arise from execution of the application running;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 4. The method of claim 1 wherein assessing activity of the application central processing unit is performed out-of-band of processes of the application. 5. The method of claim 4 wherein:
outputting the information includes applying a time stamp to each one of a plurality of events that arise from execution of the application running;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 6. The method of claim 1 wherein:
the at least one monitor command includes event filter information;
configuring the assessment protocol includes configuring a filter function using the filter information;
assessing activity of the application central processing unit includes using the filter function to manipulate execution of the application running thereon for causing events that arise from execution of the application running thereon to be generated; and
outputting the information derived from said activity includes transmitting the events for reception by a target. 7. The method of claim 6 wherein assessing activity of the application central processing unit is performed out-of-band of processes of the application. 8. The method of claim 7 wherein:
outputting the information includes applying a time stamp to each one of the events;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 9. The method of claim 8 wherein:
the at least one monitor command includes a threshold value for a particular system resource utilized by the application central processing unit; and
configuring the assessment protocol includes configuring an assessment parameter using the threshold value. 10. A data processing node, comprising:
a plurality of application central processing units each having a respective application running thereon; and a management processor unit coupled to each one of the application central processing units, wherein the management processor unit comprises an application monitoring services module including a resource assessor and an event reporter, wherein the management processor unit comprises dedicated system resources with respect to the application central processing units such that processes implemented by the application monitoring services module are out-of-band of application processes carried out on each one of the application central processing units, wherein the application monitoring services module is configured to selectively implement one or more processes for assessing activity of a particular one of the application central processing units that arises from execution of the respective application running thereon and is configured to selectively implement one or more processes for outputting events generated by a particular one of the application central processing units that arise from execution of the respective application running thereon. 11. The data processing node of claim 10 wherein:
outputting the events includes applying a time stamp to each one of the events;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 12. The data processing node of claim 10 wherein outputting the events includes transmitting the events for reception by a remote security domain thereby enabling the events to be monitored by an entity not having access permission to interact with the respective application by which the events were generated. 13. The data processing node of claim 10 wherein outputting the events includes outputting the events using a messaging functionality of an application level context. 14. The data processing node of claim 10 wherein:
the application monitoring services module performs processes for assessing activity of the particular one of the application central processing units that arises from execution of the respective application running thereon; and
assessing activity of the particular one of the application central processing units includes using a filter function to manipulate execution of the respective application running thereon to influence a manner in which the events are generated. 15. The data processing node of claim 14 wherein:
outputting the events includes applying a time stamp to each one of the events;
the data processing node is one node within a cluster of interconnected nodes; and
the time stamp applied to each one of the events is based upon a global time to which a local time of each one of the nodes is synchronized. 16. A data processing system, comprising:
a plurality of data processing nodes coupled to each other through an interconnect fabric, wherein each one of the data processing nodes comprises an application central processing unit and a management processor unit coupled to the application central processing unit, wherein the application central processing unit of each one of the data processing nodes has an instance of a particular application running thereon, wherein the management processor unit of each one of the data processing nodes comprises an application monitoring services module, and wherein the application monitoring services module of each one of the data processing nodes outputs a respective stream of time-stamped events that arise from execution of the instance of the particular application running on the application central processing unit thereof; and a target node that receives the respective stream of time-stamped events from each one of the data processing nodes and that generates a composite stream of events from the time-stamped events of at least a portion of the respective streams thereof, wherein the composite stream of events is time-sequenced dependent upon global time-stamp information of each one of the time-stamped events. 17. The data processing system of claim 16 wherein the management processor unit of each one of the data processing nodes comprises dedicated system resources with respect to the application central processing units such that processes implemented by the application monitoring services module are out-of-band of application processes carried out on the application central processing unit coupled thereto. 18. The data processing system of claim 16 wherein the target node is one of the data processing nodes. 19. The data processing node of claim 16 wherein:
the application monitoring services module of each one of the data processing nodes performs processes for assessing activity of the application central processing unit thereof that arises from execution of the instance of the particular application running thereon; and
assessing activity of the application central processing unit thereof includes using a filter function to manipulate execution of the application running thereon to influence a manner in which the events are generated. 20. The data processing system of claim 19 wherein the management processor unit of each one of the data processing nodes comprises dedicated system resources with respect to the application central processing units such that processes implemented by the application monitoring services module are out-of-band of application processes carried out on the application central processing unit coupled thereto. | 2,400 |
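The composite-stream behaviour of claim 16, where a target node merges per-node streams of globally time-stamped events into one stream time-sequenced on the global timestamp, can be sketched as follows. This is an illustrative approach only; it assumes each per-node stream arrives already ordered by its synchronized global timestamp, and the function name and tuple layout are hypothetical.

```python
import heapq

def composite_stream(*node_streams):
    """Merge per-node event streams, each a timestamp-ordered list of
    (global_timestamp, node_id, event) tuples, into a single composite
    stream time-sequenced on the global timestamp (cf. claim 16)."""
    return list(heapq.merge(*node_streams, key=lambda e: e[0]))
```

Because every node's local clock is synchronized to a global time (claims 3, 5, 8, 11, 15), the merged ordering reflects the actual cross-node sequence of events rather than per-node arrival order.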
7,527 | 7,527 | 14,759,817 | 2,465 | For ensuring reliability of data transmission in a wireless mesh network, while reducing the data overhead of the transmissions, a node for a wireless mesh network and a method of controlling the same are provided, wherein the node is configured to decide about retransmission of a message received from a transmitting node, based on whether the transmitting node has at least one layout element of a layout plan in common with the node, at least one of the layout elements of the layout plan being associated with the node. | 1. A node of a wireless network, comprising
a control unit configured to decide about retransmission of a message received from a transmitting node, based on whether the transmitting node has at least one layout element of a layout plan in common with the node, wherein the layout plan relates to a spatial arrangement of the wireless network, and wherein the at least one layout element of the layout plan is associated with the node. 2. The node according to claim 1, wherein the control unit is configured to retransmit the received message, only if the node and the transmitting node have at least one layout element in common. 3. The node according to claim 1, wherein the decision about retransmission is based on at least one of
an indicator included in the received message, layout parameters of the node and stored neighborhood information. 4. The node according to claim 3, wherein the indicator includes at least one of
an identifier of the transmitting node, at least one identifier of a layout element associated with the transmitting node, and additional layout information about the transmitting node. 5. The node according to claim 1, wherein the control unit is configured to decide about a retransmission mode of the received message, based on a number of layout elements associated with the node. 6. The node according to claim 5, wherein the retransmission mode includes at least one of:
a probabilistic approach and a counter-based approach, based on a number of retransmissions of the message by neighbor nodes received within a predetermined time. 7. The node according to claim 6, wherein at least one of a probability for retransmission and the predetermined time is set based on at least one of a distance to a layout element, a distance to an originator node, which has first sent the message, and a number of layout elements associated with the node. 8. The node according to claim 1, wherein when at least one of a retransmission and transmission of the message is received from a neighbor node, the control unit is configured to decide about retransmitting the message, based on a coverage of at least one of the retransmission and transmission of the neighbor node. 9. The node according to claim 8, wherein the coverage is determined based on at least one of:
an indicator included in the retransmitted message and stored neighborhood information. 10. The node according to claim 8, wherein if the node is associated with more than one layout element, only retransmissions of neighbor nodes that are also associated with more than one layout element are considered. 11. The node according to claim 1, wherein the control unit is configured to determine at least one of a geographical and covered distance travelled by the received message, using the layout plan, and to decide to retransmit the message if the determined distance is within a predetermined distance limit. 12. The node according to claim 1, wherein the node further comprises a spatial unit for determining at least one of an absolute position of the node and a relative position of the node. 13. The node according to claim 1, wherein the layout plan includes at least one of: a city map, a plant layout, and a floor plan, and wherein the layout element includes at least one of a floor, a corridor, a room, a street, a crossing and a park area. 14. The node according to claim 1, wherein the node is included in a luminaire of at least one of an outdoor and indoor lighting system. 15. A method of controlling a node of a wireless network, the method comprising the steps of:
receiving a message from a transmitting node by a receiving node; deciding about retransmission of the received message, based on whether the transmitting node and the receiving node have at least one layout element of a layout plan in common, wherein the layout plan relates to a spatial arrangement of the wireless network, and wherein at least one of the layout elements of the layout plan is associated with the node. | For ensuring reliability of data transmission in a wireless mesh network, while reducing the data overhead of the transmissions, a node for a wireless mesh network and a method of controlling the same are provided, wherein the node is configured to decide about retransmission of a message received from a transmitting node, based on whether the transmitting node has at least one layout element of a layout plan in common with the node, at least one of the layout elements of the layout plan being associated with the node.1. A node of a wireless network, comprising
a control unit configured to decide about retransmission of a message received from a transmitting node, based on whether the transmitting node has at least one layout element of a layout plan in common with the node, wherein the layout plan relates to a spatial arrangement of the wireless network, and wherein the at least one layout element of the layout plan is associated with the node. 2. The node according to claim 1, wherein the control unit is configured to retransmit the received message, only if the node and the transmitting node have at least one layout element in common. 3. The node according to claim 1, wherein the decision about retransmission is based on at least one of
an indicator included in the received message, layout parameters of the node and stored neighborhood information. 4. The node according to claim 3, wherein the indicator includes at least one of
an identifier of the transmitting node, at least one identifier of a layout element associated with the transmitting node, and additional layout information about the transmitting node. 5. The node according to claim 1, wherein the control unit is configured to decide about a retransmission mode of the received message, based on a number of layout elements associated with the node. 6. The node according to claim 5, wherein the retransmission mode includes at least one of:
a probabilistic approach and a counter-based approach, based on a number of retransmissions of the message by neighbor nodes received within a predetermined time. 7. The node according to claim 6, wherein at least one of a probability for retransmission and the predetermined time is set based on at least one of a distance to a layout element, a distance to an originator node, which has first sent the message, and a number of layout elements associated with the node. 8. The node according to claim 1, wherein when at least one of a retransmission and transmission of the message is received from a neighbor node, the control unit is configured to decide about retransmitting the message, based on a coverage of at least one of the retransmission and transmission of the neighbor node. 9. The node according to claim 8, wherein the coverage is determined based on at least one of:
an indicator included in the retransmitted message and stored neighborhood information. 10. The node according to claim 8, wherein if the node is associated with more than one layout element, only retransmissions of neighbor nodes that are also associated with more than one layout element are considered. 11. The node according to claim 1, wherein the control unit is configured to determine at least one of a geographical and covered distance travelled by the received message, using the layout plan, and to decide to retransmit the message if the determined distance is within a predetermined distance limit. 12. The node according to claim 1, wherein the node further comprises a spatial unit for determining at least one of an absolute position of the node and a relative position of the node. 13. The node according to claim 1, wherein the layout plan includes at least one of: a city map, a plant layout, and a floor plan, and wherein the layout element includes at least one of a floor, a corridor, a room, a street, a crossing and a park area. 14. The node according to claim 1, wherein the node is included in a luminaire of at least one of an outdoor and indoor lighting system. 15. A method of controlling a node of a wireless network, the method comprising the steps of:
receiving a message from a transmitting node by a receiving node; deciding about retransmission of the received message, based on whether the transmitting node and the receiving node have at least one layout element of a layout plan in common, wherein the layout plan relates to a spatial arrangement of the wireless network, and wherein at least one of the layout elements of the layout plan is associated with the node. | 2,400 |
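The retransmission rule in claims 1-2 above reduces to a set-intersection test: a node forwards a received message only when it shares at least one layout element with the transmitter. The sketch below is an illustrative model only; the class, field, and message-key names are invented for this example and do not come from the patent.

```python
# Sketch of the layout-based retransmission decision (claims 1-2 above):
# retransmit only if the receiving node and the transmitting node have at
# least one layout element (e.g., a room, corridor, or street) in common.
# All names here are illustrative assumptions, not from the patent text.

class Node:
    def __init__(self, node_id, layout_elements):
        self.node_id = node_id
        self.layout_elements = set(layout_elements)

    def should_retransmit(self, message):
        # Per claim 4, the indicator in the received message may carry the
        # identifiers of layout elements associated with the transmitter.
        common = self.layout_elements & set(message["tx_layout_elements"])
        return len(common) > 0

corridor_node = Node("n1", ["corridor-A", "room-101"])
msg = {"payload": b"...", "tx_layout_elements": ["corridor-A"]}
print(corridor_node.should_retransmit(msg))  # True: shares "corridor-A"
```

A disjoint node (say, one associated only with `room-202`) would decline to retransmit, which is how the scheme limits flooding overhead.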
7,528 | 7,528 | 14,964,260 | 2,469 | The present disclosure discloses mechanisms for extending a local area network of a customer premises of a customer outside of the customer premises and into a private data network. The extension of a local area network of a customer premises outside of the customer premises and into a private data network may be provided using a customer bridge associated with the customer local area network, a customer bridging domain hosted on a network gateway device for the customer, and a switching element hosted in the private data network for the customer. The network gateway device may be configured to receive, at the customer bridging domain of the customer via a first tunnel associated with the customer bridging domain, a packet including a destination address and determine, based on the destination address, whether to forward the packet via a second tunnel associated with the customer bridging domain or whether to forward the packet toward a public data network. | 1. An apparatus, comprising:
a processor and a memory communicatively connected to the processor, the processor configured to:
receive, at a customer bridging domain of a customer via a first tunnel between the customer bridging domain and a customer bridge of a customer premises of the customer, a packet comprising a destination address; and
determine, based on the destination address of the packet, whether to forward the packet toward a private data network via a second tunnel between the customer bridging domain and a switching element hosted for the customer within the private data network or whether to forward the packet toward a public data network. 2. The apparatus of claim 1, wherein the customer bridging domain of the customer comprises a Layer 2 virtual bridge. 3. The apparatus of claim 1, wherein the destination address of the packet comprises a Layer 2 address. 4. The apparatus of claim 3, wherein the Layer 2 address comprises a Media Access Control (MAC) address. 5. The apparatus of claim 1, wherein the processor is configured to:
forward the packet toward the private data network via the second tunnel based on a determination that the destination address of the packet is an address of a customer component hosted in the private data network for the customer. 6. The apparatus of claim 5, wherein the customer component comprises a physical server, a virtual server, or a virtual machine (VM). 7. The apparatus of claim 1, wherein the processor is configured to:
forward the packet toward the public data network based on a determination that the destination address of the packet is a default gateway address of the apparatus. 8. The apparatus of claim 1, wherein the first tunnel comprises a virtual local area network (VLAN) tunnel or an Internet Protocol (IP) tunnel. 9. The apparatus of claim 1, wherein the second tunnel comprises a virtual extensible local area network (VXLAN) tunnel or a Multi-Protocol Label Switching (MPLS) tunnel. 10. The apparatus of claim 1, wherein the processor is configured to:
exchange, with a controller hosted within the private data network, information configured to support bridging of customer traffic of the customer between a customer local area network (LAN) of the customer premises and a customer component hosted within the private data network for the customer. 11. An apparatus, comprising:
a processor and a memory communicatively connected to the processor, the processor configured to:
receive, at a customer bridging domain of a customer via a first tunnel between the customer bridging domain and a switching element hosted for the customer within the private data network, a packet comprising a destination address; and
determine, based on the destination address of the packet, whether to forward the packet toward a customer bridge of a customer premises of the customer via a second tunnel between the customer bridging domain and the customer bridge or whether to forward the packet toward a public data network. 12. The apparatus of claim 11, wherein the customer bridging domain of the customer comprises a Layer 2 virtual bridge. 13. The apparatus of claim 11, wherein the destination address of the packet comprises a Layer 2 address. 14. The apparatus of claim 13, wherein the Layer 2 address comprises a Media Access Control (MAC) address. 15. The apparatus of claim 11, wherein the processor is configured to:
forward the packet toward the customer bridge of the customer premises via the second tunnel based on a determination that the destination address of the packet is an address of a customer device of a customer local area network (LAN) associated with the customer bridge. 16. The apparatus of claim 15, wherein the customer device comprises a computer, a printer, a smartphone, a television, a server, a switch, or a router. 17. The apparatus of claim 11, wherein the processor is configured to:
forward the packet toward the public data network based on a determination that the destination address of the packet is a default gateway address of the apparatus. 18. The apparatus of claim 11, wherein the first tunnel comprises a virtual extensible local area network (VXLAN) tunnel or a Multi-Protocol Label Switching (MPLS) tunnel. 19. The apparatus of claim 11, wherein the second tunnel comprises a virtual local area network (VLAN) tunnel or an Internet Protocol (IP) tunnel. 20. The apparatus of claim 11, wherein the processor is configured to:
exchange, with a controller hosted within the private data network, information configured to support bridging of customer traffic of the customer between a customer local area network (LAN) of the customer and a customer component hosted within the private data network for the customer. 21. An apparatus, comprising:
a processor and a memory communicatively connected to the processor, the processor configured to:
receive, at a customer bridging domain of a customer via a first tunnel associated with the customer bridging domain, a packet comprising a destination address; and
determine, based on the destination address of the packet, whether to forward the packet via a second tunnel associated with the customer bridging domain or whether to forward the packet toward a public data network. | The present disclosure discloses mechanisms for extending a local area network of a customer premises of a customer outside of the customer premises and into a private data network. The extension of a local area network of a customer premises outside of the customer premises and into a private data network may be provided using a customer bridge associated with the customer local area network, a customer bridging domain hosted on a network gateway device for the customer, and a switching element hosted in the private data network for the customer. The network gateway device may be configured to receive, at the customer bridging domain of the customer via a first tunnel associated with the customer bridging domain, a packet including a destination address and determine, based on the destination address, whether to forward the packet via a second tunnel associated with the customer bridging domain or whether to forward the packet toward a public data network.1. An apparatus, comprising:
a processor and a memory communicatively connected to the processor, the processor configured to:
receive, at a customer bridging domain of a customer via a first tunnel between the customer bridging domain and a customer bridge of a customer premises of the customer, a packet comprising a destination address; and
determine, based on the destination address of the packet, whether to forward the packet toward a private data network via a second tunnel between the customer bridging domain and a switching element hosted for the customer within the private data network or whether to forward the packet toward a public data network. 2. The apparatus of claim 1, wherein the customer bridging domain of the customer comprises a Layer 2 virtual bridge. 3. The apparatus of claim 1, wherein the destination address of the packet comprises a Layer 2 address. 4. The apparatus of claim 3, wherein the Layer 2 address comprises a Media Access Control (MAC) address. 5. The apparatus of claim 1, wherein the processor is configured to:
forward the packet toward the private data network via the second tunnel based on a determination that the destination address of the packet is an address of a customer component hosted in the private data network for the customer. 6. The apparatus of claim 5, wherein the customer component comprises a physical server, a virtual server, or a virtual machine (VM). 7. The apparatus of claim 1, wherein the processor is configured to:
forward the packet toward the public data network based on a determination that the destination address of the packet is a default gateway address of the apparatus. 8. The apparatus of claim 1, wherein the first tunnel comprises a virtual local area network (VLAN) tunnel or an Internet Protocol (IP) tunnel. 9. The apparatus of claim 1, wherein the second tunnel comprises a virtual extensible local area network (VXLAN) tunnel or a Multi-Protocol Label Switching (MPLS) tunnel. 10. The apparatus of claim 1, wherein the processor is configured to:
exchange, with a controller hosted within the private data network, information configured to support bridging of customer traffic of the customer between a customer local area network (LAN) of the customer premises and a customer component hosted within the private data network for the customer. 11. An apparatus, comprising:
a processor and a memory communicatively connected to the processor, the processor configured to:
receive, at a customer bridging domain of a customer via a first tunnel between the customer bridging domain and a switching element hosted for the customer within the private data network, a packet comprising a destination address; and
determine, based on the destination address of the packet, whether to forward the packet toward a customer bridge of a customer premises of the customer via a second tunnel between the customer bridging domain and the customer bridge or whether to forward the packet toward a public data network. 12. The apparatus of claim 11, wherein the customer bridging domain of the customer comprises a Layer 2 virtual bridge. 13. The apparatus of claim 11, wherein the destination address of the packet comprises a Layer 2 address. 14. The apparatus of claim 13, wherein the Layer 2 address comprises a Media Access Control (MAC) address. 15. The apparatus of claim 11, wherein the processor is configured to:
forward the packet toward the customer bridge of the customer premises via the second tunnel based on a determination that the destination address of the packet is an address of a customer device of a customer local area network (LAN) associated with the customer bridge. 16. The apparatus of claim 15, wherein the customer device comprises a computer, a printer, a smartphone, a television, a server, a switch, or a router. 17. The apparatus of claim 11, wherein the processor is configured to:
forward the packet toward the public data network based on a determination that the destination address of the packet is a default gateway address of the apparatus. 18. The apparatus of claim 11, wherein the first tunnel comprises a virtual extensible local area network (VXLAN) tunnel or a Multi-Protocol Label Switching (MPLS) tunnel. 19. The apparatus of claim 11, wherein the second tunnel comprises a virtual local area network (VLAN) tunnel or an Internet Protocol (IP) tunnel. 20. The apparatus of claim 11, wherein the processor is configured to:
exchange, with a controller hosted within the private data network, information configured to support bridging of customer traffic of the customer between a customer local area network (LAN) of the customer and a customer component hosted within the private data network for the customer. 21. An apparatus, comprising:
a processor and a memory communicatively connected to the processor, the processor configured to:
receive, at a customer bridging domain of a customer via a first tunnel associated with the customer bridging domain, a packet comprising a destination address; and
determine, based on the destination address of the packet, whether to forward the packet via a second tunnel associated with the customer bridging domain or whether to forward the packet toward a public data network. | 2,400 |
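The forwarding decision performed at the customer bridging domain (claims 1, 11, and 21 above) amounts to a lookup on the Layer 2 destination address. The sketch below is a simplified model with invented MAC addresses and return labels; a real gateway would switch frames onto actual VLAN/VXLAN or MPLS tunnels rather than return strings.

```python
# Simplified model of the customer bridging domain's forwarding decision:
# a packet whose destination MAC is a customer component hosted in the
# private data network goes out the second (VXLAN/MPLS) tunnel; a packet
# addressed to the gateway's default-gateway MAC is sent toward the
# public data network. Addresses and labels are illustrative assumptions.

PRIVATE_COMPONENT_MACS = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}
DEFAULT_GATEWAY_MAC = "aa:bb:cc:ff:ff:ff"

def forward(dest_mac):
    if dest_mac in PRIVATE_COMPONENT_MACS:
        return "second-tunnel"   # toward the switching element in the private DC
    if dest_mac == DEFAULT_GATEWAY_MAC:
        return "public-network"  # routed toward the public data network
    return "drop"                # unknown destination; behavior not claimed

print(forward("aa:bb:cc:00:00:01"))  # second-tunnel
print(forward(DEFAULT_GATEWAY_MAC))  # public-network
```

The symmetric case (claims 11-19) is the same lookup in reverse: a destination MAC belonging to a device on the customer LAN selects the tunnel back toward the customer bridge.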
7,529 | 7,529 | 12,452,050 | 2,482 | There are provided methods and apparatus at an encoder and decoder for supporting single loop decoding of multi-view coded video. An apparatus includes an encoder for encoding multi-view video content to enable single loop decoding of the multi-view video content when the multi-view video content is encoded using inter-view prediction. Similarly, a method is also described for encoding multi-view video content to support single loop decoding of the multi-view video content when the multi-view video content is encoded using inter-view prediction. Corresponding decoder apparatus and method are also described. | 1. An apparatus, comprising:
a decoder for decoding multi-view video content using single loop decoding when the multi-view video content is encoded using inter-view prediction. 2. The apparatus of claim 1, wherein the multi-view video content includes a reference view and other views, the other views capable of being reconstructed without a complete reconstruction of the reference view. 3. The apparatus of claim 1, wherein the inter-view prediction involves inferring at least one of motion information, inter prediction modes, intra prediction modes, reference indices, residual data, depth information, an illumination compensation offset, a deblocking strength, and disparity data from a reference view of the multi-view video content. 4. The apparatus of claim 1, wherein the inter-view prediction involves inferring information for a given view of the multi-view content from characteristics relating to at least one of at least a portion of at least one picture from a reference view of the multi-view video content with respect to the given view, and decoding information relating to the at least a portion of the at least one picture. 5. The apparatus of claim 1, wherein said decoder determines whether the single loop decoding is enabled for the multi-view video content using a high level syntax element. 6. The apparatus of claim 5, wherein said decoder determines, using the high level syntax, one of whether the single loop decoding is separately enabled for anchor pictures and non-anchor pictures in the multi-view video content using the high level syntax element, whether the single loop decoding is enabled on a view basis, whether the single loop decoding is enabled on a sequence basis, whether the single loop decoding is enabled for only the non-anchor pictures in the multi-view video content. 7. A method, comprising:
decoding multi-view video content using single loop decoding when the multi-view video content is encoded using inter-view prediction. 8. The method of claim 7, wherein the multi-view video content includes a reference view and other views, the other views capable of being reconstructed without a complete reconstruction of the reference view. 9. The method of claim 7, wherein the inter-view prediction involves inferring at least one of motion information, inter prediction modes, intra prediction modes, reference indices, residual data, depth information, an illumination compensation offset, a deblocking strength, and disparity data from a reference view of the multi-view video content. 10. The method of claim 7, wherein the inter-view prediction involves inferring information for a given view of the multi-view content from characteristics relating to at least one of at least a portion of at least one picture from a reference view of the multi-view video content with respect to the given view, and decoding information relating to the at least a portion of the at least one picture. 11. The method of claim 7, wherein said decoding step comprises determining whether the single loop decoding is enabled for the multi-view video content using a high level syntax element. 12. The method of claim 11, wherein said determining step determines, using the high level syntax, one of whether the single loop decoding is separately enabled for anchor pictures and non-anchor pictures in the multi-view video content, whether the single loop decoding is enabled on a view basis, whether the single loop decoding is enabled on a sequence basis, and whether the single loop decoding is enabled for only the non-anchor pictures in the multi-view video content. 13. A video signal structure for video encoding, decoding, and transport; comprising:
multi-view video content encoded to support single loop decoding of the multi-view video content when the multi-view video content is encoded using inter-view prediction. 14. The video signal structure of claim 13, wherein the multi-view video content includes a reference view and other views, the other views capable of being reconstructed without a complete reconstruction of the reference view. 15. The video signal structure of claim 13, wherein the inter-view prediction involves inferring at least one of motion information, inter prediction modes, intra prediction modes, reference indices, residual data, depth information, an illumination compensation offset, a deblocking strength, and disparity data from a reference view of the multi-view video content. 16. The video signal structure of claim 13, wherein the inter-view prediction involves inferring information for a given view of the multi-view content from characteristics relating to at least one of at least a portion of at least one picture from a reference view of the multi-view video content with respect to the given view, and decoding information relating to the at least a portion of the at least one picture. 17. The video signal structure of claim 13, wherein a high level syntax element is used to indicate that the single loop decoding is enabled for the multi-view video content. 18. The video signal structure of claim 17, wherein the high level syntax element one of separately indicates whether the single loop decoding is enabled for anchor pictures and non-anchor pictures in the multi-view video content, indicates on a view basis whether the single loop decoding is enabled, indicates on a sequence basis whether the single loop decoding is enabled, and indicates that the single loop decoding is enabled for only non-anchor pictures in the multi-view video content. | There are provided methods and apparatus at an encoder and decoder for supporting single loop decoding of multi-view coded video. 
An apparatus includes an encoder for encoding multi-view video content to enable single loop decoding of the multi-view video content when the multi-view video content is encoded using inter-view prediction. Similarly, a method is also described for encoding multi-view video content to support single loop decoding of the multi-view video content when the multi-view video content is encoded using inter-view prediction. Corresponding decoder apparatus and method are also described.1. An apparatus, comprising:
a decoder for decoding multi-view video content using single loop decoding when the multi-view video content is encoded using inter-view prediction. 2. The apparatus of claim 1, wherein the multi-view video content includes a reference view and other views, the other views capable of being reconstructed without a complete reconstruction of the reference view. 3. The apparatus of claim 1, wherein the inter-view prediction involves inferring at least one of motion information, inter prediction modes, intra prediction modes, reference indices, residual data, depth information, an illumination compensation offset, a deblocking strength, and disparity data from a reference view of the multi-view video content. 4. The apparatus of claim 1, wherein the inter-view prediction involves inferring information for a given view of the multi-view content from characteristics relating to at least one of at least a portion of at least one picture from a reference view of the multi-view video content with respect to the given view, and decoding information relating to the at least a portion of the at least one picture. 5. The apparatus of claim 1, wherein said decoder determines whether the single loop decoding is enabled for the multi-view video content using a high level syntax element. 6. The apparatus of claim 5, wherein said decoder determines, using the high level syntax, one of whether the single loop decoding is separately enabled for anchor pictures and non-anchor pictures in the multi-view video content using the high level syntax element, whether the single loop decoding is enabled on a view basis, whether the single loop decoding is enabled on a sequence basis, whether the single loop decoding is enabled for only the non-anchor pictures in the multi-view video content. 7. A method, comprising:
decoding multi-view video content using single loop decoding when the multi-view video content is encoded using inter-view prediction. 8. The method of claim 7, wherein the multi-view video content includes a reference view and other views, the other views capable of being reconstructed without a complete reconstruction of the reference view. 9. The method of claim 7, wherein the inter-view prediction involves inferring at least one of motion information, inter prediction modes, intra prediction modes, reference indices, residual data, depth information, an illumination compensation offset, a deblocking strength, and disparity data from a reference view of the multi-view video content. 10. The method of claim 7, wherein the inter-view prediction involves inferring information for a given view of the multi-view content from characteristics relating to at least one of at least a portion of at least one picture from a reference view of the multi-view video content with respect to the given view, and decoding information relating to the at least a portion of the at least one picture. 11. The method of claim 7, wherein said decoding step comprises determining whether the single loop decoding is enabled for the multi-view video content using a high level syntax element. 12. The method of claim 11, wherein said determining step determines, using the high level syntax, one of whether the single loop decoding is separately enabled for anchor pictures and non-anchor pictures in the multi-view video content, whether the single loop decoding is enabled on a view basis, whether the single loop decoding is enabled on a sequence basis, and whether the single loop decoding is enabled for only the non-anchor pictures in the multi-view video content. 13. A video signal structure for video encoding, decoding, and transport; comprising:
multi-view video content encoded to support single loop decoding of the multi-view video content when the multi-view video content is encoded using inter-view prediction. 14. The video signal structure of claim 13, wherein the multi-view video content includes a reference view and other views, the other views capable of being reconstructed without a complete reconstruction of the reference view. 15. The video signal structure of claim 13, wherein the inter-view prediction involves inferring at least one of motion information, inter prediction modes, intra prediction modes, reference indices, residual data, depth information, an illumination compensation offset, a deblocking strength, and disparity data from a reference view of the multi-view video content. 16. The video signal structure of claim 13, wherein the inter-view prediction involves inferring information for a given view of the multi-view content from characteristics relating to at least one of at least a portion of at least one picture from a reference view of the multi-view video content with respect to the given view, and decoding information relating to the at least a portion of the at least one picture. 17. The video signal structure of claim 13, wherein a high level syntax element is used to indicate that the single loop decoding is enabled for the multi-view video content. 18. The video signal structure of claim 17, wherein the high level syntax element one of separately indicates whether the single loop decoding is enabled for anchor pictures and non-anchor pictures in the multi-view video content, indicates on a view basis whether the single loop decoding is enabled, indicates on a sequence basis whether the single loop decoding is enabled, and indicates that the single loop decoding is enabled for only non-anchor pictures in the multi-view video content. | 2,400 |
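Claims 5-6 and 11-12 above describe a decoder consulting a high-level syntax element that may enable single loop decoding per view, per sequence, or separately for anchor and non-anchor pictures. The fragment below models that check; the flag names and the dictionary layout are invented for illustration and are not the patent's (or any standard's) actual syntax.

```python
# Hypothetical model of the high-level syntax check in claims 5-6/11-12:
# the decoder reads per-view flags indicating whether single loop decoding
# (SLD) is enabled, optionally split between anchor and non-anchor
# pictures. Flag names are illustrative assumptions only.

def single_loop_enabled(hls, view_id, is_anchor):
    view = hls["views"][view_id]
    if "sld_anchor_flag" in view:  # separate anchor / non-anchor signaling
        return view["sld_anchor_flag"] if is_anchor else view["sld_non_anchor_flag"]
    return view.get("sld_flag", False)  # single per-view flag

hls = {"views": {0: {"sld_flag": True},
                 1: {"sld_anchor_flag": False, "sld_non_anchor_flag": True}}}
print(single_loop_enabled(hls, 1, is_anchor=False))  # True
```

Splitting the flag lets an encoder keep full multi-loop reconstruction for anchor pictures (which seed inter-view prediction) while enabling the cheaper single-loop path for non-anchor pictures only.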
7,530 | 7,530 | 15,409,105 | 2,487 | Techniques for encapsulating video streams containing multiple coded views in a media file are described herein. In one example, a method includes parsing a track of video data, wherein the track includes one or more views. The method further includes parsing information to determine whether a texture view or a depth view of a reference view is required for decoding at least one of the one or more views in the track. Another example method includes composing a track of video data, wherein the track includes one or more views and composing information that indicates whether a texture view or a depth view of a reference view is required for decoding at least one of the one or more views in the track. | 1. A method of processing video data, the method comprising:
parsing a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and parsing a track reference to determine a dependency of the track to a referenced track indicated in the track reference, wherein parsing the track reference includes parsing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 2. A device for processing video data comprising:
a memory configured to store video data; and one or more processors configured to:
parse a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and
parse a track reference to determine a dependency of the track to a referenced track indicated in the track reference, wherein parsing the track reference includes parsing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 3. The device of claim 2, the device further configured to:
parse a view identifier box from at least one of a sample entry and a multi-view group entry to determine, for each view in the track, whether the view is a texture view or a depth view, wherein the at least one of the sample entry and the multi-view group entry are associated with the track. 4. The device of claim 2, the device further configured to:
parse a view identifier box to determine whether a texture view or a depth view of a reference view is required for decoding a specific view in the track; and/or parse a supplemental enhancement information (SEI) message box to determine a three dimensional scalability information SEI message associated with one or more of the views. 5. The device of claim 2, wherein the track contains the depth view of the particular view, the device further configured to:
parse a 3VC Depth Resolution box to determine a spatial resolution of the depth view of the particular view. 6. The device of claim 2, wherein the track contains the depth view of the particular view, the device further configured to:
parse a three-dimensional video coding (3VC) decoder configuration record to determine a width and a height of the depth view of the particular view. 7. The device of claim 2, wherein the track is a three-dimensional video coding (3VC) track, the device further configured to:
parse a 3VC decoder configuration record, wherein the 3VC decoder configuration record indicates a configuration record for a matching sample entry of the multiview video data. 8. A method of processing video data, the method comprising:
composing a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and composing a track reference to indicate a dependency of the track to a referenced track indicated in the track reference, wherein composing the track reference includes
composing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 9. A device for processing video data comprising:
a memory configured to store video data; and one or more processors configured to:
compose a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and
compose a track reference to indicate a dependency of the track to a referenced track indicated in the track reference, wherein composing the track reference includes
composing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 10. The device of claim 9, the one or more processors further configured to:
compose a view identifier box from at least one of a sample entry and a multi-view group entry to indicate, for each view in the track, whether the view is a texture view or a depth view, wherein the at least one of the sample entry and the multi-view group entry are associated with the track. 11. The device of claim 9, the one or more processors further configured to:
compose a view identifier box to indicate whether a texture view or a depth view of a reference view is required for decoding a specific view in the track; and/or compose a supplemental enhancement information (SEI) message box to indicate a three dimensional scalability information SEI message associated with one or more of the views. 12. The device of claim 9, wherein the track contains the depth view of the particular view, the device further configured to:
compose a 3VC Depth Resolution box to indicate a spatial resolution of the depth view of the particular view; and/or compose a three-dimensional video coding (3VC) decoder configuration record to indicate a width and a height of the depth view of the particular view. 13. The device of claim 9, wherein the track is a three-dimensional video coding (3VC) track, the device further configured to:
compose a 3VC decoder configuration record, wherein the 3VC decoder configuration record indicates a configuration record for a matching sample entry of the multiview video data. 14. A non-transitory computer-readable storage medium having instructions stored thereon that upon execution cause one or more processors of a video coding device to:
parse a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and parse a track reference to determine a dependency of the track to a referenced track indicated in the track reference, wherein parsing the track reference includes parsing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 15. A non-transitory computer-readable storage medium having instructions stored thereon that upon execution cause one or more processors of a video coding device to:
compose a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and compose a track reference to indicate a dependency of the track to a referenced track indicated in the track reference, wherein composing the track reference includes
composing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. | Techniques for encapsulating video streams containing multiple coded views in a media file are described herein. In one example, a method includes parsing a track of video data, wherein the track includes one or more views. The method further includes parsing information to determine whether a texture view or a depth view of a reference view is required for decoding at least one of the one or more views in the track. Another example method includes composing a track of video data, wherein the track includes one or more views and composing information that indicates whether a texture view or a depth view of a reference view is required for decoding at least one of the one or more views in the track. 1. A method of processing video data, the method comprising:
parsing a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and parsing a track reference to determine a dependency of the track to a referenced track indicated in the track reference, wherein parsing the track reference includes parsing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 2. A device for processing video data comprising:
a memory configured to store video data; and one or more processors configured to:
parse a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and
parse a track reference to determine a dependency of the track to a referenced track indicated in the track reference, wherein parsing the track reference includes parsing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 3. The device of claim 2, the device further configured to:
parse a view identifier box from at least one of a sample entry and a multi-view group entry to determine, for each view in the track, whether the view is a texture view or a depth view, wherein the at least one of the sample entry and the multi-view group entry are associated with the track. 4. The device of claim 2, the device further configured to:
parse a view identifier box to determine whether a texture view or a depth view of a reference view is required for decoding a specific view in the track; and/or parse a supplemental enhancement information (SEI) message box to determine a three dimensional scalability information SEI message associated with one or more of the views. 5. The device of claim 2, wherein the track contains the depth view of the particular view, the device further configured to:
parse a 3VC Depth Resolution box to determine a spatial resolution of the depth view of the particular view. 6. The device of claim 2, wherein the track contains the depth view of the particular view, the device further configured to:
parse a three-dimensional video coding (3VC) decoder configuration record to determine a width and a height of the depth view of the particular view. 7. The device of claim 2, wherein the track is a three-dimensional video coding (3VC) track, the device further configured to:
parse a 3VC decoder configuration record, wherein the 3VC decoder configuration record indicates a configuration record for a matching sample entry of the multiview video data. 8. A method of processing video data, the method comprising:
composing a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and composing a track reference to indicate a dependency of the track to a referenced track indicated in the track reference, wherein composing the track reference includes
composing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 9. A device for processing video data comprising:
a memory configured to store video data; and one or more processors configured to:
compose a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and
compose a track reference to indicate a dependency of the track to a referenced track indicated in the track reference, wherein composing the track reference includes
composing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 10. The device of claim 9, the one or more processors further configured to:
compose a view identifier box from at least one of a sample entry and a multi-view group entry to indicate, for each view in the track, whether the view is a texture view or a depth view, wherein the at least one of the sample entry and the multi-view group entry are associated with the track. 11. The device of claim 9, the one or more processors further configured to:
compose a view identifier box to indicate whether a texture view or a depth view of a reference view is required for decoding a specific view in the track; and/or compose a supplemental enhancement information (SEI) message box to indicate a three dimensional scalability information SEI message associated with one or more of the views. 12. The device of claim 9, wherein the track contains the depth view of the particular view, the device further configured to:
compose a 3VC Depth Resolution box to indicate a spatial resolution of the depth view of the particular view; and/or compose a three-dimensional video coding (3VC) decoder configuration record to indicate a width and a height of the depth view of the particular view. 13. The device of claim 9, wherein the track is a three-dimensional video coding (3VC) track, the device further configured to:
compose a 3VC decoder configuration record, wherein the 3VC decoder configuration record indicates a configuration record for a matching sample entry of the multiview video data. 14. A non-transitory computer-readable storage medium having instructions stored thereon that upon execution cause one or more processors of a video coding device to:
parse a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and parse a track reference to determine a dependency of the track to a referenced track indicated in the track reference, wherein parsing the track reference includes parsing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. 15. A non-transitory computer-readable storage medium having instructions stored thereon that upon execution cause one or more processors of a video coding device to:
compose a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view; and compose a track reference to indicate a dependency of the track to a referenced track indicated in the track reference, wherein composing the track reference includes
composing a track reference type 'deps' that indicates that the track includes the depth view of the particular view and the reference track includes the texture view of the particular view. | 2,400 |
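The claims in the record above describe resolving a 'deps' track reference: a track that carries only a depth view of a particular view points at the referenced track that carries the matching texture view. A minimal sketch of that dependency lookup follows; the `Track` structure and field names are illustrative assumptions, not an actual ISO base media file format parser.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """Hypothetical stand-in for a parsed media-file track."""
    track_id: int
    has_texture: bool = False
    has_depth: bool = False
    # track reference type (e.g. 'deps') -> referenced track IDs
    references: dict = field(default_factory=dict)

def resolve_deps(track, tracks_by_id):
    """Return the referenced track holding the texture view that a
    depth-only track depends on via a 'deps' reference, else None."""
    # 'deps' applies when the track has the depth view but not the texture view
    if not (track.has_depth and not track.has_texture):
        return None
    for ref_id in track.references.get("deps", []):
        ref = tracks_by_id.get(ref_id)
        if ref is not None and ref.has_texture:
            return ref
    return None

texture = Track(track_id=1, has_texture=True)
depth = Track(track_id=2, has_depth=True, references={"deps": [1]})
tracks = {t.track_id: t for t in (texture, depth)}
print(resolve_deps(depth, tracks).track_id)  # -> 1
```

The direction of the reference matters: the depth-view track carries the 'deps' box pointing at the texture-view track, so a reader only ever resolves from depth to texture, matching the claim language.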
7,531 | 7,531 | 15,689,621 | 2,448 | An interactive email experience is customized to the recipient's interests by modifying rich media components provided by the email based on the recipient's interactions with other rich media components from the email. To facilitate the interactive email experience, rich media components are provided by a marketer for an email campaign with mapping information mapping product features to portions of the rich media components. When an email is sent with links to the rich media components, the recipient's interactions with a rich media component are tracked. Product features are ranked based on the recipient's interactions with various portions corresponding with the various product features. The product feature rankings are then used to modify other rich media components from the email to emphasize portions of the other rich media components corresponding with product features of interest to the recipient. | 1. One or more computer storage media storing computer-useable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations comprising:
sending an email message to a recipient, the email message containing links to a plurality of pre-existing rich media components; receiving interaction information regarding the recipient interacting with a first rich media component from the plurality of pre-existing rich media components; and modifying a second rich media component from the plurality of pre-existing rich media components based on the interaction information. 2. The one or more computer storage media of claim 1, wherein the operations further comprise ranking a plurality of product features based on the interaction information to provide product feature rankings; and wherein modifying the second rich media component based on the interaction information comprises modifying the second rich media component based on the product feature rankings. 3. The one or more computer storage media of claim 2, wherein a first product feature is ranked by:
identifying, from the interaction information and mapping information mapping the product features to portions of the first rich media component, one or more specific interactions with one or more portions of the first rich media component corresponding with the first product feature; and ranking the first product feature based on the specific interactions with the one or more portions of the first rich media component corresponding with the first product feature. 4. The one or more computer storage media of claim 2, wherein ranking the plurality of product features based on the interaction information comprises:
accessing mapping information mapping the plurality of product features to portions of the first rich media component; and ranking the plurality of product features based on interactions with portions of the first rich media component identified by the interaction information. 5. The one or more computer storage media of claim 2, wherein the product features are ranked by initializing a weight for each product feature to an initial weight and modifying the weight for each of at least a portion of the product features based on the interaction information. 6. The one or more computer storage media of claim 5, wherein the weight for a first product feature is modified based on a type of interaction with one or more portions of the first rich media component corresponding with the first product feature. 7. The one or more computer storage media of claim 2, wherein the first rich media component comprises a collection of images, and wherein ranking the plurality of product features based on the interaction information comprises:
accessing mapping information mapping the plurality of product features to images within the collection of images; and ranking the plurality of product features based on interactions with images within the collection of images identified by the interaction information. 8. The one or more computer storage media of claim 2, wherein the first rich media component comprises one or more links to one or more webpages, and wherein ranking the plurality of product features based on the interaction information comprises:
determining product features corresponding with at least a portion of the one or more webpages viewed by the recipient as set forth by the user interaction information; and ranking the plurality of product features based on interactions with the at least the portion of the one or more webpages viewed by the recipient as set forth by the interaction information. 9. The one or more computer storage media of claim 2, wherein modifying the second rich media component comprises ordering portions of the second rich media component based on the product feature rankings using mapping information mapping the portions of the second rich media component to the product features. 10. The one or more computer storage media of claim 2, wherein the second rich media component comprises an image collection, and wherein modifying the second rich media component comprises ordering images within the image collection based on the product feature rankings. 11. The one or more computer storage media of claim 2, wherein the second rich media component comprises a link to a landing page, and wherein modifying the second rich media component comprises modifying the link to a second landing page. 12. The one or more computer storage media of claim 1, wherein the second rich media component is modified by at least one selected from the following: reordering one or more portions of the second rich media component; removing one or more portions of the second rich media component; and adding one or more portions to the second rich media component. 13. A computerized method for personalizing an interactive email campaign, the computerized method comprising:
storing, via a first computing process, mapping information mapping product features to portions of rich media components for the interactive email campaign; sending, via a second computing process, an email message to a recipient, the email message containing links to the rich media components; receiving, via a third computing process, user interaction information regarding the recipient interacting with portions of a first rich media component; using, via a fourth computing process, the mapping information to identify product features corresponding with the portions of the first rich media component with which the recipient interacted based on the user interaction information; and modifying, via a fifth computing process, a second rich media component by ordering portions of the second rich media component corresponding with the product features identified by the fourth computing process; wherein the first, second, third, fourth, and fifth computing processes are performed by one or more computing devices. 14. The computerized method of claim 13, wherein the second rich media component is modified by:
ranking the product features corresponding with the portions of the first rich media component with which the recipient interacted based on one or more specific interactions with each of the portions of the first rich media component with which the recipient interacted to provide product feature rankings; and modifying the second rich media component by ordering the portions of the second rich media component based on the product feature rankings. 15. The computerized method of claim 14, wherein the portions of the second rich media component are ordered by:
identifying product features corresponding with each of the portions of the second rich media component based on mapping information available for the second rich media component; and ordering the portions of the second rich media component based on product feature rankings of product features corresponding with the portions of the second rich media component. 16. A computerized system comprising:
a datastore storing mapping information mapping portions of rich media components to product features; one or more processors; and one or more computer storage media storing computer-useable instructions that, when used by the one or more processors, cause the one or more processors to:
send an email message to a recipient, the email message containing links to the rich media components;
receive interaction information regarding the recipient interacting with portions of a first rich media component;
identify product features corresponding with the portions of the first rich media component based on the mapping information;
rank the product features based on the interaction information to provide product feature rankings; and
modify a second rich media component based on the product feature rankings. 17. The system of claim 16, wherein the product features are ranked by initializing a weight for each product feature to an initial weight and modifying the weight for each of at least a portion of the product features based on the user interaction information. 18. The system of claim 16, wherein the second rich media component is modified by reordering portions of the second rich media component. 19. The system of claim 16, wherein the second rich media component is modified by removing one or more portions of the second rich media component. 20. The system of claim 16, wherein the second rich media component is modified by adding one or more portions to the second rich media component. | An interactive email experience is customized to the recipient's interests by modifying rich media components provided by the email based on the recipient's interactions with other rich media components from the email. To facilitate the interactive email experience, rich media components are provided by a marketer for an email campaign with mapping information mapping product features to portions of the rich media components. When an email is sent with links to the rich media components, the recipient's interactions with a rich media component are tracked. Product features are ranked based on the recipient's interactions with various portions corresponding with the various product features. The product feature rankings are then used to modify other rich media components from the email to emphasize portions of the other rich media components corresponding with product features of interest to the recipient. 1. One or more computer storage media storing computer-useable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations comprising:
sending an email message to a recipient, the email message containing links to a plurality of pre-existing rich media components; receiving interaction information regarding the recipient interacting with a first rich media component from the plurality of pre-existing rich media components; and modifying a second rich media component from the plurality of pre-existing rich media components based on the interaction information. 2. The one or more computer storage media of claim 1, wherein the operations further comprise ranking a plurality of product features based on the interaction information to provide product feature rankings; and wherein modifying the second rich media component based on the interaction information comprises modifying the second rich media component based on the product feature rankings. 3. The one or more computer storage media of claim 2, wherein a first product feature is ranked by:
identifying, from the interaction information and mapping information mapping the product features to portions of the first rich media component, one or more specific interactions with one or more portions of the first rich media component corresponding with the first product feature; and ranking the first product feature based on the specific interactions with the one or more portions of the first rich media component corresponding with the first product feature. 4. The one or more computer storage media of claim 2, wherein ranking the plurality of product features based on the interaction information comprises:
accessing mapping information mapping the plurality of product features to portions of the first rich media component; and ranking the plurality of product features based on interactions with portions of the first rich media component identified by the interaction information. 5. The one or more computer storage media of claim 2, wherein the product features are ranked by initializing a weight for each product feature to an initial weight and modifying the weight for each of at least a portion of the product features based on the interaction information. 6. The one or more computer storage media of claim 5, wherein the weight for a first product feature is modified based on a type of interaction with one or more portions of the first rich media component corresponding with the first product feature. 7. The one or more computer storage media of claim 2, wherein the first rich media component comprises a collection of images, and wherein ranking the plurality of product features based on the interaction information comprises:
accessing mapping information mapping the plurality of product features to images within the collection of images; and ranking the plurality of product features based on interactions with images within the collection of images identified by the interaction information. 8. The one or more computer storage media of claim 2, wherein the first rich media component comprises one or more links to one or more webpages, and wherein ranking the plurality of product features based on the interaction information comprises:
determining product features corresponding with at least a portion of the one or more webpages viewed by the recipient as set forth by the user interaction information; and ranking the plurality of product features based on interactions with the at least the portion of the one or more webpages viewed by the recipient as set forth by the interaction information. 9. The one or more computer storage media of claim 2, wherein modifying the second rich media component comprises ordering portions of the second rich media component based on the product feature rankings using mapping information mapping the portions of the second rich media component to the product features. 10. The one or more computer storage media of claim 2, wherein the second rich media component comprises an image collection, and wherein modifying the second rich media component comprises ordering images within the image collection based on the product feature rankings. 11. The one or more computer storage media of claim 2, wherein the second rich media component comprises a link to a landing page, and wherein modifying the second rich media component comprises modifying the link to a second landing page. 12. The one or more computer storage media of claim 1, wherein the second rich media component is modified by at least one selected from the following: reordering one or more portions of the second rich media component; removing one or more portions of the second rich media component; and adding one or more portions to the second rich media component. 13. A computerized method for personalizing an interactive email campaign, the computerized method comprising:
storing, via a first computing process, mapping information mapping product features to portions of rich media components for the interactive email campaign; sending, via a second computing process, an email message to a recipient, the email message containing links to the rich media components; receiving, via a third computing process, user interaction information regarding the recipient interacting with portions of a first rich media component; using, via a fourth computing process, the mapping information to identify product features corresponding with the portions of the first rich media component with which the recipient interacted based on the user interaction information; and modifying, via a fifth computing process, a second rich media component by ordering portions of the second rich media component corresponding with the product features identified by the fourth computing process; wherein the first, second, third, fourth, and fifth computing processes are performed by one or more computing devices. 14. The computerized method of claim 13, wherein the second rich media component is modified by:
ranking the product features corresponding with the portions of the first rich media component with which the recipient interacted based on one or more specific interactions with each of the portions of the first rich media component with which the recipient interacted to provide product feature rankings; and modifying the second rich media component by ordering the portions of the second rich media component based on the product feature rankings. 15. The computerized method of claim 14, wherein the portions of the second rich media component are ordered by:
identifying product features corresponding with each of the portions of the second rich media component based on mapping information available for the second rich media component; and ordering the portions of the second rich media component based on product feature rankings of product features corresponding with the portions of the second rich media component. 16. A computerized system comprising:
a datastore storing mapping information mapping portions of rich media components to product features; one or more processors; and one or more computer storage media storing computer-useable instructions that, when used by the one or more processors, cause the one or more processors to:
send an email message to a recipient, the email message containing links to the rich media components;
receive interaction information regarding the recipient interacting with portions of a first rich media component;
identify product features corresponding with the portions of the first rich media component based on the mapping information;
rank the product features based on the interaction information to provide product feature rankings; and
modify a second rich media component based on the product feature rankings. 17. The system of claim 16, wherein the product features are ranked by initializing a weight for each product feature to an initial weight and modifying the weight for each of at least a portion of the product features based on the user interaction information. 18. The system of claim 16, wherein the second rich media component is modified by reordering portions of the second rich media component. 19. The system of claim 16, wherein the second rich media component is modified by removing one or more portions of the second rich media component. 20. The system of claim 16, wherein the second rich media component is modified by adding one or more portions to the second rich media component. | 2,400 |
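Claims 5, 6, and 17 of the record above describe ranking product features by initializing each feature's weight and then adjusting it according to the type of interaction with the portions mapped to that feature. A short sketch of that weighting scheme follows; the interaction types, delta values, and function names are illustrative assumptions, since the claims do not specify concrete weights.

```python
# Hypothetical per-interaction-type weight adjustments; the claims only
# state that weights are modified based on the type of interaction.
INTERACTION_DELTAS = {"view": 1.0, "zoom": 2.0, "click": 3.0}

def rank_features(interactions, mapping, initial_weight=1.0):
    """Rank product features by recipient interest.

    interactions: list of (portion_id, interaction_type) events.
    mapping: portion_id -> product feature (the stored mapping info).
    Returns feature names sorted by descending weight.
    """
    # Initialize every mapped feature to the same starting weight.
    weights = {feature: initial_weight for feature in set(mapping.values())}
    # Bump a feature's weight for each interaction with a mapped portion.
    for portion, kind in interactions:
        feature = mapping.get(portion)
        if feature is not None:
            weights[feature] += INTERACTION_DELTAS.get(kind, 0.0)
    return sorted(weights, key=weights.get, reverse=True)

mapping = {"img1": "battery", "img2": "camera", "img3": "camera"}
events = [("img2", "click"), ("img3", "zoom"), ("img1", "view")]
print(rank_features(events, mapping))  # -> ['camera', 'battery']
```

The resulting ranking is what claims 9 and 10 then consume: portions (for example, images in a collection) of a second rich media component are reordered so that those mapped to the highest-ranked features appear first.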
7,532 | 7,532 | 14,051,322 | 2,453 | A user of a content sharing platform is identified and a playlist is generated for the user. The playlist is generated based on one or more of the user's social interactions with other entities and the user's actions associated with other media items. The playlist may be modified or deleted if the user does not access the playlist or does not consume media items from the playlist within a threshold period of time. | 1. A computer-implemented method comprising:
identifying a first user of a content sharing platform, wherein the content sharing platform comprises a plurality of media items; identifying a subset of the plurality of media items based on social interactions between the first user and one or more entities and actions performed by the first user, wherein the actions are associated with one or more media items of the content sharing platform; generating a playlist based on the subset of the plurality of media items; providing the playlist to the first user; and receiving an indication to activate the playlist from the first user. 2. The computer-implemented method of claim 1, wherein the one or more entities comprise a user of one or more of the content sharing platform or a social networking platform. 3. The computer-implemented method of claim 1, wherein the actions comprise subscribing to a channel associated with a first entity from the one or more entities. 4. The computer-implemented method of claim 1, wherein the actions comprise indicating an approval of a media item associated with a first entity from the one or more entities. 5. The computer-implemented method of claim 1, wherein the social interactions comprise adding a first entity from the one or more entities as a social connection on a social networking platform. 6. The computer-implemented method of claim 1, wherein the social interactions comprise one or more of electronic mail communications, telephone communications, SMS communications, MMS communications, chat communications, or social connection network communications. 7. The computer-implemented method of claim 1, further comprising:
providing the first user with access to media items in the playlist. 8. The computer-implemented method of claim 1, further comprising:
determining that the first user has not viewed a media item from the playlist for a threshold period of time; and deleting the playlist. 9. The computer-implemented method of claim 1, further comprising:
determining that the first user has not viewed a media item from the playlist for a threshold period of time; and removing the media item from the playlist. 10. The computer-implemented method of claim 1, wherein identifying the subset of the plurality of media items comprises:
generating a plurality of affinity scores based on user interactions between the first user and the one or more entities, each affinity score indicative of a level of connection between the first user and one entity from the one or more entities; identifying a subset of the one or more entities based on the plurality of affinity scores; and identifying the subset of the plurality of media items based on the subset of the one or more entities. 11. An apparatus comprising:
a memory to store data; a processing device coupled to the memory, the processing device configured to
identify a first user of a content sharing platform, wherein the content sharing platform comprises a plurality of media items;
identify a subset of the plurality of media items based on social interactions between the first user and one or more entities and actions performed by the first user, wherein the actions are associated with one or more media items of the content sharing platform;
generate a playlist based on the subset of the plurality of media items;
provide the playlist to the first user; and
receive an indication to activate the playlist from the first user. 12. The apparatus of claim 11, wherein the actions comprise subscribing to a channel associated with a first entity from the one or more entities. 13. The apparatus of claim 11, wherein the actions comprise indicating an approval of a media item associated with a first entity from the one or more entities. 14. The apparatus of claim 11, wherein the social interactions comprise adding a first entity from the one or more entities as a social connection on a social networking platform. 15. The apparatus of claim 11, wherein the processing device is further configured to:
determine that the first user has not accessed the playlist for a threshold period of time; and delete the playlist. 16. The apparatus of claim 11, wherein the processing device is further configured to:
determine that the first user has not viewed a first media item from the playlist for a threshold period of time; and remove the first media item from the playlist. 17. The apparatus of claim 11, wherein identifying the subset of the plurality of media items comprises:
generating a plurality of affinity scores based on user interactions between the first user and the one or more entities, each affinity score indicative of a level of connection between the first user and one entity from the one or more entities; identifying a subset of the one or more entities based on the plurality of affinity scores; and identifying the subset of the plurality of media items based on the subset of the one or more entities. 18. A non-transitory computer readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations comprising:
identifying a first user of a content sharing platform, wherein the content sharing platform comprises a plurality of media items; identifying a subset of the plurality of media items based on social interactions between the first user and one or more entities and actions performed by the first user, wherein the actions are associated with one or more media items of the content sharing platform; generating a playlist based on the subset of the plurality of media items; providing the playlist to the first user; and receiving an indication to activate the playlist from the first user. 19. The non-transitory computer readable storage medium of claim 18, wherein the actions comprise subscribing to a channel associated with a first entity from the one or more entities. 20. The non-transitory computer readable storage medium of claim 18, wherein the actions comprise indicating an approval of a media item associated with a first entity from the one or more entities. 21. The non-transitory computer readable storage medium of claim 18, wherein the social interactions comprise adding a first entity from the one or more entities as a social connection on a social networking platform. 22. The non-transitory computer readable storage medium of claim 18, the operations further comprising:
determining that the first user has not accessed the playlist for a threshold period of time; and deleting the playlist. 23. The non-transitory computer readable storage medium of claim 18, the operations further comprising:
determining that the first user has not viewed a first media item from the playlist for a threshold period of time; and removing the first media item from the playlist. 24. The non-transitory computer readable storage medium of claim 18, wherein identifying the subset of the plurality of media items comprises:
generating a plurality of affinity scores based on user interactions between the first user and the one or more entities, each affinity score indicative of a level of connection between the first user and one entity from the one or more entities; identifying a subset of the one or more entities based on the plurality of affinity scores; and identifying the subset of the plurality of media items based on the subset of the one or more entities. 25. A computer-implemented method comprising:
receiving a first user input comprising at least one of social interactions between a first user and one or more entities, or actions performed by the first user, wherein the actions are associated with one or more media items of a content sharing platform, wherein the content sharing platform comprises a plurality of media items; receiving a playlist identifying a subset of the plurality of media items based on at least one of the social interactions between the first user and one or more entities, or the actions performed by the first user; presenting the playlist via a user interface; and receiving a second user input indicating an activation of the playlist. | A user of a content sharing platform is identified and a playlist is generated for the user. The playlist is generated based on one or more of the user's social interactions with other entities and the user's actions associated with other media items. The playlist may be modified or deleted if the user does not access the playlist or does not consume media items from the playlist within a threshold period of time. 1. A computer-implemented method comprising:
identifying a first user of a content sharing platform, wherein the content sharing platform comprises a plurality of media items; identifying a subset of the plurality of media items based on social interactions between the first user and one or more entities and actions performed by the first user, wherein the actions are associated with one or more media items of the content sharing platform; generating a playlist based on the subset of the plurality of media items; providing the playlist to the first user; and receiving an indication to activate the playlist from the first user. 2. The computer-implemented method of claim 1, wherein the one or more entities comprise a user of one or more of the content sharing platform or a social networking platform. 3. The computer-implemented method of claim 1, wherein the actions comprise subscribing to a channel associated with a first entity from the one or more entities. 4. The computer-implemented method of claim 1, wherein the actions comprise indicating an approval of a media item associated with a first entity from the one or more entities. 5. The computer-implemented method of claim 1, wherein the social interactions comprise adding a first entity from the one or more entities as a social connection on a social networking platform. 6. The computer-implemented method of claim 1, wherein the social interactions comprise one or more of electronic mail communications, telephone communications, SMS communications, MMS communications, chat communications, or social connection network communications. 7. The computer-implemented method of claim 1, further comprising:
providing the first user with access to media items in the playlist. 8. The computer-implemented method of claim 1, further comprising:
determining that the first user has not viewed a media item from the playlist for a threshold period of time; and deleting the playlist. 9. The computer-implemented method of claim 1, further comprising:
determining that the first user has not viewed a media item from the playlist for a threshold period of time; and removing the media item from the playlist. 10. The computer-implemented method of claim 1, wherein identifying the subset of the plurality of media items comprises:
generating a plurality of affinity scores based on user interactions between the first user and the one or more entities, each affinity score indicative of a level of connection between the first user and one entity from the one or more entities; identifying a subset of the one or more entities based on the plurality of affinity scores; and identifying the subset of the plurality of media items based on the subset of the one or more entities. 11. An apparatus comprising:
a memory to store data; a processing device coupled to the memory, the processing device configured to
identify a first user of a content sharing platform, wherein the content sharing platform comprises a plurality of media items;
identify a subset of the plurality of media items based on social interactions between the first user and one or more entities and actions performed by the first user, wherein the actions are associated with one or more media items of the content sharing platform;
generate a playlist based on the subset of the plurality of media items;
provide the playlist to the first user; and
receive an indication to activate the playlist from the first user. 12. The apparatus of claim 11, wherein the actions comprise subscribing to a channel associated with a first entity from the one or more entities. 13. The apparatus of claim 11, wherein the actions comprise indicating an approval of a media item associated with a first entity from the one or more entities. 14. The apparatus of claim 11, wherein the social interactions comprise adding a first entity from the one or more entities as a social connection on a social networking platform. 15. The apparatus of claim 11, wherein the processing device is further configured to:
determine that the first user has not accessed the playlist for a threshold period of time; and delete the playlist. 16. The apparatus of claim 11, wherein the processing device is further configured to:
determine that the first user has not viewed a first media item from the playlist for a threshold period of time; and remove the first media item from the playlist. 17. The apparatus of claim 11, wherein identifying the subset of the plurality of media items comprises:
generating a plurality of affinity scores based on user interactions between the first user and the one or more entities, each affinity score indicative of a level of connection between the first user and one entity from the one or more entities; identifying a subset of the one or more entities based on the plurality of affinity scores; and identifying the subset of the plurality of media items based on the subset of the one or more entities. 18. A non-transitory computer readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations comprising:
identifying a first user of a content sharing platform, wherein the content sharing platform comprises a plurality of media items; identifying a subset of the plurality of media items based on social interactions between the first user and one or more entities and actions performed by the first user, wherein the actions are associated with one or more media items of the content sharing platform; generating a playlist based on the subset of the plurality of media items; providing the playlist to the first user; and receiving an indication to activate the playlist from the first user. 19. The non-transitory computer readable storage medium of claim 18, wherein the actions comprise subscribing to a channel associated with a first entity from the one or more entities. 20. The non-transitory computer readable storage medium of claim 18, wherein the actions comprise indicating an approval of a media item associated with a first entity from the one or more entities. 21. The non-transitory computer readable storage medium of claim 18, wherein the social interactions comprise adding a first entity from the one or more entities as a social connection on a social networking platform. 22. The non-transitory computer readable storage medium of claim 18, the operations further comprising:
determining that the first user has not accessed the playlist for a threshold period of time; and deleting the playlist. 23. The non-transitory computer readable storage medium of claim 18, the operations further comprising:
determining that the first user has not viewed a first media item from the playlist for a threshold period of time; and removing the first media item from the playlist. 24. The non-transitory computer readable storage medium of claim 18, wherein identifying the subset of the plurality of media items comprises:
generating a plurality of affinity scores based on user interactions between the first user and the one or more entities, each affinity score indicative of a level of connection between the first user and one entity from the one or more entities; identifying a subset of the one or more entities based on the plurality of affinity scores; and identifying the subset of the plurality of media items based on the subset of the one or more entities. 25. A computer-implemented method comprising:
receiving a first user input comprising at least one of social interactions between a first user and one or more entities, or actions performed by the first user, wherein the actions are associated with one or more media items of a content sharing platform, wherein the content sharing platform comprises a plurality of media items; receiving a playlist identifying a subset of the plurality of media items based on at least one of the social interactions between the first user and one or more entities, or the actions performed by the first user; presenting the playlist via a user interface; and receiving a second user input indicating an activation of the playlist. | 2,400 |
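Claim 10 above (and its counterparts 17 and 24) generates the playlist from per-entity affinity scores, and claims 8, 15, and 22 expire a playlist that goes untouched past a threshold. A rough illustration of both ideas, with the affinity score simplified to a raw interaction count (the claims do not specify a scoring formula, and all data shapes here are assumptions):

```python
import time

def generate_playlist(media_items, interactions, media_owner, top_n=2):
    """Keep media from the entities the user interacts with most (cf. claim 10).

    interactions: dict of entity -> interaction count with the user (assumed shape)
    media_owner:  dict of media item -> entity that provides it (assumed shape)
    """
    # affinity score: here just the raw interaction count per entity
    top_entities = set(sorted(interactions, key=interactions.get, reverse=True)[:top_n])
    # the playlist is the subset of media items owned by the top entities
    return [m for m in media_items if media_owner.get(m) in top_entities]

def maybe_delete(playlist, last_accessed, threshold, now=None):
    """Drop the playlist if it sat untouched past the threshold (cf. claims 8/15/22)."""
    now = time.time() if now is None else now
    return None if now - last_accessed > threshold else playlist
```

Claims 9, 16, and 23 would instead remove only the stale media item, i.e. a per-item rather than per-playlist application of the same timeout check.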
7,533 | 7,533 | 14,592,931 | 2,456 | A method and system for maintaining persistent network policies for a virtual machine (VM) that includes determining a name of the VM executing on a first host connected to a first network device; binding the name of the VM to a network policy for the VM on the first network device; acquiring from VM management software, using the name of the VM, a universally unique identifier (UUID) of the VM; associating the UUID to the network policy on the first network device; applying the network policy for the VM on the first network device; subscribing to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification from the VM management software of a configuration change made to the VM corresponding to the UUID; and updating the network policy of the VM to reflect the configuration change of the VM. | 1. A method for maintaining persistent network policies for a virtual machine (VM), the method comprising:
determining a name of the VM, wherein the VM is executing on a first host connected to a first network device; binding the name of the VM to a network policy for the VM on the first network device; acquiring from VM management software, using the name of the VM, a universally unique identifier (UUID) of the VM; associating the UUID to the network policy on the first network device; applying the network policy for the VM on the first network device; subscribing to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification from the VM management software of a configuration change made to the VM corresponding to the UUID; and updating the network policy of the VM to reflect the configuration change of the VM. 2. The method of claim 1, further comprising:
determining if there is a second network device that requires the network policy for the VM; and distributing the network policy to the second network device. 3. The method of claim 2, further comprising:
subscribing, by the second network device, after receiving the network policy from the first network device, to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification at the second network device from the VM management software that the VM is migrating; detecting, by the second network device, that the VM has migrated from the first host to a second host connected to the second network device; identifying, using the UUID, the network policy associated with the VM; and applying the network policy for the VM on the second network device. 4. The method of claim 1, wherein the network policy comprises an access control list (ACL). 5. The method of claim 1, wherein the network policy comprises a quality of service policy. 6. The method of claim 1, wherein the configuration change is one selected from a group consisting of a name change, a location change, and a networking configuration change. 7. The method of claim 1, wherein applying the network policy comprises implementing the network policy in hardware of the first network device. 8. A system for maintaining persistent network policies for a virtual machine (VM), the system comprising:
a first network device comprising a network policy; a VM comprising a name and a universally unique identifier (UUID), wherein the VM is executing on a first host that is operatively connected to the first network device; VM management software executing on a computing device that is operatively connected to the first network device and operatively connected to the first host; wherein the first network device is configured to:
determine a name of the VM;
bind the name of the VM to the network policy for the VM on the first network device;
acquire from VM management software, using the name of the VM, the UUID of the VM;
associate the UUID to the network policy on the first network device;
apply the network policy for the VM on the first network device;
subscribe to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID;
receive notification from the VM management software of a configuration change made to the VM corresponding to the UUID; and
update the network policy of the VM to reflect the configuration change of the VM. 9. The system of claim 8 wherein the first network device is one selected from a group consisting of a switch and a router. 10. The system of claim 8, wherein the first network device is further configured to:
determine whether there is a second network device that requires the network policy for the VM; and distribute the network policy to the second network device. 11. The system of claim 10, wherein the second network device is further configured to:
subscribe, after receiving the network policy from the first network device, to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receive notification at the second network device from the VM management software that the VM is migrating; detect, by the second network device, that the VM has migrated from the first host to a second host connected to the second network device; identify, using the UUID, the network policy associated with the VM; and apply the network policy for the VM on the second network device. 12. The system of claim 8, wherein the network policy comprises an access control list (ACL). 13. The system of claim 8, wherein the network policy is one selected from a group consisting of a firewall policy and a network traffic shaping policy. 14. The system of claim 8, wherein the configuration change is one selected from a group consisting of name change, location change, internet protocol address change, and media access control address change. 15. The system of claim 8, wherein applying the network policy comprises implementing the network policy in software of the first network device. 16. A non-transitory computer readable medium comprising instructions, which when executed by a processor, perform a method for maintaining persistent network policies for a virtual machine (VM), the method comprising:
determining a name of the VM, wherein the VM is executing on a first host connected to a first network device; binding the name of the VM to a network policy for the VM on the first network device; acquiring from VM management software, using the name of the VM, a universally unique identifier (UUID) of the VM; associating the UUID to the network policy on the first network device; applying the network policy for the VM on the first network device; subscribing to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification from the VM management software of a configuration change made to the VM corresponding to the UUID; and updating the network policy of the VM to reflect the configuration change of the VM. 17. The non-transitory computer readable medium of claim 16, the method further comprising:
determining if there is a second network device that requires the network policy for the VM; and distributing the network policy to the second network device. 18. The non-transitory computer readable medium of claim 17, the method further comprising:
subscribing, by the second network device, after receiving the network policy from the first network device, to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification at the second network device from the VM management software that the VM is migrating; detecting, by the second network device, that the VM has migrated from the first host to a second host connected to the second network device; identifying, using the UUID, the network policy associated with the VM; and applying the network policy for the VM on the second network device. 19. The non-transitory computer readable medium of claim 16, wherein the network policy comprises an access control list (ACL). 20. The non-transitory computer readable medium of claim 16, wherein the configuration change is one selected from a group consisting of a name change, a location change, and a networking configuration change. | A method and system for maintaining persistent network policies for a virtual machine (VM) that includes determining a name of the VM executing on a first host connected to a first network device; binding the name of the VM to a network policy for the VM on the first network device; acquiring from VM management software, using the name of the VM, a universally unique identifier (UUID) of the VM; associating the UUID to the network policy on the first network device; applying the network policy for the VM on the first network device; subscribing to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification from the VM management software of a configuration change made to the VM corresponding to the UUID; and updating the network policy of the VM to reflect the configuration change of the VM. 1. A method for maintaining persistent network policies for a virtual machine (VM), the method comprising:
determining a name of the VM, wherein the VM is executing on a first host connected to a first network device; binding the name of the VM to a network policy for the VM on the first network device; acquiring from VM management software, using the name of the VM, a universally unique identifier (UUID) of the VM; associating the UUID to the network policy on the first network device; applying the network policy for the VM on the first network device; subscribing to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification from the VM management software of a configuration change made to the VM corresponding to the UUID; and updating the network policy of the VM to reflect the configuration change of the VM. 2. The method of claim 1, further comprising:
determining if there is a second network device that requires the network policy for the VM; and distributing the network policy to the second network device. 3. The method of claim 2, further comprising:
subscribing, by the second network device, after receiving the network policy from the first network device, to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification at the second network device from the VM management software that the VM is migrating; detecting, by the second network device, that the VM has migrated from the first host to a second host connected to the second network device; identifying, using the UUID, the network policy associated with the VM; and applying the network policy for the VM on the second network device. 4. The method of claim 1, wherein the network policy comprises an access control list (ACL). 5. The method of claim 1, wherein the network policy comprises a quality of service policy. 6. The method of claim 1, wherein the configuration change is one selected from a group consisting of a name change, a location change, and a networking configuration change. 7. The method of claim 1, wherein applying the network policy comprises implementing the network policy in hardware of the first network device. 8. A system for maintaining persistent network policies for a virtual machine (VM), the system comprising:
a first network device comprising a network policy; a VM comprising a name and a universally unique identifier (UUID), wherein the VM is executing on a first host that is operatively connected to the first network device; VM management software executing on a computing device that is operatively connected to the first network device and operatively connected to the first host; wherein the first network device is configured to:
determine a name of the VM;
bind the name of the VM to the network policy for the VM on the first network device;
acquire from VM management software, using the name of the VM, the UUID of the VM;
associate the UUID to the network policy on the first network device;
apply the network policy for the VM on the first network device;
subscribe to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID;
receive notification from the VM management software of a configuration change made to the VM corresponding to the UUID; and
update the network policy of the VM to reflect the configuration change of the VM. 9. The system of claim 8 wherein the first network device is one selected from a group consisting of a switch and a router. 10. The system of claim 8, wherein the first network device is further configured to:
determine whether there is a second network device that requires the network policy for the VM; and distribute the network policy to the second network device. 11. The system of claim 10, wherein the second network device is further configured to:
subscribe, after receiving the network policy from the first network device, to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receive notification at the second network device from the VM management software that the VM is migrating; detect, by the second network device, that the VM has migrated from the first host to a second host connected to the second network device; identify, using the UUID, the network policy associated with the VM; and apply the network policy for the VM on the second network device. 12. The system of claim 8, wherein the network policy comprises an access control list (ACL). 13. The system of claim 8, wherein the network policy is one selected from a group consisting of a firewall policy and a network traffic shaping policy. 14. The system of claim 8, wherein the configuration change is one selected from a group consisting of name change, location change, internet protocol address change, and media access control address change. 15. The system of claim 8, wherein applying the network policy comprises implementing the network policy in software of the first network device. 16. A non-transitory computer readable medium comprising instructions, which when executed by a processor, perform a method for maintaining persistent network policies for a virtual machine (VM), the method comprising:
determining a name of the VM, wherein the VM is executing on a first host connected to a first network device; binding the name of the VM to a network policy for the VM on the first network device; acquiring from VM management software, using the name of the VM, a universally unique identifier (UUID) of the VM; associating the UUID to the network policy on the first network device; applying the network policy for the VM on the first network device; subscribing to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification from the VM management software of a configuration change made to the VM corresponding to the UUID; and updating the network policy of the VM to reflect the configuration change of the VM. 17. The non-transitory computer readable medium of claim 16, the method further comprising:
determining if there is a second network device that requires the network policy for the VM; and distributing the network policy to the second network device. 18. The non-transitory computer readable medium of claim 17, the method further comprising:
subscribing, by the second network device, after receiving the network policy from the first network device, to receive notifications from the VM management software of changes to the configuration of the VM corresponding to the UUID; receiving notification at the second network device from the VM management software that the VM is migrating; detecting, by the second network device, that the VM has migrated from the first host to a second host connected to the second network device; identifying, using the UUID, the network policy associated with the VM; and applying the network policy for the VM on the second network device. 19. The non-transitory computer readable medium of claim 16, wherein the network policy comprises an access control list (ACL). 20. The non-transitory computer readable medium of claim 16, wherein the configuration change is one selected from a group consisting of a name change, a location change, and a networking configuration change. | 2,400 |
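The claims above describe a protocol for keeping a network policy bound to a VM (keyed by its UUID) as the VM migrates between hosts and network devices. The bookkeeping can be sketched as follows; all class and method names here are invented for illustration and do not correspond to any real switch or hypervisor API.

```python
# Sketch of UUID-keyed policy persistence across VM migration.
# All names (VMManager, NetworkDevice, ...) are illustrative only.

class VMManager:
    """Stands in for the VM management software: maps VM names to UUIDs
    and notifies subscribers of configuration changes."""
    def __init__(self):
        self.uuids = {}          # VM name -> UUID
        self.subscribers = {}    # UUID -> list of callbacks

    def uuid_for(self, name):
        return self.uuids[name]

    def subscribe(self, uuid, callback):
        self.subscribers.setdefault(uuid, []).append(callback)

    def notify(self, uuid, event):
        for cb in self.subscribers.get(uuid, []):
            cb(event)

class NetworkDevice:
    """Stands in for a switch that stores policies keyed by VM UUID."""
    def __init__(self, name, manager):
        self.name = name
        self.manager = manager
        self.policies = {}       # UUID -> network policy (e.g. an ACL)
        self.applied = set()     # UUIDs whose policy is active on this device

    def bind(self, vm_name, policy):
        uuid = self.manager.uuid_for(vm_name)        # acquire UUID by name
        self.policies[uuid] = policy                 # associate UUID -> policy
        self.applied.add(uuid)                       # apply on this device
        self.manager.subscribe(uuid, self.on_event)  # watch for config changes
        return uuid

    def receive_policy(self, uuid, policy):
        """Second device receives the distributed policy and subscribes."""
        self.policies[uuid] = policy
        self.manager.subscribe(uuid, self.on_event)

    def on_event(self, event):
        if event["type"] == "migrated" and event["to_device"] is self:
            self.applied.add(event["uuid"])          # re-apply after migration

# Usage: bind on device A, distribute to device B, then migrate the VM.
mgr = VMManager()
mgr.uuids["web-vm"] = "uuid-1234"
dev_a, dev_b = NetworkDevice("A", mgr), NetworkDevice("B", mgr)
uuid = dev_a.bind("web-vm", {"acl": ["deny tcp any any eq 23"]})
dev_b.receive_policy(uuid, dev_a.policies[uuid])
mgr.notify(uuid, {"type": "migrated", "uuid": uuid, "to_device": dev_b})
assert uuid in dev_b.applied    # the policy follows the VM to the new device
```

The key design point the claims turn on is using the UUID, rather than the mutable VM name or address, as the policy key, so a name, location, or addressing change does not orphan the policy.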
7,534 | 7,534 | 15,060,139 | 2,416 | An apparatus comprises: a memory; and a processor coupled to the memory and configured to: perform a random number generation; generate a host identifier (HID) based on the random number generation, wherein the HID is substantially unique within a local network; and generate, using the HID, an initial message requesting a local address. A method comprises: performing a random number generation; generating a host identifier (HID) based on the random number generation, wherein the HID is substantially unique within a local network; generating, using the HID, an initial message requesting a local address; and transmitting the initial message. | 1. An apparatus comprising:
a memory; and a processor coupled to the memory and configured to:
perform a random number generation;
generate a host identifier (HID) based on the random number generation, wherein the HID is substantially unique within a local network; and
generate, using the HID, an initial message requesting a local address. 2. The apparatus of claim 1, wherein the apparatus is an endpoint client in the local network. 3. The apparatus of claim 1, wherein the HID comprises a number of bits, and wherein the number guarantees within a probability that the HID is unique within the local network. 4. The apparatus of claim 1, wherein the HID is at least 48 bits. 5. The apparatus of claim 1, wherein the initial message comprises a destination media access control (DMAC) field, a source media access control (SMAC) field, an EtherType field, a server local address assignment protocol (S-LAAP) type field, and an HID field. 6. The apparatus of claim 5, wherein the DMAC field comprises a multicast address, the SMAC field comprises a first value indicating that the local address is unknown, the EtherType field comprises two octets indicating an S-LAAP protocol, the S-LAAP type field comprises a second value indicating that the initial message is an initial message type, and the HID field comprises the HID. 7. The apparatus of claim 1, wherein the processor is further configured to select, from among a plurality of response messages originating from servers in response to the initial message, a first response message comprising the local address part and a media access control (MAC) address based on the local address part. 8. The apparatus of claim 7, wherein the processor is further configured to generate a confirmation message in response to the first response message, and wherein the confirmation message comprises a destination media access control (DMAC) field, a source media access control (SMAC) field, an EtherType field, a server local address assignment protocol (S-LAAP) type field, and an HID field. 9. 
The apparatus of claim 8, wherein the DMAC field comprises a multicast address, the SMAC field comprises the MAC address, the EtherType field comprises two octets indicating an S-LAAP protocol, the S-LAAP type field comprises a value indicating that the confirmation message is a confirmation message type, and the HID field comprises the HID. 10. The apparatus of claim 8, further comprising a transmitter coupled to the processor and configured to transmit the initial message and the confirmation message to a proxy. 11. The apparatus of claim 8, further comprising a transmitter coupled to the processor and configured to transmit the initial message and the confirmation message to an intermediate node. 12. A proxy comprising:
a first port configured to receive an initial message from a client, wherein the initial message comprises a host identifier (HID) that substantially uniquely identifies the client within a local network; a processor coupled to the first port and configured to amend the initial message to create an amended initial message comprising a port value corresponding to the first port; and a transmitter coupled to the processor and configured to transmit the amended initial message to a server located outside the local network. 13. The proxy of claim 12, wherein the amended initial message comprises a destination media access control (DMAC) field, a source media access control (SMAC) field, an EtherType field, a server local address assignment protocol (S-LAAP) type field, an HID field, and a port field. 14. The proxy of claim 13, wherein the DMAC field comprises a multicast address, the SMAC field comprises a proxy media access control (MAC) address, the EtherType field comprises two octets indicating an S-LAAP protocol, the S-LAAP type field comprises a value indicating that the initial message is an initial message type, the HID field comprises the HID, and the port field comprises the port value. 15. The proxy of claim 12, further comprising a second port coupled to the processor and configured to receive a response message from the server, wherein the response message comprises a media access control (MAC) address for the client and the port value. 16. The proxy of claim 15, wherein the proxy is configured to transmit the response message to the client through the first port and in response to receiving the response message from the first server. 17. A method comprising:
performing a random number generation; generating a host identifier (HID) based on the random number generation, wherein the HID is substantially unique within a local network; generating, using the HID, an initial message requesting a local address; and transmitting the initial message. 18. The method of claim 17, wherein the generating the initial message comprises:
generating a destination media access control (DMAC) field comprising a multicast address; generating a source media access control (SMAC) field comprising a first value indicating that the local address is unknown; generating an EtherType field comprising two octets indicating a server local address assignment protocol (S-LAAP) protocol; generating an S-LAAP type field comprising a second value indicating that the initial message is an initial message type; and generating an HID field comprising the HID. 19. The method of claim 17, further comprising:
receiving a response message originating in response to the initial message; selecting, based on at least one criterion, a first response message from among the response message; and extracting, from the first response message, a media access control (MAC) address. 20. The method of claim 19, further comprising generating, in response to the first response message, a confirmation message by:
generating a destination media access control (DMAC) field comprising a multicast address; generating a source media access control (SMAC) field comprising the MAC address; generating an EtherType field comprising two octets indicating an S-LAAP protocol; generating a server local address assignment protocol (S-LAAP) type field comprising a value indicating that the confirmation message is a confirmation message type; and generating an HID field comprising the HID. | 2,400 |
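The S-LAAP claims in this record describe building an initial message from a DMAC field, an SMAC field, a two-octet EtherType, an S-LAAP type field, and an HID field, where the HID is a random value of at least 48 bits. A minimal sketch of that frame construction follows; the field widths and the multicast, EtherType, and type constants are assumptions made for illustration, since the claims only constrain the HID size.

```python
# Sketch of the "initial message" layout in the claims:
# DMAC | SMAC | EtherType | S-LAAP type | HID.
# Constants below are placeholders, not values from the disclosure.

import secrets
import struct

MULTICAST_DMAC  = bytes.fromhex("0180c2000099")  # assumed multicast address
UNKNOWN_SMAC    = bytes(6)                       # all-zero: local address unknown
ETHERTYPE_SLAAP = 0x88B7                         # placeholder two-octet EtherType
TYPE_INITIAL    = 0x01                           # assumed "initial message" type

def generate_hid(bits=48):
    """Random HID; 48 bits makes a collision across a LAN-sized
    population of clients unlikely, matching the claim's wording
    of 'substantially unique within a local network'."""
    return secrets.randbits(bits)

def build_initial_message(hid):
    return (MULTICAST_DMAC
            + UNKNOWN_SMAC
            + struct.pack("!H", ETHERTYPE_SLAAP)   # two octets, network order
            + struct.pack("!B", TYPE_INITIAL)
            + hid.to_bytes(6, "big"))              # 48-bit HID field

hid = generate_hid()
frame = build_initial_message(hid)
assert len(frame) == 6 + 6 + 2 + 1 + 6
```

The response and confirmation messages in claims 5 through 9 reuse the same layout, differing only in the S-LAAP type value and in the SMAC field, which carries the assigned MAC address once the client has one.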
7,535 | 7,535 | 15,206,586 | 2,425 | System and method for facilitating advertisements within viewed content. The advertisements may be banner advertisements or other advertisements. The advertisements may be included in such a manner that if a user skips or otherwise fast forwards through the advertisements, the user is forced to skip through at least a portion of the viewed content. | 1. (canceled) 2. A method comprising:
receiving, by a computing device, a request indicating requested video content, wherein the requested video content comprises a plurality of frames each having a first aspect ratio; determining a spatial dimension of an advertisement selected for display with the requested video content; generating, by the computing device, combined video content that comprises one or more frames, each of the one or more frames comprising:
a resized content portion having a second aspect ratio, wherein the second aspect ratio is based on a reduction in a spatial dimension of the first aspect ratio by at least the determined spatial dimension of the advertisement, and wherein the resized content portion is associated with the requested video content; and
an advertisement portion associated with the advertisement, wherein the advertisement portion does not overlap the resized content portion; and
sending the combined video content. 3. The method of claim 2, wherein sending the combined video content further comprises sending the combined video content to a user device configured to cause display of the combined video content. 4. The method of claim 2, wherein sending the combined video content further comprises sending the combined video content to a user device requesting the requested video content, and wherein the advertisement portion of the combined video content comprises a link. 5. The method of claim 4, further comprising:
receiving, from the user device, a message indicative of selection of the link; receiving, from an advertisement server, advertisement information describing the advertisement; determining an advertisement topic associated with the advertisement; and in response to receiving the message indicative of selection of the link:
determining additional content associated with the advertisement topic; and
sending the additional content to the user device. 6. The method of claim 4, further comprising:
receiving, from the user device, a message indicative of selection of the link; and in response to receiving the message indicative of selection of the link:
sending, to the user device, a signal, wherein the signal is configured to display on the user device an internet webpage associated with the advertisement. 7. The method of claim 2, wherein the spatial dimension of the advertisement corresponds to a width of a pillar box or a height of a letter box. 8. The method of claim 2, wherein the first aspect ratio corresponds to a full screen frame, and wherein the second aspect ratio corresponds to a widescreen frame. 9. A method comprising:
receiving, by a computing device and from a user associated with a user device, a request indicating video content, wherein the video content comprises a plurality of frames; determining a spatial dimension of an advertisement for display with the video content; generating, by the computing device, combined video content that comprises:
a resized version of the video content, wherein the resized version of the video content is resized based on the spatial dimension of the advertisement; and
the advertisement sized based on the spatial dimension of the advertisement and positioned adjacent to the resized version of the video content; and
sending, to the user device, the combined video content. 10. The method of claim 9, further comprising:
receiving, from a video server, video content information describing the video content; and
selecting, based on the video content information, the advertisement for display. 11. The method of claim 9, further comprising:
receiving, from a video server, video content information describing the video content; receiving, from an advertisement server, advertisement information describing the advertisement; and selecting, based on the video content information and the advertisement information, the advertisement for display with the video content. 12. The method of claim 11, further comprising:
determining a video content subject matter related to the video content information; determining an advertisement subject matter related to the advertisement information; and wherein selecting the advertisement for display with the video content is further based on the video content subject matter and the advertisement subject matter. 13. The method of claim 9, further comprising:
receiving information associated with the user; and selecting, based on the information associated with the user, the advertisement for display with the video content. 14. The method of claim 9, further comprising:
receiving, from the user device, playback capabilities of the user device; and selecting, based on the playback capabilities of the user device, the advertisement for display with the video content. 15. The method of claim 9, further comprising:
receiving, from an advertising server, a plurality of advertisements and a plurality of conditions for selection of the advertisement from the plurality of advertisements; and selecting, based on the plurality of conditions, the advertisement from the plurality of advertisements. 16. The method of claim 15, further comprising:
determining a number of times the advertisement is selected; determining, based on the number of times the advertisement is selected, an advertisement selection cost; and sending, to the advertisement server, the advertisement selection cost. 17. The method of claim 9, further comprising:
receiving, from an advertising server, a plurality of advertisements and an algorithm for a selection of the advertisement for display; and selecting, based on the algorithm, the advertisement from the plurality of advertisements for display with the video content. 18. A method comprising:
receiving, by a computing device and from an advertising server, a plurality of advertisements and at least one condition for selection of an advertisement from the plurality of advertisements; receiving, from a user device, a request indicating video content; selecting, based on the at least one condition, the advertisement from the plurality of advertisements for inclusion within the video content; determining a spatial dimension of the selected advertisement; generating, by the computing device, combined video content that comprises:
a resized version of the video content, wherein the resized version of the video content is resized based on the spatial dimension of the advertisement; and
the advertisement sized based on the spatial dimension of the advertisement and positioned adjacent to the resized version of the video content; and
sending the combined video content to the user device configured to cause display of the combined video content. 19. The method of claim 18, further comprising:
determining, based on the at least one condition for selection of the advertisement from the plurality of advertisements, an advertisement selection cost; and sending, to the advertisement server, the advertisement selection cost. 20. The method of claim 18, further comprising:
receiving, from a video server, video content information describing the video content; receiving, from the advertisement server, advertisement information describing the advertisement; and wherein selecting the advertisement from the plurality of advertisements is further based on the video content information or the advertisement information. 21. The method of claim 20, further comprising:
receiving user information indicative of viewing habits associated with a user; and wherein selecting the advertisement from the plurality of advertisements is further based on the user information. | 2,400 |
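Claims 2 through 8 of this record reduce a spatial dimension of the content frame by the advertisement's spatial dimension, so the ad occupies an adjacent, non-overlapping pillar box (side strip) or letter box (bottom strip). That composition arithmetic can be sketched as below; the frame and ad sizes are examples, not values from the disclosure.

```python
# Sketch of the frame-composition arithmetic in the claims: shrink the
# content by the ad's spatial dimension so the ad sits in a non-overlapping
# pillar box (ad_w) or letter box (ad_h) within the same output frame.

def compose(frame_w, frame_h, ad_w=0, ad_h=0):
    """Return (content_rect, ad_rect) as (x, y, w, h) tuples.
    Exactly one of ad_w (pillar box) or ad_h (letter box) should be set."""
    content = (0, 0, frame_w - ad_w, frame_h - ad_h)
    if ad_w:                       # pillar box on the right edge
        ad = (frame_w - ad_w, 0, ad_w, frame_h)
    else:                          # letter box along the bottom edge
        ad = (0, frame_h - ad_h, frame_w, ad_h)
    return content, ad

# A 1920x1080 frame with a 320-pixel-wide pillar-box advertisement:
content, ad = compose(1920, 1080, ad_w=320)
assert content == (0, 0, 1600, 1080)
assert ad == (1600, 0, 320, 1080)   # adjacent to the content, no overlap
```

Because the ad is baked into every frame of the combined video rather than spliced in as a separate segment, fast-forwarding past the ad necessarily fast-forwards past the resized content as well, which is the behavior the abstract describes.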
7,536 | 7,536 | 15,013,452 | 2,431 | Disclosed are various embodiments for detecting and responding to attacks on a computer network. One embodiment of such a method describes monitoring dropped data communications intended for a target class of first virtual machine nodes; determining whether a dropped data communication is a form of attack on a network to which the first virtual machine nodes are connected; and sending a notification message of the determined attack to a data transmission system manager node thereby causing the data transmission system manager node to generate a list of one or more internet protocol addresses associated with a source of the dropped data communication and send the list of one or more internet protocol addresses to at least one second transmission manager node for second virtual machine nodes that are not part of the target class. | 1. A method comprising:
monitoring, by a network diagnostic system node, a data communication dropped by a first transmission manager node servicing a target class of first virtual machine nodes; determining, by the network diagnostic system node, that the dropped data communication is a form of attack on a network to which the first virtual machine nodes are connected; and sending, by the network diagnostic system node, a notification message of the determined attack to a data transmission system manager node thereby causing the data transmission system manager node to generate a list of one or more internet protocol addresses associated with a source of the dropped data communication and send the list of one or more internet protocol addresses to at least one second transmission manager node for second virtual machine nodes that are not part of the target class. | 2,400 |
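The flow claimed in this row (a diagnostic node classifying dropped communications as an attack, and a manager node propagating the source IP list to transmission managers outside the target class) can be sketched as below. The threshold value and the in-memory `blocklist` representation are assumptions for illustration, not details from the claim.

```python
from collections import Counter

# Hypothetical sketch: count dropped communications per source IP; sources
# at or above a drop threshold are treated as attackers, and their IPs are
# pushed to the transmission managers serving the non-target VM nodes.
ATTACK_THRESHOLD = 3  # illustrative: drops from one source before flagging

def detect_attack(dropped, threshold=ATTACK_THRESHOLD):
    """Return the sorted source IPs whose drop count meets the threshold."""
    counts = Counter(d["source_ip"] for d in dropped)
    return sorted(ip for ip, n in counts.items() if n >= threshold)

def notify_and_block(dropped, other_managers):
    """Send the attacker IP list to every manager outside the target class."""
    ip_list = detect_attack(dropped)
    for manager in other_managers:
        manager.setdefault("blocklist", []).extend(ip_list)
    return ip_list
```

In the claim, the diagnostic node only sends a notification; generating and distributing the IP list is the manager node's job, which the second function collapses into one step for brevity.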
7,537 | 7,537 | 14,559,442 | 2,473 | A first network device adapted for communication with one or more other network devices is configured to determine a number of receivers of a multicast, and to control switching of traffic between selective and inclusive routes for the multicast based at least in part on the determined number of receivers. For example, in some embodiments the first network device is configured to control switching of traffic between the selective and inclusive routes for the multicast by utilizing a selective route for the multicast responsive to a determination that traffic for the multicast is at or above a bandwidth threshold and the number of receivers is below an add threshold, and utilizing an inclusive route for the multicast responsive to a determination that traffic for the multicast is below the bandwidth threshold or the number of receivers is above a delete threshold. | 1. An apparatus comprising:
a first network device adapted for communication with one or more other network devices; the first network device being configured: to determine a number of receivers of a multicast; and to control switching of traffic between selective and inclusive routes for the multicast based at least in part on the determined number of receivers. 2. The apparatus of claim 1 wherein the first network device is configured to control switching of traffic between the selective and inclusive routes for the multicast by utilizing a selective route for the multicast responsive to a determination that:
(i) traffic for the multicast is at or above a bandwidth threshold; and
(ii) the number of receivers is below an add threshold. 3. The apparatus of claim 2 wherein utilizing the selective route comprises establishing the selective route responsive to the determination of (i) and (ii). 4. The apparatus of claim 1 wherein the first network device is configured to control switching of traffic between the selective and inclusive routes for the multicast by utilizing an inclusive route for the multicast responsive to a determination that:
(i) traffic for the multicast is below a bandwidth threshold; or
(ii) the number of receivers is above a delete threshold. 5. The apparatus of claim 4 wherein utilizing the inclusive route comprises tearing down a previously-established selective route and transitioning to the inclusive route responsive to the determination of (i) and (ii). 6. The apparatus of claim 4 wherein the delete threshold is greater than an add threshold specifying a number of receivers below which a selective route is utilized for the multicast if traffic for the multicast is at or above the bandwidth threshold. 7. The apparatus of claim 1 wherein the first network device is configured to control switching of traffic between the selective and inclusive routes for the multicast by establishing a selective route based at least in part on the number of receivers determined in a first iteration being below an add threshold, and subsequently tearing down the selective route and transitioning to an inclusive route based at least in part on the number of receivers determined in a second iteration being above a delete threshold. 8. The apparatus of claim 7 wherein the first network device is configured to control switching of traffic between the selective and inclusive routes for the multicast by again establishing a selective route based at least in part on the number of receivers determined in a third iteration being below the add threshold. 9. The apparatus of claim 1 wherein the first network device is configured to determine the number of receivers of the multicast by tracking receivers of the multicast based at least in part on leaf information received from the receivers responsive to a leaf information requirement established for the multicast. 10. The apparatus of claim 9 wherein the first network device is configured to establish the leaf information requirement for the multicast by originating a selective route that specifies the leaf information requirement but does not identify a tunnel for carrying traffic for the multicast. 11. 
The apparatus of claim 10 wherein the specified leaf information requirement of the selective route is established by setting a leaf information field of a tunnel attribute of the selective route to a predetermined value. 12. The apparatus of claim 11 wherein the leaf information field of the tunnel attribute of the selective route comprises a leaf information required flag that is set to a predetermined logic value to indicate the specified leaf information requirement. 13. The apparatus of claim 1 wherein the selective and inclusive routes comprise respective S-PMSI and I-PMSI routes. 14. A communication network comprising the apparatus of claim 1. 15. A method comprising:
determining a number of receivers of a multicast; and controlling switching of traffic between selective and inclusive routes for the multicast based at least in part on the determined number of receivers; wherein the determining and controlling are performed by a network device. 16. The method of claim 15 wherein controlling switching of traffic between selective and inclusive routes for the multicast comprises utilizing a selective route for the multicast responsive to a determination that:
(i) traffic for the multicast is at or above a bandwidth threshold; and
(ii) the number of receivers is below an add threshold. 17. The method of claim 15 wherein controlling switching of traffic between selective and inclusive routes for the multicast comprises utilizing an inclusive route for the multicast responsive to a determination that:
(i) traffic for the multicast is below a bandwidth threshold; or
(ii) the number of receivers is above a delete threshold. 18. The method of claim 15 wherein controlling switching of traffic between the selective and inclusive routes for the multicast comprises:
establishing a selective route based at least in part on the number of receivers determined in a first iteration being below an add threshold; and
subsequently tearing down the selective route and transitioning to an inclusive route based at least in part on the number of receivers determined in a second iteration being above a delete threshold. 19. The method of claim 18 wherein controlling switching of traffic between the selective and inclusive routes for the multicast comprises again establishing a selective route based at least in part on the number of receivers determined in a third iteration being below the add threshold. 20. An article of manufacture comprising a processor-readable storage medium having embodied therein executable program code that when executed by a network device causes the network device:
to determine a number of receivers of a multicast; and to control switching of traffic between selective and inclusive routes for the multicast based at least in part on the determined number of receivers. | 2,400 |
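The switching conditions in claims 2, 4, and 6 of this row form a hysteresis: the selective route is adopted when traffic is at or above the bandwidth threshold and receivers are below the add threshold, the inclusive route when traffic falls below the threshold or receivers exceed the delete threshold, and (per claim 6) the delete threshold sits above the add threshold so the route does not flap when the receiver count hovers near a single cutoff. A minimal sketch, with function and parameter names invented for illustration:

```python
# Hypothetical sketch of the claimed selective/inclusive route hysteresis.
def choose_route(current, traffic_bps, receivers,
                 bandwidth_threshold, add_threshold, delete_threshold):
    """Return 'selective' or 'inclusive' for the next iteration."""
    # Claim 2: high traffic AND few receivers -> selective route.
    if traffic_bps >= bandwidth_threshold and receivers < add_threshold:
        return "selective"
    # Claim 4: low traffic OR many receivers -> inclusive route.
    if traffic_bps < bandwidth_threshold or receivers > delete_threshold:
        return "inclusive"
    # Receiver count lies in the band [add, delete]: keep the current route.
    return current
```

Iterating this function reproduces the claim 7/8 behavior: establish a selective route in one iteration, tear it down and transition to the inclusive route in a later iteration once the receiver count crosses the delete threshold, and re-establish it if the count falls below the add threshold again.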
7,538 | 7,538 | 14,244,537 | 2,416 | Aspects of the present disclosure involve systems, methods, computer program products, and the like, for providing a web conferencing service. In one example, the system and methods involve a real-time application programming interface (RTAPI) component in the telecommunications network. The RTAPI is configured, in one embodiment, to provide a platform through which one or more users of the telecommunications network interface with one or more conferencing components of the network. In one example, the RTAPI may be configured to coordinate a dial-out to a participant of a conference at a designated time such that the participant is entered into the conference automatically. | 1. A method for facilitating a collaboration conference in a telecommunications network, the method comprising:
receiving a collaboration conference access request from a client application program executed on a client device at an application programming interface of a server associated with the telecommunications network; selecting a hosting conference bridge from a plurality of conference bridges associated with the telecommunications network and configured to host a collaboration conference, the selection occurring in response to the collaboration conference access request; translating the received collaboration conference access request to one or more instructions specific to the selected hosting conferencing bridge; and transmitting the one or more instructions specific to the selected hosting conferencing bridge to the selected hosting conferencing bridge, wherein the one or more instructions specific to the selected hosting conferencing bridge include an indication of the collaboration conference access request from the client application program. 2. The method of claim 1 further comprising:
receiving connection information at the application programming interface of the server from the selected hosting conferencing bridge configured to connect the client device to the collaboration conference hosted by the selected hosting conferencing bridge. 3. The method of claim 2 further comprising:
connecting the client device to the collaboration conference of the selected hosting conferencing bridge based on the received connection information and utilizing the telecommunications network. 4. The method of claim 3 further comprising:
receiving one or more collaboration conferencing instructions from the client application program, the one or more collaboration conferencing instructions configured to initiate a feature of the collaboration conference;
translating the received one or more collaboration conferencing instructions to one or more feature initiating instructions specific to the selected hosting conferencing bridge; and
transmitting the one or more collaboration conferencing instructions to the selected hosting conferencing bridge. 5. The method of claim 4 wherein the feature of the collaboration conference is at least one of adding a participant to the collaboration conference, dropping a participant from the collaboration conference, muting a participant or locking a collaboration conference. 6. The method of claim 4 wherein the collaboration conference access request includes a web-based conference component and the one or more collaboration conferencing instructions are configured to initiate a web-based conferencing feature. 7. The method of claim 6 wherein the web-based conferencing feature is a roster configured to indicate the participants to the collaboration conference on at least one user interface display. 8. The method of claim 7 wherein the web-based conferencing feature is an active speaker indicator configured to indicate an active speaker on the at least one user interface display. 9. The method of claim 7 wherein the web-based conferencing feature is a roster reconciliation configured to associate an identifier to an audio component and the web-based conference component of a participant to the collaboration conference in the at least one user interface display. 10. A system for hosting a collaboration conference in a telecommunications network, the system comprising:
a network interface configured to receive a communication from a user of a communications network to establish a collaboration conference on the network; a processing device in communication with the network interface; and a computer-readable medium connected to the processing device configured to store information and instructions that, when executed by the processing device, instantiate an application programming interface that performs the operations of:
receiving a collaboration conference access request from a client application program associated with the user;
selecting a hosting conference bridge from a plurality of conference bridges associated with the network and configured to host a collaboration conference, the selection occurring in response to the collaboration conference access request received from the client application program associated with the user;
translating the received collaboration conference access request to one or more instructions specific to the selected hosting conferencing bridge; and
transmitting the one or more instructions specific to the selected hosting conferencing bridge to the selected hosting conferencing bridge. 11. The system of claim 10 wherein the computer-readable medium is further configured to store one or more collaboration conferencing preferences of the user. 12. The system of claim 10 wherein the application programming interface further performs the operations of:
receiving connection information from the selected hosting conferencing bridge configured to connect a device associated with the user to the collaboration conference hosted by the selected hosting conferencing bridge; and
connecting the device associated with the user to the collaboration conference of the selected hosting conferencing bridge based at least on the received connection information. 13. The system of claim 12 wherein the application programming interface further performs the operations of:
receiving one or more collaboration conferencing instructions from the client application program associated with the user, the one or more collaboration conferencing instructions configured to initiate a feature of the collaboration conference;
translating the received one or more collaboration conferencing instructions to one or more feature initiating instructions specific to the selected hosting conferencing bridge; and
transmitting the one or more collaboration conferencing instructions to the selected hosting conferencing bridge. 14. The system of claim 13 wherein the collaboration conference access request includes a web-based conference component and the one or more collaboration conferencing instructions are configured to initiate a web-based conferencing feature. 15. The system of claim 14 wherein the web-based conferencing feature is a roster configured to indicate the participants to the collaboration conference on at least one user interface display. 16. The system of claim 15 wherein the web-based conferencing feature is an active speaker indicator configured to indicate an active speaker on the at least one user interface display. 17. The system of claim 15 wherein the web-based conferencing feature is a roster reconciliation configured to associate an identifier to an audio component and the web-based conference component of a participant to the collaboration conference in the at least one user interface display. 18. A networking component of a telecommunications network comprising:
a server comprising:
a processor; and
an application programming interface executed by the processor, the application programming interface configured to perform the operations of:
receiving a collaboration conference access request from a client application program associated with a client device associated with the telecommunications network, the collaboration conference access request received through a network interface unit of the server and configured to request access to a collaboration conference hosted by the telecommunications network;
translating the received collaboration conference access request to one or more instructions specific to a hosting conferencing bridge selected from a plurality of conference bridges associated with the telecommunications network, each of the plurality of conference bridges configured to host a collaboration conference, the selection occurring in response to the collaboration conference access request received from the client application program; and
transmitting the one or more instructions specific to the selected hosting conferencing bridge to the selected hosting conferencing bridge. 19. The networking component of claim 18 wherein the selected hosting conferencing bridge is a time division multiplexing telecommunication device. 20. The networking component of claim 18 wherein the selected hosting conferencing bridge is a session initiation protocol based telecommunication device. | Aspects of the present disclosure involve systems, methods, computer program products, and the like, for implementing providing a web conferencing service. In one example, the system and methods involve a real-time application programming interface (RTAPI) component in the telecommunications network. The RTAPI is configured, in one embodiment, to provide a platform through which one or more users of the telecommunications network interfaces with one or more conferencing components of the network. In one example, the RTAPI may be configured to coordinate a dial-out to a participant of a conference at a designated time such that the participant is entered into the conference automatically.1. A method for facilitating a collaboration conference in a telecommunications network, the method comprising:
receiving a collaboration conference access request from a client application program executed on a client device at an application programming interface of a server associated with the telecommunications network; selecting a hosting conference bridge from a plurality of conference bridges associated with the telecommunications network and configured to host a collaboration conference, the selection occurring in response to the collaboration conference access request; translating the received collaboration conference access request to one or more instructions specific to the selected hosting conferencing bridge; and transmitting the one or more instructions specific to the selected hosting conferencing bridge to the selected hosting conferencing bridge, wherein the one or more instructions specific to the selected hosting conferencing bridge include an indication of the collaboration conference access request from the client application program. 2. The method of claim 1 further comprising:
receiving connection information at the application programming interface of the server from the selected hosting conferencing bridge configured to connect the client device to the collaboration conference hosted by the selected hosting conferencing bridge. 3. The method of claim 2 further comprising:
connecting the requester's device to the collaboration conference of the selected hosting conferencing bridge based on the received connection information and utilizing the telecommunications network. 4. The method of claim 3 further comprising:
receiving one or more collaboration conferencing instructions from the client application program, the one or more collaboration conferencing instructions configured to initiate a feature of the collaboration conference;
translating the received one or more collaboration conferencing instructions to one or more feature initiating instructions specific to the selected hosting conferencing bridge; and
transmitting the one or more collaboration conferencing instructions to the selected hosting conferencing bridge. 5. The method of claim 4 wherein the feature of the collaboration conference is at least one of adding a participant to the collaboration conference, dropping a participant from the collaboration conference, muting a participant or locking a collaboration conference. 6. The method of claim 4 wherein the collaboration conference access request includes a web-based conference component and the one or more collaboration conferencing instructions are configured to initiate a web-based conferencing feature. 7. The method of claim 6 wherein web-based conferencing feature is a roster configured to indicate the participants to the collaboration conference on at least one user interface display. 8. The method of claim 7 wherein the web-based conferencing feature is an active speaker indicator configured to indicate an active speaker on the at least one user interface display. 9. The method of claim 7 wherein the web-based conferencing feature is a roster reconciliation configured to associate an identifier to an audio component and the web-based conference component of a participant to the collaboration conference in the at least one user interface display. 10. A system for hosting a collaboration conference in a telecommunications network, the system comprising:
a network interface configured to receive a communication from a user of a communications network to establish a collaboration conference on the network; a processing device in communication with the network interface unit; and a computer-readable medium connected to the processing device configured to store information and instructions that, when executed by the processing device, instantiates an application programming interface that performs the operations of:
receiving a collaboration conference access request from a client application program associated with the user;
selecting a hosting conference bridge from a plurality of conference bridges associated with the network and configured to host a collaboration conference, the selection occurring in response to the collaboration conference access request received from the client application program associated with the user;
translating the received collaboration conference access request to one or more instructions specific to the selected hosting conferencing bridge; and
transmitting the one or more instructions specific to the selected hosting conferencing bridge to the selected hosting conferencing bridge. 11. The system of claim 10 wherein the computer-readable medium is further configured to store one or more collaboration conferencing preferences of the user. 12. The system of claim 10 wherein the application programming interface further performs the operations of:
receiving connection information from the selected hosting conferencing bridge configured to connect a device associated with the user to the collaboration conference hosted by the selected hosting conferencing bridge; and
connecting the device associated with the user to the collaboration conference of the selected hosting conferencing bridge based at least on the received connection information. 13. The system of claim 12 wherein the application programming interface further performs the operations of:
receiving one or more collaboration conferencing instructions from the client application program associated with the user, the one or more collaboration conferencing instructions configured to initiate a feature of the collaboration conference;
translating the received one or more collaboration conferencing instructions to one or more feature initiating instructions specific to the selected hosting conferencing bridge; and
transmitting the one or more collaboration conferencing instructions to the selected hosting conferencing bridge. 14. The system of claim 13 wherein the collaboration conference access request includes a web-based conference component and the one or more collaboration conferencing instructions are configured to initiate a web-based conferencing feature. 15. The system of claim 14 wherein the web-based conferencing feature is a roster configured to indicate the participants to the collaboration conference on at least one user interface display. 16. The system of claim 15 wherein the web-based conferencing feature is an active speaker indicator configured to indicate an active speaker on the at least one user interface display. 17. The system of claim 15 wherein the web-based conferencing feature is a roster reconciliation configured to associate an identifier to an audio component and the web-based conference component of a participant to the collaboration conference in the at least one user interface display. 18. A networking component of a telecommunications network comprising:
a server comprising:
a processor; and
an application programming interface executed by the processor, the application programming interface configured to perform the operations of:
receiving a collaboration conference access request from a client application program associated with a client device associated with the telecommunications network, the collaboration conference access request received through a network interface unit of the server and configured to request access to a collaboration conference hosted by the telecommunications network;
translating the received collaboration conference access request to one or more instructions specific to a hosting conferencing bridge selected from a plurality of conference bridges associated with the telecommunications network, each of the plurality of conference bridges configured to host a collaboration conference, the selection occurring in response to the collaboration conference access request received from the client application program; and
transmitting the one or more instructions specific to the selected hosting conferencing bridge to the selected hosting conferencing bridge. 19. The networking component of claim 18 wherein the selected hosting conferencing bridge is a time division multiplexing telecommunication device. 20. The networking component of claim 18 wherein the selected hosting conferencing bridge is a session initiation protocol based telecommunication device. | 2,400 |
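The translation step recited in claims 10 and 18 above — mapping a generic collaboration conference access request to instructions specific to whichever conference bridge was selected — can be sketched as an adapter. This is an illustrative reading only: every class, method, and message name below (`AccessRequest`, `SipBridge`, `TdmBridge`, `translate`, and the instruction strings) is hypothetical and not taken from the claims.

```python
# Hedged sketch of bridge-specific instruction translation.
# All names and message formats are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    """Generic access request from the client application program."""
    user_id: str
    conference_id: str


class SipBridge:
    """Stands in for a session initiation protocol based bridge (claim 20)."""
    def translate(self, req: AccessRequest) -> list:
        # Produce instructions specific to a SIP-style bridge.
        return ["INVITE sip:%s@bridge.example" % req.conference_id,
                "FROM %s" % req.user_id]


class TdmBridge:
    """Stands in for a time division multiplexing bridge (claim 19)."""
    def translate(self, req: AccessRequest) -> list:
        # Produce instructions specific to a TDM-style bridge.
        return ["SEIZE_CHANNEL %s" % req.conference_id,
                "CALLER %s" % req.user_id]


def select_bridge(bridges, req):
    # Trivial selection policy for illustration; the claims leave the
    # selection criteria open.
    return bridges[0]


def handle_access_request(bridges, req):
    """Select a hosting bridge, then translate the request for it."""
    bridge = select_bridge(bridges, req)
    instructions = bridge.translate(req)  # bridge-specific instructions
    return bridge, instructions


bridge, instructions = handle_access_request(
    [SipBridge(), TdmBridge()], AccessRequest("alice", "conf42"))
print(instructions[0])  # INVITE sip:conf42@bridge.example
```

The point of the adapter is that the client application program speaks one request format while each bridge type receives instructions in its own protocol, matching the "translating" limitation.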
7,539 | 7,539 | 14,496,440 | 2,411 | A node identifies at least one first channel of an unlicensed frequency band that is occupied by a first node that operates according to a first radio access technology (RAT). The node transmits a signal on a second channel of the unlicensed frequency band. The signal is formed according to a second RAT used for transmission on the second channel and the signal includes information identifying the at least one first channel of the unlicensed frequency band. | 1. A method comprising:
identifying at least one first channel of an unlicensed frequency band that is occupied by a first node that operates according to a first radio access technology (RAT); and transmitting a signal on a second channel of the unlicensed frequency band, wherein the signal is formed according to a second RAT used for transmission on the second channel, and wherein the signal comprises information identifying the at least one first channel of the unlicensed frequency band. 2. The method of claim 1, wherein the first RAT is defined by Long Term Evolution (LTE) standards, and wherein the second RAT is defined by Wi-Fi standards. 3. The method of claim 1, wherein transmitting the signal comprises transmitting a beacon signal from a second node that operates according to the second RAT in response to the second node receiving the information identifying the at least one first channel from the first node. 4. The method of claim 1, wherein transmitting the signal comprises autonomously broadcasting a beacon signal from user equipment having a wireless communication link to the first node or transmitting a radio measurement report from the user equipment in response to a request signal received from a third node that operates according to the second RAT. 5. The method of claim 4, further comprising:
establishing the wireless communication link between the user equipment and the first node; generating the information identifying the at least one first channel from the first node in response to establishing the wireless communication link; and storing the information identifying the at least one first channel at the user equipment prior to receiving the request signal from the third node. 6. The method of claim 1, wherein the signal further comprises information indicating a fraction of time that the first node occupies the at least one first channel. 7. A method comprising:
providing, from a first node that operates according to a first radio access technology (RAT) to a second node that operates according to a second RAT, information identifying at least one first channel of an unlicensed frequency band that is occupied by the first node, wherein the second node transmits a signal formed according to the second RAT that includes the information identifying the at least one first channel. 8. The method of claim 7, wherein the first RAT is defined by Long Term Evolution (LTE) standards, and wherein the second RAT is defined by Wi-Fi standards. 9. The method of claim 7, further comprising:
forming a trusted relationship between the first node and the second node prior to providing the information identifying the at least one first channel to the second node. 10. The method of claim 9, wherein providing the information identifying the at least one first channel comprises providing the information identifying the at least one first channel over a wired connection between the first node and the second node. 11. The method of claim 7, wherein providing the information to the second node comprises providing the information to user equipment having a wireless communication link to the first node for configuring the user equipment to generate a radio measurement report in response to a request signal received from a third node that operates according to the second RAT. 12. The method of claim 11, further comprising:
establishing the wireless communication link between the user equipment and the first node; and providing the information identifying the at least one first channel from the first node in response to establishing the wireless communication link and prior to the user equipment receiving the request signal from the third node. 13. The method of claim 7, wherein providing the information further comprises providing information indicating a fraction of time that the first node occupies the at least one first channel. 14. A method comprising:
receiving, at a first node that operates according to a first radio access technology (RAT), a signal on a first channel of an unlicensed frequency band comprising information identifying at least one second channel of the unlicensed frequency band that is occupied by a second node that operates according to a second RAT; and performing channel selection in the unlicensed frequency band at the first node based on the information identifying the at least one second channel. 15. The method of claim 14, wherein the first RAT is defined by Wi-Fi standards, and wherein the second RAT is defined by Long Term Evolution (LTE) standards. 16. The method of claim 14, wherein the signal further comprises information indicating a fraction of time that the first node occupies the at least one second channel. 17. The method of claim 14, wherein receiving the signal comprises receiving a beacon signal from a third node that operates according to the second RAT in response to the third node receiving the information identifying the at least one second channel from the second node. 18. The method of claim 14, wherein receiving the signal comprises receiving a radio measurement report from user equipment having a wireless communication link to the second node. 19. The method of claim 18, further comprising:
transmitting a request signal to request the radio measurement report from the user equipment; and receiving the measurement report in response to transmitting the request signal. 20. The method of claim 14, wherein performing the channel selection comprises selecting at least one third channel that is different than the at least one second channel. | 2,400 |
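The channel selection of claims 14 and 20 — a node of one RAT receiving information identifying channels occupied by a node of another RAT, optionally with the fraction of time each is occupied (claim 16), and then choosing a different channel — admits a simple sketch. The function name, channel numbers, and the "least-occupied fallback" policy below are assumptions for illustration; the claims do not specify a selection algorithm.

```python
# Hedged sketch of coexistence-aware channel selection in an
# unlicensed band. Channel numbers and the selection policy are
# illustrative, not taken from the claims.
def select_channel(available, occupied):
    """Pick a channel avoiding those the other-RAT node occupies.

    available: iterable of channel numbers usable by this node.
    occupied: dict mapping channel -> fraction of time occupied,
              as carried by the received beacon or measurement report.
    """
    # Prefer a channel the other node does not occupy at all
    # (claim 20: select a channel different than the occupied ones).
    free = [ch for ch in available if ch not in occupied]
    if free:
        return free[0]
    # Otherwise fall back to the least-occupied channel, using the
    # occupancy fractions of claim 16.
    return min(available, key=lambda ch: occupied.get(ch, 0.0))


channels = [36, 40, 44]
lte_occupancy = {36: 0.8, 40: 0.3}  # from the received signal
print(select_channel(channels, lte_occupancy))  # 44
```

If every candidate channel is occupied, the occupancy fractions let the node degrade gracefully by choosing the channel with the lightest load rather than failing outright.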
7,540 | 7,540 | 15,356,342 | 2,462 | A medical monitoring and surveillance system uses a server communicating with a general-purpose personal device running an application. The application may be downloadable. The application is configured by the server. The application configures the device to perform medical tests using the sensors, preexisting capabilities, and functionality built into the device. The device may be a cellular telephone with data communication and other functionality, a personal digital organizer, a portable entertainment device, or another similar personal device. The application reports the results of the medical tests to the server or a third party device. Various trigger events and associated tasks may be incorporated in the server or in the application residing on the device. A trigger event may occur, for example, in response to the test results meeting one or more predetermined criteria. Once a trigger event occurs, a task associated with the trigger event is performed. | 1-54. (canceled) 55. A method for sharing medical data between applications on a user device, the method comprising:
maintaining a data store on the user device; executing an initialization application on the user device, the initialization application being configured to manage a plurality of configuration files corresponding to a plurality of applications on the user device, each of the configuration files comprising one or more key value pairs, each key value pair comprising identifying data matched with data that defines an operating parameter for at least one of the plurality of applications; executing a first application of the plurality of applications based on a first configuration file of the plurality of configuration files, the first configuration file including a first key value pair that defines an access level to the data store; executing a second application of the plurality of applications based on a second configuration file of the plurality of configuration files, the second application being different than the first application, the second configuration file including a second key value pair that defines an access level to the data store; storing, by the first application, medical data in the data store based on the access level defined by the first key value pair; and accessing, by the second application, the medical data in the data store based on the access level defined by the second key value pair. 56. The method of claim 55, wherein the initialization application is one of the first application and the second application. 57. The method of claim 55, wherein accessing comprises modifying, deleting, or augmenting depending on the access level defined by the second key value pair. 58. The method of claim 55, wherein at least one of the one or more key value pairs comprises data values restricted to a range or type. 59. The method of claim 55, wherein at least one of the plurality of configuration files comprises an extended markup language formatted file. 60. 
The method of claim 55, wherein the first key value pair defines a security parameter, and wherein the first application encrypts the stored medical data based on the security parameter. 61. The method of claim 55, further comprising modifying the second configuration file by the initialization application to change an access level of the second application to the data store. 62. The method of claim 55, further comprising:
receiving an input on a touch screen of the user device corresponding to a tracing of a pattern by the first application; determining a deviation of the tracing from the pattern; and triggering an event based on a comparison of the deviation to a threshold. 63. The method of claim 55, further comprising:
measuring movement of the user device using a sensor in the user device while a user holds the user device; determining a movement metric based on the measured movement; and triggering an event based on a comparison of the movement metric to a threshold. 64. The method of claim 55, further comprising:
performing a medical test for a user with the user device; determining a metric based on the medical test; and triggering an event based on a comparison of the medical test to a threshold. 65. The method of claim 64, wherein the event comprises sending information indicative of the comparison to a computing device. 66. The method of claim 64, further comprising:
performing a verification of an identity of the user, wherein triggering the event is further based on verification of the identity of the user. 67. The method of claim 64, wherein the event comprises preventing display of results of the medical test. 68. The method of claim 64, wherein the medical test comprises a vision test, and wherein performing the test comprises measuring the timing of areas touched on a touch screen related to graphics displayed on the touch screen of the user device. 69. The method of claim 64, wherein the medical test comprises a motor skill test, and wherein performing the test comprises measuring an accuracy and timing of areas touched on a touch screen related to graphics displayed on the touch screen of the user device. 70. The method of claim 55, wherein the initialization application manages the plurality of configuration files based on an identification code received on the user device. 71. The method of claim 55, wherein the operating parameter comprises a configuration for a menu of at least one of the plurality of applications. 72. A user device configured to share medical data between applications on the user device, the user device comprising:
a memory comprising a data store; and a processor configured to:
execute an initialization application on the user device, the initialization application being configured to manage a plurality of configuration files corresponding to a plurality of applications on the user device, each of the configuration files comprising one or more key value pairs, each key value pair comprising identifying data matched with data that defines an operating parameter for at least one of the plurality of applications;
execute a first application of the plurality of applications based on a first configuration file of the plurality of configuration files, the first configuration file including a first key value pair that defines an access level to the data store;
execute a second application of the plurality of applications based on a second configuration file of the plurality of configuration files, the second application being different than the first application, the second configuration file including a second key value pair that defines an access level to the data store;
store, by the first application, medical data in the data store based on the access level defined by the first key value pair; and
access, by the second application, the medical data in the data store based on the access level defined by the second key value pair. 73. The user device of claim 72, wherein the first key value pair defines a security parameter, and wherein the first application encrypts the stored medical data based on the security parameter. 74. The user device of claim 72, wherein the processor is further configured to modify the second configuration file by the initialization application to change an access level of the second application to the data store. 75. The user device of claim 72, wherein the processor is further configured to:
receive an input on a touch screen of the user device corresponding to a tracing of a pattern by the first application; determine a deviation of the tracing from the pattern; and trigger an event based on a comparison of the deviation to a threshold. 76. The user device of claim 72, wherein the processor is further configured to:
measure movement of the user device using a sensor in the user device while a user holds the user device; determine a movement metric based on the measured movement; and trigger an event based on a comparison of the movement metric to a threshold. 77. The user device of claim 72, wherein the processor is further configured to:
perform a medical test for a user with the user device; determine a metric based on the medical test; and trigger an event based on a comparison of the medical test to a threshold. 78. The user device of claim 77, wherein the event comprises sending information indicative of the comparison to a computing device. 79. The user device of claim 77, wherein the processor is further configured to:
perform a verification of an identity of the user, wherein triggering the event is further based on verification of the identity of the user. 80. The user device of claim 77, wherein the event comprises preventing display of results of the medical test. 81. The user device of claim 77, wherein the medical test comprises a vision test, and wherein performing the test comprises measuring the timing of areas touched on a touch screen related to graphics displayed on the touch screen of the user device. 82. The user device of claim 77, wherein the medical test comprises a motor skill test, and wherein performing the test comprises measuring an accuracy and timing of areas touched on a touch screen related to graphics displayed on the touch screen of the user device. 83. The user device of claim 72, wherein the initialization application manages the plurality of configuration files based on an identification code received on the user device. 84. A non-transitory computer-readable medium comprising instructions that when executed by a computing device cause the computing device to perform a method for sharing medical data between applications on a user device, the method comprising:
maintaining a data store on the user device; executing an initialization application on the user device, the initialization application being configured to manage a plurality of configuration files corresponding to a plurality of applications on the user device, each of the configuration files comprising one or more key value pairs, each key value pair comprising identifying data matched with data that defines an operating parameter for at least one of the plurality of applications; executing a first application of the plurality of applications based on a first configuration file of the plurality of configuration files, the first configuration file including a first key value pair that defines an access level to the data store; executing a second application of the plurality of applications based on a second configuration file of the plurality of configuration files, the second application being different than the first application, the second configuration file including a second key value pair that defines an access level to the data store; storing, by the first application, medical data in the data store based on the access level defined by the first key value pair; and accessing, by the second application, the medical data in the data store based on the access level defined by the second key value pair.
perform a verification of an identity of the user, wherein triggering the event is further based on verification of the identity of the user. 80. The user device of claim 77, wherein the event comprises preventing display of results of the medical test. 81. The user device of claim 77, wherein the medical test comprises a vision test, and wherein performing the test comprises measuring the timing of areas touched on a touch screen related to graphics displayed on the touch screen of the user device. 82. The user device of claim 77, wherein the medical test comprises a motor skill test, and wherein performing the test comprises measuring an accuracy and timing of areas touched on a touch screen related to graphics displayed on the touch screen of the user device. 83. The user device of claim 72, wherein the initialization application manages the plurality of configuration files based on an identification code received on the user device. 84. A non-transitory computer-readable medium comprising instructions that when executed by a computing device cause the computing device to perform a method for sharing medical data between applications on a user device, the method comprising:
maintaining a data store on the user device; executing an initialization application on the user device, the initialization application being configured to manage a plurality of configuration files corresponding to a plurality of applications on the user device, each of the configuration files comprising one or more key value pairs, each key value pair comprising identifying data matched with data that defines an operating parameter for at least one of the plurality of applications; executing a first application of the plurality of applications based on a first configuration file of the plurality of configuration files, the first configuration file including a first key value pair that defines an access level to the data store; executing a second application of the plurality of applications based on a second configuration file of the plurality of configuration files, the second application being different than the first application, the second configuration file including a second key value pair that defines an access level to the data store; storing, by the first application, medical data in the data store based on the access level defined by the first key value pair; and accessing, by the second application, the medical data in the data store based on the access level defined by the second key value pair. | 2,400 |
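The shared-data-store mechanism recited above (claims 72-84) — per-application configuration files whose key value pairs define each application's access level to a common medical-data store — can be illustrated with a short sketch. All names, access levels, and data shapes below are illustrative assumptions, not taken from the application itself.

```python
# Hypothetical sketch of the claimed configuration-file mechanism: each
# application's config holds key value pairs, one of which defines its
# access level to a shared medical-data store.
ACCESS_ORDER = {"none": 0, "read": 1, "read-write": 2}  # assumed levels

class DataStore:
    def __init__(self):
        self._records = {}

    def store(self, config, key, value):
        # the first application writes only if its key value pair grants it
        if ACCESS_ORDER[config.get("data_store_access", "none")] < 2:
            raise PermissionError("config does not grant write access")
        self._records[key] = value

    def access(self, config, key):
        # the second application reads based on its own key value pair
        if ACCESS_ORDER[config.get("data_store_access", "none")] < 1:
            raise PermissionError("config does not grant read access")
        return self._records[key]

# the initialization application manages one config file per application
configs = {
    "app_a": {"data_store_access": "read-write"},
    "app_b": {"data_store_access": "read"},
}

store = DataStore()
store.store(configs["app_a"], "vision_test", {"metric": 0.82})
shared = store.access(configs["app_b"], "vision_test")
```

Modifying a configuration file via the initialization application (claim 74) would then amount to rewriting an application's `data_store_access` value.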
7,541 | 7,541 | 14,704,521 | 2,482 | An imaging system for a motor vehicle includes a camera housing part and at least one camera module to be mounted to said camera housing part. The camera module has first rotation locking means, the camera housing part has second rotation locking means adapted to cooperate with the first rotation locking means, wherein the first and second rotation locking means cooperate to lock the camera module against rotation relative to the camera housing part. | 1. An imaging system for a motor vehicle, comprising:
a camera housing part and at least one camera module to be mounted to said camera housing part, said camera module comprises first rotation locking means, said camera housing part comprises second rotation locking means adapted to cooperate with said first rotation locking means, wherein said first and second rotation locking means cooperate to lock the camera module against rotation relative to the camera housing part. 2. The imaging system as claimed in claim 1, wherein said first and second rotation locking means are designed to be fittingly inserted into each other along a linear insertion direction. 3. The imaging system as claimed in claim 1, wherein said rotation locking means comprises bores and pins to be fittingly inserted into said bores. 4. The imaging system as claimed in claim 3, wherein said pins are hollow to allow engagement of screws for fixing said camera module to said camera housing part. 5. The imaging system as claimed in claim 4, wherein said bores are through holes in a mounting wall to allow said screws to extend through said bores. 6. The imaging system as claimed in claim 5, wherein said mounting wall comprises an opening through which light can pass on its way into the camera module. 7. The imaging system as claimed in claim 1, wherein said camera housing part comprises at least one contact surface designed to be in planar contact with a surface of the camera module when the camera module is fully inserted in said camera housing part. 8. The imaging system as claimed in claim 1, wherein each camera module comprises a plurality of first rotation locking means to cooperate with a corresponding plurality of second rotation locking means provided at the camera housing part. 9. The imaging system as claimed in claim 1, wherein said plurality of first locking elements is arranged in opposite octants defined by a geometrical axis of the camera module. 10. The imaging system as claimed in claim 1, wherein:
said camera module comprises a lens objective, a lens holder holding said lens objective, an image sensor and a back plate connected to said lens holder and holding said image sensor in or close to an image plane of the lens objective, and wherein said first rotation locking means is provided at said lens holder. 11. The camera module as claimed in claim 10, wherein the lens holder is made of metal. 12. The imaging system as claimed in claim 10, wherein the alignment of the lens holder relative to the back plate is fixed by a glue joint between the lens holder and the back plate. 13. A method of mounting an imaging system for a motor vehicle camera module having a camera housing part and at least one camera module to be mounted to said camera housing part, the method comprising the steps of:
inserting said camera module into a reception of said camera housing part, bringing first rotation locking means provided at said camera module into cooperation with second rotation locking means provided at said camera housing part, and fixing said camera module to said camera housing part. 14. The method of claim 13, wherein said first and second rotation locking means are designed to be fittingly inserted into each other along a linear insertion direction. 15. The method of claim 13, wherein said rotation locking means comprises bores and pins to be fittingly inserted into said bores. 16. The method of claim 15, wherein said pins are hollow to allow engagement of screws for fixing said camera module to said camera housing part. 17. The method of claim 16, wherein said bores are through holes in a mounting wall to allow said screws to extend through said bores. 18. The method of claim 17, wherein said mounting wall comprises an opening through which light can pass on its way into the camera module. | An imaging system for a motor vehicle includes a camera housing part and at least one camera module to be mounted to said camera housing part. The camera module has first rotation locking means, the camera housing part has second rotation locking means adapted to cooperate with the first rotation locking means, wherein the first and second rotation locking means cooperate to lock the camera module against rotation relative to the camera housing part.1. An imaging system for a motor vehicle, comprising:
a camera housing part and at least one camera module to be mounted to said camera housing part, said camera module comprises first rotation locking means, said camera housing part comprises second rotation locking means adapted to cooperate with said first rotation locking means, wherein said first and second rotation locking means cooperate to lock the camera module against rotation relative to the camera housing part. 2. The imaging system as claimed in claim 1, wherein said first and second rotation locking means are designed to be fittingly inserted into each other along a linear insertion direction. 3. The imaging system as claimed in claim 1, wherein said rotation locking means comprises bores and pins to be fittingly inserted into said bores. 4. The imaging system as claimed in claim 3, wherein said pins are hollow to allow engagement of screws for fixing said camera module to said camera housing part. 5. The imaging system as claimed in claim 4, wherein said bores are through holes in a mounting wall to allow said screws to extend through said bores. 6. The imaging system as claimed in claim 5, wherein said mounting wall comprises an opening through which light can pass on its way into the camera module. 7. The imaging system as claimed in claim 1, wherein said camera housing part comprises at least one contact surface designed to be in planar contact with a surface of the camera module when the camera module is fully inserted in said camera housing part. 8. The imaging system as claimed in claim 1, wherein each camera module comprises a plurality of first rotation locking means to cooperate with a corresponding plurality of second rotation locking means provided at the camera housing part. 9. The imaging system as claimed in claim 1, wherein said plurality of first locking elements is arranged in opposite octants defined by a geometrical axis of the camera module. 10. The imaging system as claimed in claim 1, wherein:
said camera module comprises a lens objective, a lens holder holding said lens objective, an image sensor and a back plate connected to said lens holder and holding said image sensor in or close to an image plane of the lens objective, and wherein said first rotation locking means is provided at said lens holder. 11. The camera module as claimed in claim 10, wherein the lens holder is made of metal. 12. The imaging system as claimed in claim 10, wherein the alignment of the lens holder relative to the back plate is fixed by a glue joint between the lens holder and the back plate. 13. A method of mounting an imaging system for a motor vehicle camera module having a camera housing part and at least one camera module to be mounted to said camera housing part, the method comprising the steps of:
inserting said camera module into a reception of said camera housing part, bringing first rotation locking means provided at said camera module into cooperation with second rotation locking means provided at said camera housing part, and fixing said camera module to said camera housing part. 14. The method of claim 13, wherein said first and second rotation locking means are designed to be fittingly inserted into each other along a linear insertion direction. 15. The method of claim 13, wherein said rotation locking means comprises bores and pins to be fittingly inserted into said bores. 16. The method of claim 15, wherein said pins are hollow to allow engagement of screws for fixing said camera module to said camera housing part. 17. The method of claim 16, wherein said bores are through holes in a mounting wall to allow said screws to extend through said bores. 18. The method of claim 17, wherein said mounting wall comprises an opening through which light can pass on its way into the camera module. | 2,400
7,542 | 7,542 | 13,933,194 | 2,485 | A system and method to identify the leader of a group in a retail, restaurant, or queue-type setting (or virtually any setting) through recognition of payment gestures. The method comprises acquiring initial video of a group, developing feature models for members of the group, acquiring video at a payment location, identifying a payment gesture in the acquired video, defining the person making the gesture as the leader of the group, and forwarding/backtracking through the video to identify timings associated with leader events (e.g., entering, exiting, ordering, etc.). | 1. A method of monitoring a customer space comprising:
obtaining visual data comprising image frames of the customer space over a period of time; generating feature models for members of at least one group within the customer space; identifying a payment gesture in at least one image frame; associating the payment gesture with a member of the at least one group based at least in part on the feature models; and designating a leader of the group as the member associated with the payment gesture. 2. The method of claim 1 further comprising, after designating the leader, analyzing the visual data to determine the timing of at least one event involving the leader. 3. The method of claim 2, wherein the at least one event includes one or more of the leader entering the customer space, the leader exiting the customer space, or the leader placing an order. 4. The method of claim 1 further comprising, after designating the leader, analyzing the visual data before or after the payment gesture to identify at least one characteristic of the leader's experience within the retail space. 5. The method of claim 4, wherein the at least one characteristic includes position within the group, location within the retail space, or action taken by the leader. 6. The method of claim 1, wherein the generating feature models includes using a face detection algorithm. 7. The method of claim 1, wherein the obtaining visual data includes obtaining overhead visual data comprising image frames of a payment station, and using said overhead visual data to identify the payment gesture. 8. The method of claim 7, wherein the associating the payment gesture with a member of the at least one group based at least in part on the feature models includes determining the member making the payment gesture based on location information associated with the visual data. 9. The method of claim 1, wherein the obtaining visual data includes recording images with a camera. 10. 
A non-transitory computer-readable medium having stored thereon computer-executable instructions for monitoring a customer space, the instructions being executable by a processor and comprising:
obtaining visual data comprising image frames of the customer space over a period of time; generating feature models for members of at least one group within the customer space; identifying a payment gesture in at least one image frame; associating the payment gesture with a member of the at least one group based at least in part on the feature models; and designating a leader of the group as the member associated with the payment gesture. 11. The non-transitory computer-readable medium as set forth in claim 10, wherein the instructions further comprise, after designating the leader, analyzing the visual data to determine the timing of at least one event involving the leader. 12. The non-transitory computer-readable medium as set forth in claim 10, wherein the at least one event includes one or more of the leader entering the customer space, the leader exiting the customer space, or the leader placing an order. 13. The non-transitory computer-readable medium as set forth in claim 10, wherein the instructions further comprise, after designating the leader, analyzing the visual data before or after the payment gesture to identify at least one characteristic of the leader's experience within the retail space. 14. The non-transitory computer-readable medium as set forth in claim 13, wherein the at least one characteristic includes position within the group, location within the retail space, or action taken by the leader. 15. A system for monitoring a customer space comprising:
at least one optical sensor for obtaining visual data corresponding to the customer space; and a central processing unit including a processor and a non-transitory computer-readable medium having stored thereon computer-executable instructions for monitoring a customer space executable by the processor, the instructions comprising: receiving visual data of the customer space over a period of time from the optical sensor; generating feature models for members of at least one group within the customer space; identifying a payment gesture in at least one image frame; associating the payment gesture with a member of the at least one group based at least in part on the feature models; and designating a leader of the group as the member associated with the payment gesture. 16. The system of claim 15, further comprising a plurality of optical sensors including at least one overhead sensor associated with a payment location, said overhead sensor adapted to obtain visual data relating to a payment gesture, and at least one oblique sensor adapted to obtain visual data for generating the feature models. 17. The system of claim 15, wherein the instructions further comprise, after designating the leader, analyzing the visual data to determine the timing of at least one event involving the leader. 18. The system of claim 17, wherein the at least one event includes one or more of the leader entering the customer space, the leader exiting the customer space, or the leader placing an order. 19. The system of claim 15, wherein the instructions further comprise, after designating the leader, analyzing the visual data before and after the payment gesture to identify at least one characteristic of the leader's experience within the retail space. 20. The system of claim 19, wherein the at least one characteristic includes position within the group, location within the retail space, or action taken by the leader.
| A system and method to identify the leader of a group in a retail, restaurant, or queue-type setting (or virtually any setting) through recognition of payment gestures. The method comprises acquiring initial video of a group, developing feature models for members of the group, acquiring video at a payment location, identifying a payment gesture in the acquired video, defining the person making the gesture as the leader of the group, and forwarding/backtracking through the video to identify timings associated with leader events (e.g., entering, exiting, ordering, etc.).1. A method of monitoring a customer space comprising:
obtaining visual data comprising image frames of the customer space over a period of time; generating feature models for members of at least one group within the customer space; identifying a payment gesture in at least one image frame; associating the payment gesture with a member of the at least one group based at least in part on the feature models; and designating a leader of the group as the member associated with the payment gesture. 2. The method of claim 1 further comprising, after designating the leader, analyzing the visual data to determine the timing of at least one event involving the leader. 3. The method of claim 2, wherein the at least one event includes one or more of the leader entering the customer space, the leader exiting the customer space, or the leader placing an order. 4. The method of claim 1 further comprising, after designating the leader, analyzing the visual data before or after the payment gesture to identify at least one characteristic of the leader's experience within the retail space. 5. The method of claim 4, wherein the at least one characteristic includes position within the group, location within the retail space, or action taken by the leader. 6. The method of claim 1, wherein the generating feature models includes using a face detection algorithm. 7. The method of claim 1, wherein the obtaining visual data includes obtaining overhead visual data comprising image frames of a payment station, and using said overhead visual data to identify the payment gesture. 8. The method of claim 7, wherein the associating the payment gesture with a member of the at least one group based at least in part on the feature models includes determining the member making the payment gesture based on location information associated with the visual data. 9. The method of claim 1, wherein the obtaining visual data includes recording images with a camera. 10. 
A non-transitory computer-readable medium having stored thereon computer-executable instructions for monitoring a customer space, the instructions being executable by a processor and comprising:
obtaining visual data comprising image frames of the customer space over a period of time; generating feature models for members of at least one group within the customer space; identifying a payment gesture in at least one image frame; associating the payment gesture with a member of the at least one group based at least in part on the feature models; and designating a leader of the group as the member associated with the payment gesture. 11. The non-transitory computer-readable medium as set forth in claim 10, wherein the instructions further comprise, after designating the leader, analyzing the visual data to determine the timing of at least one event involving the leader. 12. The non-transitory computer-readable medium as set forth in claim 10, wherein the at least one event includes one or more of the leader entering the customer space, the leader exiting the customer space, or the leader placing an order. 13. The non-transitory computer-readable medium as set forth in claim 10, wherein the instructions further comprise, after designating the leader, analyzing the visual data before or after the payment gesture to identify at least one characteristic of the leader's experience within the retail space. 14. The non-transitory computer-readable medium as set forth in claim 13, wherein the at least one characteristic includes position within the group, location within the retail space, or action taken by the leader. 15. A system for monitoring a customer space comprising:
at least one optical sensor for obtaining visual data corresponding to the customer space; and a central processing unit including a processor and a non-transitory computer-readable medium having stored thereon computer-executable instructions for monitoring a customer space executable by the processor, the instructions comprising: receiving visual data of the customer space over a period of time from the optical sensor; generating feature models for members of at least one group within the customer space; identifying a payment gesture in at least one image frame; associating the payment gesture with a member of the at least one group based at least in part on the feature models; and designating a leader of the group as the member associated with the payment gesture. 16. The system of claim 15, further comprising a plurality of optical sensors including at least one overhead sensor associated with a payment location, said overhead sensor adapted to obtain visual data relating to a payment gesture, and at least one oblique sensor adapted to obtain visual data for generating the feature models. 17. The system of claim 15, wherein the instructions further comprise, after designating the leader, analyzing the visual data to determine the timing of at least one event involving the leader. 18. The system of claim 17, wherein the at least one event includes one or more of the leader entering the customer space, the leader exiting the customer space, or the leader placing an order. 19. The system of claim 15, wherein the instructions further comprise, after designating the leader, analyzing the visual data before and after the payment gesture to identify at least one characteristic of the leader's experience within the retail space. 20. The system of claim 19, wherein the at least one characteristic includes position within the group, location within the retail space, or action taken by the leader. | 2,400
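Claims 1, 10, and 15 of this record all recite the same pipeline: build feature models for group members, detect a payment gesture, designate the gesturing member as the group leader, then scan the video to time events involving the leader. A hypothetical sketch of that flow follows; the per-frame detection format (member id, optional event, optional gesture flag) is an assumption for illustration only.

```python
# Hypothetical sketch of the claimed leader-designation flow over a
# sequence of per-frame detections.
def designate_leader(frames):
    # the leader is the member associated with the payment gesture
    for frame in frames:
        if frame.get("payment_gesture"):
            return frame["member"]
    return None

def leader_event_times(frames, leader):
    # forward/backtrack through the video for events involving the leader
    return {f["event"]: f["t"] for f in frames
            if f.get("member") == leader and "event" in f}

frames = [
    {"t": 0, "member": "A", "event": "enter"},
    {"t": 1, "member": "B", "event": "enter"},
    {"t": 5, "member": "A", "event": "order"},
    {"t": 9, "member": "A", "payment_gesture": True},
    {"t": 12, "member": "A", "event": "exit"},
]
leader = designate_leader(frames)
times = leader_event_times(frames, leader)
```

In this toy trace the gesture at t=9 designates member "A" as leader, and the scan recovers the enter/order/exit timings for that member.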
7,543 | 7,543 | 14,711,833 | 2,482 | An electronic device includes a housing, a user interface, and one or more processors operable with the user interface. The user interface includes a fingerprint sensor proximately located with at least one proximity sensor component. The proximity sensor component can include an infrared signal receiver to receive an infrared emission from an object external to the housing. The proximity detector component is to actuate the fingerprint sensor when the infrared signal receiver receives the infrared emission from the object. | 1. An electronic device, comprising:
a housing; a user interface; and one or more processors operable with the user interface; the user interface comprising a fingerprint sensor proximately located with at least one proximity sensor component comprising an infrared signal receiver to receive an infrared emission from an object external to the housing; the at least one proximity sensor component to actuate the fingerprint sensor when the infrared signal receiver receives the infrared emission from the object. 2. The electronic device of claim 1, the fingerprint sensor collocated with the at least one proximity sensor component. 3. The electronic device of claim 1, the fingerprint sensor adjacent to the at least one proximity sensor component. 4. The electronic device of claim 3, the fingerprint sensor immediately adjacent to the at least one proximity sensor component. 5. The electronic device of claim 1, the at least one proximity sensor component comprising a single proximity sensor component. 6. The electronic device of claim 1, the at least one proximity sensor component disposed about a perimeter of the fingerprint sensor. 7. The electronic device of claim 1, the at least one proximity sensor component to actuate the fingerprint sensor by transitioning the fingerprint sensor from a low-power or sleep mode to an active mode of operation. 8. The electronic device of claim 1, wherein:
the at least one proximity sensor component to actuate the fingerprint sensor when a finger is within a predetermined distance of the fingerprint sensor; the fingerprint sensor to capture and store fingerprint data from the finger; and the one or more processors to compare the fingerprint data to reference data and determine whether the fingerprint data substantially matches the reference data. 9. An electronic device, comprising:
a housing; one or more processors operable; a fingerprint sensor; and an infrared signal receiver, proximately located with the fingerprint sensor, the infrared signal receiver to receive an infrared emission from an object external to the housing; the one or more processors operable to, when the infrared signal receiver receives the infrared emission from the object, transition the fingerprint sensor from a low-power or sleep mode to an active mode. 10. The electronic device of claim 9, the object comprising a finger, the fingerprint sensor to capture and store fingerprint data from the finger when in the active mode. 11. The electronic device of claim 9, further comprising a timer, one of the infrared signal receiver or the one or more processors to initiate the timer when the infrared signal receiver receives the infrared emission, and, where the fingerprint sensor fails to capture and store fingerprint data prior to expiration of the timer, transition the fingerprint sensor from the active mode to the low-power or sleep mode. 12. The electronic device of claim 9, the fingerprint sensor comprising a push button. 13. The electronic device of claim 9, the one or more processors to operate the fingerprint sensor in the low-power or sleep mode until the infrared signal receiver receives the infrared emission from the object. 14. A method in an electronic device, the method comprising:
determining, with at least one proximity sensor component proximately located with a fingerprint sensor and comprising an infrared signal receiver to receive an infrared emission from an object external to a housing, a proximity of the object to the fingerprint sensor; and in response to detecting the proximity of the object, transitioning the fingerprint sensor from a low-power or sleep mode to an active mode of operation. 15. The method of claim 14, the proximity less than a predetermined distance. 16. The method of claim 15, the predetermined distance less than three inches. 17. The method of claim 15, further comprising initiating a timer when the object is less than the predetermined distance from the fingerprint sensor, and returning the fingerprint sensor to the low-power or sleep mode when the fingerprint sensor fails to capture fingerprint data prior to expiration of the timer. 18. The method of claim 14, further comprising receiving, with the fingerprint sensor, fingerprint data and attempting to authenticate the fingerprint data. 19. The method of claim 18, further comprising returning the fingerprint sensor to the low-power or sleep mode upon failing to authenticate the fingerprint data. 20. The method of claim 14, further comprising operating the at least one proximity sensor component in the active mode of operation while the fingerprint sensor is in the low-power or sleep mode. | An electronic device includes a housing, a user interface, and one or more processors operable with the user interface. The user interface includes a fingerprint sensor proximately located with at least one proximity sensor component. The proximity sensor component can include an infrared signal receiver to receive an infrared emission from an object external to the housing. The proximity detector component is to actuate the fingerprint sensor when the infrared signal receiver receives the infrared emission from the object.1. An electronic device, comprising:
a housing; a user interface; and one or more processors operable with the user interface; the user interface comprising a fingerprint sensor proximately located with at least one proximity sensor component comprising an infrared signal receiver to receive an infrared emission from an object external to the housing; the at least one proximity sensor component to actuate the fingerprint sensor when the infrared signal receiver receives the infrared emission from the object. 2. The electronic device of claim 1, the fingerprint sensor collocated with the at least one proximity sensor component. 3. The electronic device of claim 1, the fingerprint sensor adjacent to the at least one proximity sensor component. 4. The electronic device of claim 3, the fingerprint sensor immediately adjacent to the at least one proximity sensor component. 5. The electronic device of claim 1, the at least one proximity sensor component comprising a single proximity sensor component. 6. The electronic device of claim 1, the at least one proximity sensor component disposed about a perimeter of the fingerprint sensor. 7. The electronic device of claim 1, the at least one proximity sensor component to actuate the fingerprint sensor by transitioning the fingerprint sensor from a low-power or sleep mode to an active mode of operation. 8. The electronic device of claim 1, wherein:
the at least one proximity sensor component to actuate the fingerprint sensor when a finger is within a predetermined distance of the fingerprint sensor; the fingerprint sensor to capture and store fingerprint data from the finger; and the one or more processors to compare the fingerprint data to reference data and determine whether the fingerprint data substantially matches the reference data. 9. An electronic device, comprising:
a housing; one or more processors operable; a fingerprint sensor; and an infrared signal receiver, proximately located with the fingerprint sensor, the infrared signal receiver to receive an infrared emission from an object external to the housing; the one or more processors operable to, when the infrared signal receiver receives the infrared emission from the object, transition the fingerprint sensor from a low-power or sleep mode to an active mode. 10. The electronic device of claim 9, the object comprising a finger, the fingerprint sensor to capture and store fingerprint data from the finger when in the active mode. 11. The electronic device of claim 9, further comprising a timer, one of the infrared signal receiver or the one or more processors to initiate the timer when the infrared signal receiver receives the infrared emission, and, where the fingerprint sensor fails to capture and store fingerprint data prior to expiration of the timer, transition the fingerprint sensor from the active mode to the low-power or sleep mode. 12. The electronic device of claim 9, the fingerprint sensor comprising a push button. 13. The electronic device of claim 9, the one or more processors to operate the fingerprint sensor in the low-power or sleep mode until the infrared signal receiver receives the infrared emission from the object. 14. A method in an electronic device, the method comprising:
determining, with at least one proximity sensor component proximately located with a fingerprint sensor and comprising an infrared signal receiver to receive an infrared emission from an object external to a housing, a proximity of the object to the fingerprint sensor; and in response to detecting the proximity of the object, transition the fingerprint sensor from a low-power or sleep mode to an active mode of operation. 15. The method of claim 14, the proximity less than a predetermined distance. 16. The method of claim 15, the predetermined distance less than three inches. 17. The method of claim 15, further comprising initiating a timer when the object is less than the predetermined distance from the fingerprint sensor, and returning the fingerprint sensor to the low-power or sleep mode when the fingerprint sensor fails to capture fingerprint data prior to expiration of the timer. 18. The method of claim 14, further comprising receiving, with the fingerprint sensor, fingerprint data and attempting to authenticate the fingerprint data. 19. The method of claim 18, further comprising returning the fingerprint sensor to the low-power or sleep mode upon failing to authenticate the fingerprint data. 20. The method of claim 14, further comprising operating the at least one proximity sensor component in the active mode of operation while the fingerprint sensor is in the low-power or sleep mode. | 2,400 |
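The wake/sleep behavior recited in the claims above (IR-proximity wake in claims 14–15, a sub-three-inch threshold in claim 16, a capture timer in claims 11 and 17, and return-to-sleep on failed authentication in claim 19) can be sketched as a small state machine. This is a minimal illustrative model, not the patented implementation; the class, method, and parameter names (`FingerprintSensor`, `wake_distance_in`, `capture_timeout_s`) are invented for this sketch.

```python
SLEEP, ACTIVE = "low-power/sleep", "active"

class FingerprintSensor:
    """Illustrative model: the sensor idles in a low-power mode, and an
    IR proximity event transitions it to the active mode (claim 7)."""

    def __init__(self, wake_distance_in=3.0, capture_timeout_s=5.0):
        self.mode = SLEEP
        self.wake_distance_in = wake_distance_in    # claim 16: less than three inches
        self.capture_timeout_s = capture_timeout_s  # claim 17: capture timer
        self._deadline = None

    def on_proximity(self, distance_in, now):
        # Claims 14-15: wake when the object is closer than the
        # predetermined distance; claim 17: initiate the timer.
        if self.mode == SLEEP and distance_in < self.wake_distance_in:
            self.mode = ACTIVE
            self._deadline = now + self.capture_timeout_s

    def tick(self, now, captured=False):
        # Claim 17: return to sleep when no fingerprint data is
        # captured before the timer expires.
        if self.mode == ACTIVE and not captured and now >= self._deadline:
            self.mode = SLEEP

    def on_auth_result(self, authenticated):
        # Claim 19: return to sleep upon failing to authenticate.
        if self.mode == ACTIVE and not authenticated:
            self.mode = SLEEP
```

For example, `on_proximity(2.0, now=0.0)` wakes the sensor, and `tick(now=6.0)` with no capture returns it to the low-power mode once the 5-second window has lapsed.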
7,544 | 7,544 | 15,657,163 | 2,456 | A system and method in a building or vehicle for an actuator operation in response to a sensor according to a control logic, the system comprising a router or a gateway communicating with a device associated with the sensor and a device associated with the actuator over in-building or in-vehicle networks, and an external Internet-connected control server associated with the control logic implementing a PID closed linear control loop and communicating with the router over external network for controlling the in-building or in-vehicle phenomenon. The sensor may be a microphone or a camera, and the system may include voice or image processing as part of the control logic. A redundancy is used by using multiple sensors or actuators, or by using multiple data paths over the building or vehicle internal or external communication. The networks may be wired or wireless, and may be BAN, PAN, LAN, WAN, or home networks. | 1. A method for operating an actuator in a controlled device in response to captured human voice data, for use with a client device in a building communicating over a wireless network and an Internet-connected server device external to the building, the method comprising:
capturing, by a microphone in the client device, the human voice data; sending to the server, by the client device via the wireless network, the captured human voice data; receiving, by the server over the Internet, the captured human voice data; processing, by the server, the captured human voice data; responsive to the processing, sending a message, by the server to the client over the Internet; receiving, by the controlled device via the wireless network, the message; and operating the actuator in the controlled device in response to the received message. 2. The method according to claim 1, wherein the controlled device is part of, integrated with, or the same as, the client device. 3. The method according to claim 1, wherein the processing comprises performing a voice recognition algorithm for identifying the voice of a specific person. 4. The method according to claim 1, wherein the client device further comprises a sensor that outputs sensor data that responds to a physical phenomenon, and wherein the method further comprising sending to the server, by the client device via the wireless network, the sensor data, and wherein the message is sent by the server in response to the sensor data. 5. The method according to claim 4, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light, the invisible light is infrared, ultraviolet, X-rays, or gamma rays, wherein the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell, or wherein the photoelectric sensor is based on Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element. 6. 
The method according to claim 1, wherein the actuator is directly or indirectly affecting, changing, producing, or creating a physical phenomenon. 7. The method according to claim 6, wherein the physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current. 8. The method according to claim 1, wherein the client device comprises multiple microphones, and wherein the capturing comprises capturing, by the multiple microphones in the client device, the human voice data. 9. The method according to claim 8, wherein the multiple microphones are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of a phenomenon impinging the microphones array. 10. The method according to claim 1, wherein the microphone is an omnidirectional, unidirectional, or bidirectional microphone that is based on the sensing an incident sound based motion of a diaphragm or a ribbon, or wherein the microphone consists of, or comprises, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone. 11. The method according to claim 1, wherein the client device or the controlled device are addressable in the wireless network or the Internet using distinct locally administered addresses or a universally administered digital addresses stored in a volatile or non-volatile memory of the respective device and uniquely identifying the respective device in the network. 12. The method according to claim 11, wherein the digital address is a MAC layer address that is MAC-48, EUI-48, or EUI-64 address type or wherein the digital address is a layer 3 address and is static or dynamic IP address that is IPv4 or IPv6 type address. 13. 
The method according to claim 1, wherein the wireless network is a Wireless Personal Area Network (WPAN), that is according to, or based on, Bluetooth™ or IEEE 802.15.1-2005 standards, or wherein the WPAN is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards. 14. The method according to claim 1, wherein the wireless network is a Wireless LAN (WLAN) that is according to, or based on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac. 15. The method according to claim 1, wherein the wireless network uses a wireless communication over a licensed or an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band. 16. The method according to claim 1, wherein the wireless network is a cellular telephone network, that is a Third Generation (3G) network that uses UMTS W-CDMA, UMTS HSPA, UMTS TDD, CDMA2000 1×RTT, CDMA2000 EV-DO, or GSM EDGE-Evolution, or wherein the cellular telephone network is a Fourth Generation (4G) network that uses HSPA+, Mobile WiMAX, LTE, LTE-Advanced, MBWA, or is based on IEEE 802.20-2008. 17. The method according to claim 1, wherein the client device or the controlled device is integrated in, is part of, or is entirely included in, an appliance. 18. The method according to claim 17, wherein the primary functionality of the appliance is associated with food storage, handling, or preparation. 19. The method according to claim 18, wherein the primary function of the appliance is heating food, and wherein the appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker. 20. The method according to claim 18, wherein the appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker. 21. 
The method according to claim 17, wherein the primary function of the appliance is associated with environmental control, and the appliance consists of, or is part of, an HVAC system. 22. The method according to claim 21, wherein the primary function of the appliance is associated with temperature control, and wherein the appliance is an air conditioner or a heater. 23. The method according to claim 17, wherein the primary function of the appliance is associated with cleaning, wherein the appliance primary function is associated with clothes cleaning and the appliance is a washing machine or a clothes dryer, or wherein the appliance is a vacuum cleaner. 24. The method according to claim 17, wherein the appliance is an answering machine, a telephone set, a home cinema system, a HiFi system, a CD or DVD player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier. 25. The method according to claim 1, wherein the actuator is an electric light source for converting electrical energy into light that emits visible or non-visible light for illumination or indication, and the non-visible light is infrared, ultraviolet, X-rays, or gamma rays. 26. The method according to claim 25, wherein the electric light source consists of, or comprises, a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode. 27. The method according to claim 1, wherein the actuator is a motion actuator that causes linear or rotary motion. 28. The method according to claim 1, wherein the actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves. 29. 
The method according to claim 28, wherein the sounder comprises an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker, or wherein the sounder comprises an electric bell, a buzzer (or beeper), a chime, a whistle or a ringer. 30. The method according to claim 28, wherein the operating of the actuator comprises playing digital audio content that is pre-recorded or synthesized, or wherein the operating of the actuator comprises simulating the voice of a human being or generating music, or wherein the operating of the actuator comprises sounding a syllable, a word, a phrase, a sentence, a short story, or a long story, using male or female voice. | A system and method in a building or vehicle for an actuator operation in response to a sensor according to a control logic, the system comprising a router or a gateway communicating with a device associated with the sensor and a device associated with the actuator over in-building or in-vehicle networks, and an external Internet-connected control server associated with the control logic implementing a PID closed linear control loop and communicating with the router over external network for controlling the in-building or in-vehicle phenomenon. The sensor may be a microphone or a camera, and the system may include voice or image processing as part of the control logic. A redundancy is used by using multiple sensors or actuators, or by using multiple data paths over the building or vehicle internal or external communication. The networks may be wired or wireless, and may be BAN, PAN, LAN, WAN, or home networks.1. A method for operating an actuator in a controlled device in response to captured human voice data, for use with a client device in a building communicating over a wireless network and an Internet-connected server device external to the building, the method comprising:
capturing, by a microphone in the client device, the human voice data; sending to the server, by the client device via the wireless network, the captured human voice data; receiving, by the server over the Internet, the captured human voice data; processing, by the server, the captured human voice data; responsive to the processing, sending a message, by the server to the client over the Internet; receiving, by the controlled device via the wireless network, the message; and operating the actuator in the controlled device in response to the received message. 2. The method according to claim 1, wherein the controlled device is part of, integrated with, or the same as, the client device. 3. The method according to claim 1, wherein the processing comprises performing a voice recognition algorithm for identifying the voice of a specific person. 4. The method according to claim 1, wherein the client device further comprises a sensor that outputs sensor data that responds to a physical phenomenon, and wherein the method further comprising sending to the server, by the client device via the wireless network, the sensor data, and wherein the message is sent by the server in response to the sensor data. 5. The method according to claim 4, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light, the invisible light is infrared, ultraviolet, X-rays, or gamma rays, wherein the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell, or wherein the photoelectric sensor is based on Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element. 6. 
The method according to claim 1, wherein the actuator is directly or indirectly affecting, changing, producing, or creating a physical phenomenon. 7. The method according to claim 6, wherein the physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current. 8. The method according to claim 1, wherein the client device comprises multiple microphones, and wherein the capturing comprises capturing, by the multiple microphones in the client device, the human voice data. 9. The method according to claim 8, wherein the multiple microphones are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of a phenomenon impinging the microphones array. 10. The method according to claim 1, wherein the microphone is an omnidirectional, unidirectional, or bidirectional microphone that is based on the sensing an incident sound based motion of a diaphragm or a ribbon, or wherein the microphone consists of, or comprises, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone. 11. The method according to claim 1, wherein the client device or the controlled device are addressable in the wireless network or the Internet using distinct locally administered addresses or a universally administered digital addresses stored in a volatile or non-volatile memory of the respective device and uniquely identifying the respective device in the network. 12. The method according to claim 11, wherein the digital address is a MAC layer address that is MAC-48, EUI-48, or EUI-64 address type or wherein the digital address is a layer 3 address and is static or dynamic IP address that is IPv4 or IPv6 type address. 13. 
The method according to claim 1, wherein the wireless network is a Wireless Personal Area Network (WPAN), that is according to, or based on, Bluetooth™ or IEEE 802.15.1-2005 standards, or wherein the WPAN is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards. 14. The method according to claim 1, wherein the wireless network is a Wireless LAN (WLAN) that is according to, or based on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac. 15. The method according to claim 1, wherein the wireless network uses a wireless communication over a licensed or an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band. 16. The method according to claim 1, wherein the wireless network is a cellular telephone network, that is a Third Generation (3G) network that uses UMTS W-CDMA, UMTS HSPA, UMTS TDD, CDMA2000 1×RTT, CDMA2000 EV-DO, or GSM EDGE-Evolution, or wherein the cellular telephone network is a Fourth Generation (4G) network that uses HSPA+, Mobile WiMAX, LTE, LTE-Advanced, MBWA, or is based on IEEE 802.20-2008. 17. The method according to claim 1, wherein the client device or the controlled device is integrated in, is part of, or is entirely included in, an appliance. 18. The method according to claim 17, wherein the primary functionality of the appliance is associated with food storage, handling, or preparation. 19. The method according to claim 18, wherein the primary function of the appliance is heating food, and wherein the appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker. 20. The method according to claim 18, wherein the appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker. 21. 
The method according to claim 17, wherein the primary function of the appliance is associated with environmental control, and the appliance consists of, or is part of, an HVAC system. 22. The method according to claim 21, wherein the primary function of the appliance is associated with temperature control, and wherein the appliance is an air conditioner or a heater. 23. The method according to claim 17, wherein the primary function of the appliance is associated with cleaning, wherein the appliance primary function is associated with clothes cleaning and the appliance is a washing machine or a clothes dryer, or wherein the appliance is a vacuum cleaner. 24. The method according to claim 17, wherein the appliance is an answering machine, a telephone set, a home cinema system, a HiFi system, a CD or DVD player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier. 25. The method according to claim 1, wherein the actuator is an electric light source for converting electrical energy into light that emits visible or non-visible light for illumination or indication, and the non-visible light is infrared, ultraviolet, X-rays, or gamma rays. 26. The method according to claim 25, wherein the electric light source consists of, or comprises, a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode. 27. The method according to claim 1, wherein the actuator is a motion actuator that causes linear or rotary motion. 28. The method according to claim 1, wherein the actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves. 29. 
The method according to claim 28, wherein the sounder comprises an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker, or wherein the sounder comprises an electric bell, a buzzer (or beeper), a chime, a whistle or a ringer. 30. The method according to claim 28, wherein the operating of the actuator comprises playing digital audio content that is pre-recorded or synthesized, or wherein the operating of the actuator comprises simulating the voice of a human being or generating music, or wherein the operating of the actuator comprises sounding a syllable, a word, a phrase, a sentence, a short story, or a long story, using male or female voice. | 2,400 |
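Independent claim 1 of the row above recites a capture/send/process/respond/operate round trip between a client device, an Internet-connected server, and a controlled device. The following is a minimal Python sketch of that data flow only, with networking omitted; the command table, function names, and the lamp device are invented for illustration and are not part of the claimed system.

```python
def process_voice(voice_data):
    """Server side (claim 1): process the captured human voice data and,
    responsive to the processing, produce a message for the controlled
    device. A keyword lookup stands in for real voice recognition."""
    commands = {
        "lights on":  {"actuator": "lamp", "action": "on"},
        "lights off": {"actuator": "lamp", "action": "off"},
    }
    return commands.get(voice_data.strip().lower())  # None if unrecognized

def operate_actuator(device_state, message):
    """Controlled-device side (claim 1): operate the actuator in
    response to the received message."""
    if message is not None:
        device_state[message["actuator"]] = message["action"]
    return device_state

# Client captures "Lights on" via its microphone and sends it over the
# wireless network to the server (transport omitted); the server's
# message then drives the controlled device's actuator.
device = {"lamp": "off"}
message = process_voice("Lights on")
device = operate_actuator(device, message)  # device["lamp"] == "on"
```

Per claim 2, the controlled device may be the same unit as the client device; nothing in the sketch above depends on their being distinct.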
7,545 | 7,545 | 13,857,714 | 2,432 | Methods and systems for providing device-specific authentication are described. One example method includes generating device-specific credentials, associating the device-specific credentials with a device, authenticating the device based on the device-specific credentials, and after authenticating the device, authenticating a user of the device based on user-specific credentials associated with the user and different than the device-specific credentials. | 1. A method performed by one or more data processing apparatuses, the method comprising:
generating device-specific credentials, wherein the device-specific credentials are configured to be used more than one time by an associated device; associating the device-specific credentials with a device; associating a particular user of the device with user-specific credentials different than the device-specific credentials; after generating the device-specific credentials, associating the device-specific credentials with the device, and associating the particular user with the user-specific credentials, authenticating, by a data processing apparatus connected to a first network, the device based on the device-specific credentials, wherein the authentication occurs within the first network; after authenticating the device:
permitting the device to access a second network different than the first network; and
authenticating, by a data processing apparatus connected to the second network, the particular user of the device based on the user-specific credentials. 2. The method of claim 1, wherein the device-specific credentials include a device-specific username and password. 3. The method of claim 2, wherein generating the device-specific credentials includes generating a random username and password. 4. The method of claim 1, wherein authenticating the device occurs via an insecure method, and authenticating the user occurs via a secure method. 5. The method of claim 4, wherein the insecure method includes HyperText Transfer Protocol Basic Authentication (Basic Auth), and the secure method includes HyperText Transfer Protocol Secure (HTTPS). 6. The method of claim 1, wherein the device-specific credentials are generated and associated with the device based on a profile associated with the device. 7. The method of claim 1, wherein authenticating the device based on the device-specific credentials occurs without user interaction. 8. The method of claim 1, further comprising:
tracking a usage pattern of the device based on the user-specific credentials of the user of the device. 9. The method of claim 8, further comprising:
logging the user out of the device in response to the usage pattern indicating that the user has used the device for a time greater than a maximum usage time associated with the device. 10. A method performed by one or more data processing apparatuses, the method comprising:
authenticating, by a data processing apparatus connected to a first network, a first device based on a first set of device-specific credentials, wherein the authentication occurs within the first network; after authenticating the first device, permitting the first device to access a second network different than the first network; authenticating, by a data processing apparatus connected to the second network, a user based on user-specific credentials associated with the user and different than the first set of device-specific credentials, wherein the authentication occurs while the user is using the first device and occurs within the second network, wherein the user-specific credentials include a username and password; applying a first policy associated with the first device to the user while the user is using the first device; authenticating, by the data processing apparatus connected to the first network, a second device based on a second set of device-specific credentials, wherein the authentication occurs within the first network; after authenticating the second device, permitting the second device to access the second network; authenticating, by the data processing apparatus connected to the second network, the user based on the user-specific credentials, the user-specific credentials being different than the second set of device-specific credentials, wherein the authentication occurs while the user is using the second device and occurs within the second network; and applying a second policy associated with the second device to the user while the user is using the second device, the second policy being different than the first policy. 11. The method of claim 10, wherein the first device-specific credentials and the second device-specific credentials include device-specific usernames and passwords. 12. 
The method of claim 10, wherein authenticating the first and second devices occurs via an insecure method, and authenticating the user occurs via a secure method. 13. The method of claim 12, wherein the insecure method includes HyperText Transfer Protocol Basic Authentication (Basic Auth), and the secure method includes HyperText Transfer Protocol Secure (HTTPS). 14. The method of claim 10, wherein authenticating the first and second devices occurs via a secure method. 15. The method of claim 10, further comprising:
tracking a usage pattern of the first device based on the user-specific credentials of the user of the device. 16. The method of claim 15, further comprising:
logging the user out of the first device in response to the usage pattern indicating that the user has used the first device for a time greater than a maximum usage time associated with the first device. 17. A system comprising:
a hardware processor configured to execute computer program instructions; and a non-transitory computer storage medium encoded with computer program instructions that, when executed by the processor, cause the system to perform operations comprising:
generating device-specific credentials;
associating the device-specific credentials with a device;
authenticating, by a data processing apparatus connected to a first network, the device based on the device-specific credentials, wherein the authentication occurs within the first network;
after authenticating the device:
permitting the device to access a first portion of a second network different than the first network;
authenticating, by a data processing apparatus connected to the first portion of the second network, the particular user of the device based on the user-specific credentials; and
in response to authenticating the particular user, permitting the device to access a second portion of the second network different than the first portion. 18. The system of claim 17, wherein the device-specific credentials include a device-specific username and password. 19. The system of claim 18, wherein generating the device-specific credentials includes generating a random username and password. 20. The system of claim 17, wherein authenticating the device occurs via an insecure method, and authenticating the user occurs via a secure method. 21. The system of claim 20, wherein the insecure method includes HyperText Transfer Protocol Basic Authentication (Basic Auth), and the secure method includes HyperText Transfer Protocol Secure (HTTPS). 22. The system of claim 17, wherein the device-specific credentials are generated and associated with the device based on a profile associated with the device. 23. The system of claim 17, wherein authenticating the device based on the device-specific credentials occurs without user interaction. 24. The system of claim 17, the operations further comprising:
tracking a usage pattern of the device based on the user-specific credentials of the user of the device. 25. The system of claim 24, the operations further comprising:
logging the user out of the device in response to the usage pattern indicating that the user has used the device for a time greater than a maximum usage time associated with the device. 26. A system comprising:
a hardware processor configured to execute computer program instructions; and a non-transitory computer storage medium encoded with computer program instructions that, when executed by the processor, cause the system to perform operations comprising:
authenticating a first device based on a first set of device-specific credentials, wherein the authentication occurs within a first network;
after authenticating the first device, permitting the first device to access a second network different than the first network;
authenticating a user based on user-specific credentials associated with the user and different than the first set of device-specific credentials, wherein the authentication occurs while the user is using the first device and occurs within the second network, wherein the user-specific credentials include a username and password;
applying a first policy associated with the first device to the user while the user is using the first device;
authenticating a second device based on a second set of device-specific credentials, wherein the authentication occurs within the first network;
after authenticating the second device, permitting the second device to access the second network;
authenticating the user based on the user-specific credentials, the user-specific credentials being different than the second set of device-specific credentials, wherein the authentication occurs while the user is using the second device and occurs within the second network; and
applying a second policy associated with the second device to the user while the user is using the second device, the second policy being different than the first policy. 27. The system of claim 26, wherein the first device-specific credentials and the second device-specific credentials include device-specific usernames and passwords. 28. The system of claim 26, wherein authenticating the first and second devices occurs via an insecure method, and authenticating the user occurs via a secure method. 29. The system of claim 28, wherein the insecure method includes HyperText Transfer Protocol Basic Authentication (Basic Auth), and the secure method includes HyperText Transfer Protocol Secure (HTTPS). 30. The system of claim 26, wherein authenticating the first and second devices occurs via a secure method. | Methods and systems for providing device-specific authentication are described. One example method includes generating device-specific credentials, associating the device-specific credentials with a device, authenticating the device based on the device-specific credentials, and after authenticating the device, authenticating a user of the device based on user-specific credentials associated with the user and different than the device-specific credentials.1. A method performed by one or more data processing apparatuses, the method comprising:
generating device-specific credentials, wherein the device-specific credentials are configured to be used more than one time by an associated device; associating the device-specific credentials with a device; associating a particular user of the device with user-specific credentials different than the device-specific credentials; after generating the device-specific credentials, associating the device-specific credentials with the device, and associating the particular user with the user-specific credentials, authenticating, by a data processing apparatus connected to a first network, the device based on the device-specific credentials, wherein the authentication occurs within the first network; after authenticating the device:
permitting the device to access a second network different than the first network; and
authenticating, by a data processing apparatus connected to the second network, the particular user of the device based on the user-specific credentials. 2. The method of claim 1, wherein the device-specific credentials include a device-specific username and password. 3. The method of claim 2, wherein generating the device-specific credentials includes generating a random username and password. 4. The method of claim 1, wherein authenticating the device occurs via an insecure method, and authenticating the user occurs via a secure method. 5. The method of claim 4, wherein the insecure method includes HyperText Transfer Protocol Basic Authentication (Basic Auth), and the secure method includes HyperText Transfer Protocol Secure (HTTPS). 6. The method of claim 1, wherein the device-specific credentials are generated and associated with the device based on a profile associated with the device. 7. The method of claim 1, wherein authenticating the device based on the device-specific credentials occurs without user interaction. 8. The method of claim 1, further comprising:
tracking a usage pattern of the device based on the user-specific credentials of the user of the device. 9. The method of claim 8, further comprising:
logging the user out of the device in response to the usage pattern indicating that the user has used the device for a time greater than a maximum usage time associated with the device. 10. A method performed by one or more data processing apparatuses, the method comprising:
authenticating, by a data processing apparatus connected to a first network, a first device based on a first set of device-specific credentials, wherein the authentication occurs within the first network; after authenticating the first device, permitting the first device to access a second network different than the first network; authenticating, by a data processing apparatus connected to the second network, a user based on user-specific credentials associated with the user and different than the first set of device-specific credentials, wherein the authentication occurs while the user is using the first device and occurs within the second network, wherein the user-specific credentials include a username and password; applying a first policy associated with the first device to the user while the user is using the first device; authenticating, by the data processing apparatus connected to the first network, a second device based on a second set of device-specific credentials, wherein the authentication occurs within the first network; after authenticating the second device, permitting the second device to access the second network; authenticating, by the data processing apparatus connected to the second network, the user based on the user-specific credentials, the user-specific credentials being different than the second set of device-specific credentials, wherein the authentication occurs while the user is using the second device and occurs within the second network; and applying a second policy associated with the second device to the user while the user is using the second device, the second policy being different than the first policy. 11. The method of claim 10, wherein the first device-specific credentials and the second device-specific credentials include device-specific usernames and passwords. 12. 
The method of claim 10, wherein authenticating the first and second devices occurs via an insecure method, and authenticating the user occurs via a secure method. 13. The method of claim 12, wherein the insecure method includes HyperText Transfer Protocol Basic Authentication (Basic Auth), and the secure method includes HyperText Transfer Protocol Secure (HTTPS). 14. The method of claim 10, wherein authenticating the first and second devices occurs via a secure method. 15. The method of claim 10, further comprising:
tracking a usage pattern of the first device based on the user-specific credentials of the user of the first device. 16. The method of claim 15, further comprising:
logging the user out of the first device in response to the usage pattern indicating that the user has used the first device for a time greater than a maximum usage time associated with the first device. 17. A system comprising:
a hardware processor configured to execute computer program instructions; and a non-transitory computer storage medium encoded with computer program instructions that, when executed by the processor, cause the system to perform operations comprising:
generating device-specific credentials;
associating the device-specific credentials with a device;
authenticating, by a data processing apparatus connected to a first network, the device based on the device-specific credentials, wherein the authentication occurs within the first network;
after authenticating the device:
permitting the device to access a first portion of a second network different than the first network;
authenticating, by a data processing apparatus connected to the first portion of the second network, a particular user of the device based on user-specific credentials different than the device-specific credentials; and
in response to authenticating the particular user, permitting the device to access a second portion of the second network different than the first portion. 18. The system of claim 17, wherein the device-specific credentials include a device-specific username and password. 19. The system of claim 18, wherein generating the device-specific credentials includes generating a random username and password. 20. The system of claim 17, wherein authenticating the device occurs via an insecure method, and authenticating the user occurs via a secure method. 21. The system of claim 20, wherein the insecure method includes HyperText Transfer Protocol Basic Authentication (Basic Auth), and the secure method includes HyperText Transfer Protocol Secure (HTTPS). 22. The system of claim 17, wherein the device-specific credentials are generated and associated with the device based on a profile associated with the device. 23. The system of claim 17, wherein authenticating the device based on the device-specific credentials occurs without user interaction. 24. The system of claim 17, the operations further comprising:
tracking a usage pattern of the device based on the user-specific credentials of the user of the device. 25. The system of claim 24, the operations further comprising:
logging the user out of the device in response to the usage pattern indicating that the user has used the device for a time greater than a maximum usage time associated with the device. 26. A system comprising:
a hardware processor configured to execute computer program instructions; and a non-transitory computer storage medium encoded with computer program instructions that, when executed by the processor, cause the system to perform operations comprising:
authenticating a first device based on a first set of device-specific credentials, wherein the authentication occurs within a first network;
after authenticating the first device, permitting the first device to access a second network different than the first network;
authenticating a user based on user-specific credentials associated with the user and different than the first set of device-specific credentials, wherein the authentication occurs while the user is using the first device and occurs within the second network, wherein the user-specific credentials include a username and password;
applying a first policy associated with the first device to the user while the user is using the first device;
authenticating a second device based on a second set of device-specific credentials, wherein the authentication occurs within the first network;
after authenticating the second device, permitting the second device to access the second network;
authenticating the user based on the user-specific credentials, the user-specific credentials being different than the second set of device-specific credentials, wherein the authentication occurs while the user is using the second device and occurs within the second network; and
applying a second policy associated with the second device to the user while the user is using the second device, the second policy being different than the first policy. 27. The system of claim 26, wherein the first device-specific credentials and the second device-specific credentials include device-specific usernames and passwords. 28. The system of claim 26, wherein authenticating the first and second devices occurs via an insecure method, and authenticating the user occurs via a secure method. 29. The system of claim 28, wherein the insecure method includes HyperText Transfer Protocol Basic Authentication (Basic Auth), and the secure method includes HyperText Transfer Protocol Secure (HTTPS). 30. The system of claim 26, wherein authenticating the first and second devices occurs via a secure method. | 2,400 |
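For illustration only, the two-stage flow recited in claims 1, 10 and 26 — a device authenticated first with device-specific credentials, then its user authenticated with separate user-specific credentials, with a per-device policy applied to the session — can be sketched roughly as follows. This is a minimal model, not the claimed system; every class, method and policy name here is invented for the sketch:

```python
import secrets


class AuthGateway:
    """Toy model of the claimed two-stage flow: a device authenticates on a
    first network with device-specific credentials, which permits access to a
    second network; the user then authenticates there with different,
    user-specific credentials, and the device's policy governs the session."""

    def __init__(self):
        self.device_creds = {}   # device_id -> (username, password)
        self.user_creds = {}     # user_id -> password
        self.device_policy = {}  # device_id -> policy applied to sessions

    def provision_device(self, device_id, policy):
        # Claim 3: the device-specific credentials may simply be a
        # randomly generated username and password.
        creds = (secrets.token_hex(8), secrets.token_hex(16))
        self.device_creds[device_id] = creds
        self.device_policy[device_id] = policy
        return creds

    def register_user(self, user_id, password):
        # Claim 1: associate a particular user with user-specific
        # credentials different than any device's credentials.
        self.user_creds[user_id] = password

    def authenticate_device(self, device_id, creds):
        # Occurs on the first network without user interaction (claim 7);
        # success is what permits access to the second network.
        return self.device_creds.get(device_id) == creds

    def authenticate_user(self, user_id, password, device_id):
        # Occurs on the second network; on success, the policy associated
        # with the device in use is applied to the user (claims 10 and 26).
        if self.user_creds.get(user_id) != password:
            return None
        return self.device_policy[device_id]
```

Note how, as in claims 10 and 26, the same user receives a different policy depending on which authenticated device the session runs on.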
7,546 | 7,546 | 14,789,517 | 2,423 | Primary content can be provided to a first device, wherein the primary content can comprise at least a first portion and a second portion. A provider can determine a user parameter related to secondary content, interspersed with the first and second portions of the primary content, and can provide the secondary content to a second device instead of to the first device, based on the user parameter. The provider can provide the second portion of the primary content to the first device immediately following the first portion of the primary content. | 1. A method comprising:
providing primary content to a first device, wherein the primary content comprises at least a first portion and a second portion and wherein the first device is associated with a user; determining a user parameter related to secondary content; providing the secondary content to a second device instead of to the first device, based on the user parameter, wherein the second device is associated with the user; and providing the second portion of the primary content to the first device immediately following the first portion of the primary content. 2. The method of claim 1, wherein at least one of the primary content and the secondary content is stored locally on the first device. 3. The method of claim 2, wherein the secondary content is provided to the second device from the first device. 4. The method of claim 2, wherein the secondary content is provided to the second device from a content server. 5. The method of claim 1, wherein the secondary content comprises advertising content. 6. The method of claim 1, wherein the secondary content is related to the primary content. 7. The method of claim 6, further comprising:
storing a token on the second device, wherein the token restricts use of the second device; and deleting the token in response to an indication that the secondary content has been displayed using the second device. 8. A method comprising:
delivering secondary content to a first device between portions of primary content; determining a user parameter related to consumption of the secondary content; identifying a second device associated with the user parameter; determining a proximity of the first device to the second device; discontinuing delivery of the secondary content to the first device and delivering the secondary content to the second device prior to the secondary content being rendered on the first device, based on at least the user parameter and the determined proximity, and delivering the portions of the primary content to the first device. 9. The method of claim 8, further comprising:
receiving, from the second device, a secondary content display acknowledgement message upon completion of displaying the provided secondary content. 10. The method of claim 9, wherein, if the secondary content display acknowledgement message is not received within a predetermined time period, injecting the secondary content into the primary content. 11. The method of claim 8, wherein, when the proximity is determined to exceed a predetermined threshold, the secondary content is delivered to the first device instead of to the second device. 12. The method of claim 8, further comprising determining whether or not the second device is active, wherein delivering the secondary content to the second device based on at least the user parameter and the determined proximity is performed in response to the second device being determined to be active. 13. The method of claim 8, further comprising:
receiving user input at the second device; and providing alternate secondary content to the second device based on the received user input. 14. A method comprising:
determining an indication of secondary content to be delivered to a first device between portions of primary content; determining a user parameter related to the secondary content; identifying a second device associated with the user parameter; determining whether or not the second device is active in response to the indication of secondary content to be delivered; and delivering the secondary content to the second device, based on the user parameter and the determination that the second device is active. 15. The method of claim 14, wherein the user parameter comprises an address of the second device. 16. The method of claim 14, further comprising receiving, from the second device, a secondary content display acknowledgement message upon completion of displaying the provided secondary content, wherein, if the secondary content display acknowledgement message is not received within a predetermined time period, injecting the secondary content into the primary content. 17. The method of claim 16, wherein the predetermined time period is selected based on a length of the secondary content. 18. The method of claim 14, further comprising determining a proximity of the first device and the second device, wherein delivering the secondary content to the second device based on at least the user parameter and the determined proximity is performed in response to the second device being determined to be within a predetermined proximity to the first device. 19. The method of claim 14, further comprising:
receiving user input at the second device; and providing alternate secondary content to the second device based on the received user input. 20. The method of claim 14, wherein the provided secondary content is selected based on the primary content. | 2,400 |
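The routing conditions of claims 8 through 16 reduce to a small decision rule: divert the secondary (e.g., advertising) segment to the second device only when that device is active and close enough, and fall back to injecting it into the primary stream if no display acknowledgement arrives. The sketch below is illustrative only — the 5-metre default and every identifier are assumptions, not drawn from the claims:

```python
def route_secondary_content(second_device_active: bool,
                            proximity_m: float,
                            max_proximity_m: float = 5.0) -> str:
    """Pick the playback target for an interstitial (secondary) segment.

    Per claims 8, 11 and 12, the segment is diverted to the user's second
    device only when that device is active and within a proximity threshold
    of the first device; otherwise it stays in the primary stream on the
    first device.
    """
    if second_device_active and proximity_m <= max_proximity_m:
        return "second-device"
    return "first-device"


def on_ack_timeout(ack_received: bool) -> str:
    # Claims 10 and 16: if no display acknowledgement arrives within a
    # predetermined period (claim 17 selects it from the segment length),
    # re-inject the secondary content into the primary stream.
    return "done" if ack_received else "inject-into-primary"
```

Under this rule the first device never waits on the second: the primary content's second portion follows the first immediately, and the secondary content either completes on the second device or is injected back later.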
7,547 | 7,547 | 14,998,173 | 2,468 | A method of operating a first user terminal of a first user, comprising: running a first communication client application on the first user terminal so as to enable the first user terminal to participate in a group communication session over a network with respective communication clients running on other user terminals; receiving, by the first user terminal, a plurality of audio data streams, each carrying audio data generated at a respective one of the other user terminals; the first communication client associating each of the received audio data streams with a respective user of one said other user terminals; the first communication client outputting the received audio data streams through one or more audio output devices associated with the first user terminal; and independently controlling, within an audible range, an output volume level of at least a selected one of the received audio data streams output through the one or more audio output devices. | 1. A method of operating a first user terminal of a first user, comprising:
running a first communication client application on the first user terminal so as to enable the first user terminal to participate in a group communication session over a network with respective communication clients running on other user terminals; receiving, by the first user terminal, a plurality of audio data streams, each carrying audio data generated at a respective one of the other user terminals; the first communication client associating each of the received audio data streams with a respective user of one said other user terminals; the first communication client outputting the received audio data streams through one or more audio output devices associated with the first user terminal; and independently controlling, within an audible range, an output volume level of at least a selected one of the received audio data streams output through the one or more audio output devices. 2. The method of claim 1 further comprising the first user terminal transmitting an encoded audio data stream for reception by the other user terminals. 3. The method of claim 2 further comprising the first communication client generating and transmitting to a second of the user terminals a volume instruction for causing the second user terminal to play out the encoded audio data stream transmitted from the first user terminal with an output volume level denoted by the volume instruction. 4. The method of claim 3 wherein the selected audio data stream is the one associated with the second user terminal and the independently controlling step comprises changing the output volume level of the selected audio data stream in response to a local volume change input from the first user; and
wherein the volume instruction is transmitted to the second user terminal by the first user terminal also in response to the local volume change input to cause at the second user terminal a corresponding change in the output volume level of the encoded audio data stream transmitted by the first user terminal. 5. The method of claim 1 further comprising initially setting the output volume level of each of the received audio data streams to a same audible level. 6. The method of claim 5 wherein said same audible level is a proportion of an overall master volume output level of the first user terminal. 7. The method of claim 5 further comprising controlling the output volume level of each of the received audio data streams to revert back to the initially set same audible level. 8. The method of claim 1 further comprising displaying a user interface of the first communication client, the user interface comprising a plurality of user-actuatable audio volume controls respectively associated with each of the received audio data streams. 9. The method of claim 8, wherein the step of independently controlling, within an audible range, an output volume level of at least a selected one of the received audio data streams output through the one or more audio output devices further comprises actuating one or more of the audio volume controls. 10. The method of claim 1, wherein the step of independently controlling is performed by the first communication client application according to respective volume instructions received at the first user terminal from the other user terminals. 11. The method of claim 1 further comprising automatically changing, within an audible range, the output volume level of one or more of the received audio data streams; wherein said automatically changing the output volume level is based on receiving, at the first user terminal, one of the respective volume instructions which is received with said one or more of the received audio data streams. 12. 
The method of claim 11 further comprising setting a control at the first communication client to override the step of automatically changing the output volume level of one or more of the received audio data streams. 13. The method of claim 8 wherein the user-actuatable audio volume controls comprise mute control buttons for temporarily muting one or more of the received audio data streams output through the one or more audio output devices. 14. The method of claim 2 further comprising receiving a signal from one or more of the other user terminals when the transmitted encoded audio stream has been muted by a respective user at said one or more of the other user terminals but said one or more of the other user terminals are still receiving an audio input as part of the group communication session; wherein the signal indicates to the first user terminal that the respective user at said one or more of the other user terminals is in a side conversation. 15. The method of claim 1 further comprising displaying a system volume control interface suitable for controlling an overall master volume output level of all audio signals output through the one or more audio output devices. 16. The method of claim 1 further comprising the first communication client generating data relating to the step of independently controlling, within an audible range, an output volume level of at least a selected one of the received audio data streams; and transmitting said data to a remote database over the network so that the data can be monitored. 17. The method of claim 2 wherein the received audio data streams and the encoded audio data stream transmitted by the first terminal are routed via a remote server. 18. The method of claim 17 wherein the remote server is a multipoint control unit, optionally an audio-visual multipoint control unit. 19. A user terminal, comprising:
a processor configured to run a first communication client application on the user terminal so as to enable the user terminal to participate in a group communication session over a network with respective communication clients running on other user terminals; a network interface configured for receiving a plurality of audio data streams, each carrying audio data generated at a respective one of the other user terminals; wherein the first communication client is configured for: associating each of the received audio data streams with a respective user of one said other user terminals; outputting the received audio data streams through one or more audio output devices associated with the user terminal; and independently controlling, within an audible range, an output volume level of at least a selected one of the received audio data streams output through the one or more audio output devices. 20. A communication client application embodied on a computer readable storage medium and comprising code configured so as when run on a first user terminal to enable the first user terminal to participate in a group communication session over a network with respective communication clients running on other user terminals by implementing at least the following steps:
receiving, by the first user terminal, a plurality of audio data streams, each carrying audio data generated at a respective one of the other user terminals; the first communication client associating each of the received audio data streams with a respective user of one said other user terminals; the first communication client outputting the received audio data streams through one or more audio output devices associated with the first user terminal; and independently controlling, within an audible range, an output volume level of at least a selected one of the received audio data streams output through the one or more audio output devices.
a processor configured to run a first communication client application on the user terminal so as to enable the user terminal to participate in a group communication session over a network with respective communication clients running on other user terminals; a network interface configured for receiving a plurality of audio data streams, each carrying audio data generated at a respective one of the other user terminals; wherein the first communication client is configured for: associating each of the received audio data streams with a respective user of one said other user terminals; outputting the received audio data streams through one or more audio output devices associated with the user terminal; and independently controlling, within an audible range, an output volume level of at least a selected one of the received audio data streams output through the one or more audio output devices. 20. A communication client application embodied on a computer readable storage medium and comprising code configured so as when run on a first user terminal to enable the first user terminal to participate in a group communication session over a network with respective communication clients running on other user terminals by implementing at least the following steps:
receiving, by the first user terminal, a plurality of audio data streams, each carrying audio data generated at a respective one of the other user terminals; the first communication client associating each of the received audio data streams with a respective user of one said other user terminals; the first communication client outputting the received audio data streams through one or more audio output devices associated with the first user terminal; and independently controlling, within an audible range, an output volume level of at least a selected one of the received audio data streams output through the one or more audio output devices. | 2,400 |
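The per-stream volume control recited in the claims above (each received audio stream is initially set to the same audible level, a proportion of the master volume, and can then be adjusted or reverted independently) can be sketched as follows. This is a minimal illustration, not the patented implementation; the class name, the 0.8 default proportion, and all method names are assumptions for the sketch.

```python
class StreamMixer:
    """Sketch of the per-stream volume control described in the claims.

    Each received audio stream starts at the same audible level, a fixed
    proportion of the master volume, and can be independently adjusted
    within the audible range or reverted to the initial level. All names
    and the default proportion are illustrative assumptions.
    """

    DEFAULT_PROPORTION = 0.8  # assumed initial proportion of master volume

    def __init__(self, master_volume=1.0):
        self.master_volume = master_volume
        self.stream_levels = {}  # stream id -> per-stream level in [0.0, 1.0]

    def add_stream(self, stream_id):
        # Initially set every received stream to the same audible level.
        self.stream_levels[stream_id] = self.DEFAULT_PROPORTION

    def set_level(self, stream_id, level):
        # Independently control one selected stream, clamped to the
        # audible range [0.0, 1.0].
        self.stream_levels[stream_id] = max(0.0, min(1.0, level))

    def reset_all(self):
        # Revert every stream back to the initially set same level.
        for sid in self.stream_levels:
            self.stream_levels[sid] = self.DEFAULT_PROPORTION

    def output_level(self, stream_id):
        # Effective play-out level scales with the overall master volume.
        return self.stream_levels[stream_id] * self.master_volume
```

A mute control (also recited in the claims) would reduce to `set_level(stream_id, 0.0)` while remembering the prior level for un-muting.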
7,548 | 7,548 | 14,152,398 | 2,442 | Apparatus and method for migrating data within an object storage system using available storage system bandwidth. In accordance with some embodiments, a server communicates with users of the object storage system over a network. A plurality of data storage devices are grouped into zones, with each zone corresponding to a different physical location within the object storage system. A controller directs transfers of data objects between the server and the data storage devices of a selected zone. A rebalancing module directs migration of sets of data objects between zones in relation to an available bandwidth of the server. | 1. An object storage system comprising:
a server adapted to communicate with users of the object storage system over a network; a plurality of data storage devices grouped into zones each corresponding to a different physical location within the object storage system; a controller adapted to direct transfers of data objects between the server and the data storage devices of a selected zone; and a rebalancing module adapted to direct migration of sets of data objects between zones in relation to an available bandwidth of the network. 2. The object storage system of claim 1, wherein the rebalancing module is adapted to detect the available bandwidth of the network and to direct migration of the sets of data objects between zones at a rate nominally equal to the detected available bandwidth. 3. The object storage system of claim 1, wherein the proxy server has a total data transfer capacity in terms of a total possible number of units of data transferrable per unit of time, and wherein the rebalancing module detects the available bandwidth in relation to a difference between the total data transfer capacity and an existing system utilization level of the proxy server comprising an actual number of units of user data transferred per unit of time. 4. The object storage system of claim 1, wherein the rebalancing module operates to identify a sample period associated with the available bandwidth and wherein the rebalancing module directs a migration of data objects during the sample period having sufficient volume to nominally equal the available bandwidth. 5. The object storage system of claim 1, wherein the rebalancing module comprises a monitor module which identifies an existing system utilization level of the distributed object storage system in relation to an input from the server. 6. 
The object storage system of claim 1, wherein, over a succession of consecutive time periods, the rebalancing module measures an existing system utilization level, identifies a different available bandwidth for each of the consecutive time periods in relation to a difference between the existing system utilization level and an overall system data transfer capability, and directs migration operations upon different amounts of data objects for each time period so that the sum, in each time period, of the existing system utilization level and amount of migrated data objects nominally equals the overall system data transfer capability. 7. The object storage system of claim 6, wherein the rebalancing module temporarily suspends further data migration operations responsive to the existing system utilization level for a selected time period reaching a first predetermined threshold. 8. The object storage system of claim 7, wherein the rebalancing module resumes further data migration operations responsive to the existing system utilization level for a subsequent selected time period reaching a second predetermined threshold. 9. The object storage system of claim 8, wherein the first and second predetermined thresholds are equal and constitute a selected percentage of the overall system data transfer capability. 10. The object storage system of claim 6, wherein the rebalancing module temporarily suspends further data migration operations responsive to a rate of change of the system utilization level over a plurality of successive time periods. 11. 
The object storage system of claim 1, wherein the distributed object storage system is further arranged as a plurality of storage nodes with each storage node comprising a selected storage controller and a subset of the plurality of data storage devices, wherein the rebalancing module allocates a first portion of the available bandwidth to a first storage node of said plurality of storage nodes for the migration of data objects therefrom, and wherein the rebalancing module allocates a second portion of the available bandwidth to a second storage node of said plurality of storage nodes for the migration of data objects therefrom. 12. An object storage system comprising:
a plurality of storage nodes each comprising a storage controller and an associated group of data storage devices each having associated memory; a server connected to the storage nodes and configured to direct transfer of data objects between the storage nodes and at least one user device connected to the distributed object storage system; and a rebalancing module configured to identify an existing system utilization level associated with the transfer of data objects from the proxy server, to determine an overall additional data transfer capability of the distributed object storage system above the existing system utilization level, and to direct a migration of data between the storage nodes during the sample period at a rate nominally equal to the additional data transfer capability. 13. The object storage system of claim 12, wherein, over a succession of consecutive time periods, the rebalancing module measures an existing system utilization level, identifies a different available bandwidth for each of the consecutive time periods in relation to a difference between the existing system utilization level and an overall system data transfer capability, and directs migration operations upon different sets of data objects for each time period so that, in each time period, a sum of the existing system utilization level and amount of migrated data objects nominally equals the overall system data transfer capability. 14. The object storage system of claim 13, wherein the rebalancing module temporarily suspends further data migration operations responsive to the existing system utilization level for a selected time period reaching a first predetermined threshold. 15. The object storage system of claim 13, wherein the rebalancing module temporarily suspends further data migration operations responsive to a rate of change of the system utilization level over a plurality of successive time periods. 16. A computer-implemented method comprising:
arranging a plurality of data storage devices into a plurality of zones of an object storage system, each zone corresponding to a different physical location and having an associated controller; using a server to store data objects from users of the object storage system in the respective zones; detecting an available bandwidth of the server; and directing migration of data objects between the zones in relation to the detected available bandwidth. 17. The computer-implemented method of claim 16, wherein the available bandwidth of the proxy server is determined in relation to a difference between a total data transfer capacity associated with the proxy server comprising a total possible number of units of data transferrable per unit time, an existing system utilization level of the server comprising an actual number of units of user data objects transferred per unit of time, and wherein the data objects migrated between the zones comprise a number of units of user data objects transferred per unit of time that nominally matches an overall difference between the total possible number and the actual number. 18. The computer-implemented method of claim 16, further comprising, for each of a succession of consecutive time periods, measuring an existing system utilization level, identifying a different available bandwidth, and directing migration of different total amounts of data objects for each time period so that the sum of the existing system utilization level and the amount of migrated data objects during each time period nominally equals the overall system data transfer capability. 19. The computer-implemented method of claim 18, further comprising temporarily suspending further migration of data objects responsive to the existing system utilization level for a selected time period reaching a first predetermined threshold. 20. 
The computer-implemented method of claim 18, further comprising temporarily suspending further migration of data objects responsive to a rate of change of the system utilization level exceeding a slope threshold. | 2,400
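The per-period throttling behavior recited in claims 6 through 9 above (available bandwidth is the difference between overall transfer capability and measured utilization, migration suspends when utilization reaches a first threshold and resumes at a second) can be sketched as follows. This is an illustrative reading of the claim language, not the patented implementation; the class name, method names, and 90% threshold values are assumptions.

```python
class RebalanceThrottle:
    """Sketch of the per-period migration throttle in claims 6-9.

    For each time period, the migration budget is the difference between
    the overall transfer capability and the measured utilization, so the
    sum of utilization and migrated data nominally equals capacity.
    Migration suspends when utilization reaches a first threshold and
    resumes when it falls to a second; claim 9 allows both thresholds
    to be the same percentage of capacity. All names and the default
    percentages are illustrative assumptions.
    """

    def __init__(self, capacity, suspend_pct=0.9, resume_pct=0.9):
        self.capacity = capacity                  # data units per period
        self.suspend_at = suspend_pct * capacity  # first threshold (claim 7)
        self.resume_at = resume_pct * capacity    # second threshold (claim 8)
        self.suspended = False

    def migration_budget(self, utilization):
        """Data units to migrate in this period given measured utilization."""
        if self.suspended:
            if utilization <= self.resume_at:     # claim 8: resume migration
                self.suspended = False
            else:
                return 0
        if utilization >= self.suspend_at:        # claim 7: suspend migration
            self.suspended = True
            return 0
        # Claim 6: migrate the difference so utilization plus migrated
        # data nominally equals the overall transfer capability.
        return self.capacity - utilization
```

Claim 10's variant would additionally suspend on the *rate of change* of utilization across successive periods, which could be layered on by tracking the previous period's measurement.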
7,549 | 7,549 | 13,865,362 | 2,492 | An apparatus for generating trusted image data includes an image data generator, a processor and an output unit. The image data generator generates image data of an image to be taken of a three-dimensional scene and trust data of the three-dimensional scene. The trust data indicates a depth information of at least one pixel of the image to be taken or comprises data capable of being used to calculate a depth information of at least one pixel of the image to be taken. The processor generates encrypted image data by encrypting at least the trust data or characteristic data derivable from at least the trust data, so that an authentication of the image data is enabled based on the encrypted image data. The output unit provides trusted image data including the encrypted image data. | 1. An apparatus for generating trusted image data, the apparatus comprising:
an image data generator configured to generate image data of an image to be taken of a three-dimensional scene and trust data of the three-dimensional scene, wherein the trust data indicates a depth information of at least one pixel of the image to be taken or comprises data capable of being used to calculate a depth information of at least one pixel of the image to be taken; a processor configured to generate encrypted image data by encrypting at least the trust data or characteristic data derivable from at least the trust data; and an output unit configured to provide trusted image data comprising the encrypted image data. 2. The apparatus according to claim 1, wherein the trust data comprises absolute depth information of at least one pixel of the image to be taken or comprises data capable of being used to calculate absolute depth information of at least one pixel of the image to be taken. 3. The apparatus according to claim 1, wherein the image data generator comprises a time-of-flight camera or a sensor system based on a structured light principle configured to generate two-dimensional image data of the image to be taken and trust data indicating individual depth information for a plurality of pixels of the image to be taken. 4. The apparatus according to claim 1, wherein the image data generator comprises a stereoscopic camera configured to generate image data of a first image to be taken of a three-dimensional scene and image data of a second image to be taken of the three-dimensional scene, wherein the stereoscopic camera is configured to take the first image and the second image from different view angles, wherein at least a part of the image data of the second image represents data capable of being used to calculate a depth information of at least one pixel of the image to be taken. 5. The apparatus according to claim 1, wherein the processor is configured to encrypt data by asymmetric encryption. 6. 
The apparatus according to claim 1, comprising a memory unit configured to store a private key of an encryption algorithm used by the processor for generating the encrypted image data, wherein the output unit is configured to provide a public key of the encryption algorithm. 7. The apparatus according to claim 1, wherein the processor is configured to generate characteristic data derivable from at least the trust data by calculating hash data based on at least the trust data. 8. The apparatus according to claim 1, further comprising a position determiner configured to determine a position of the apparatus and enable an addition of position data indicating the determined position to the trust data. 9. The apparatus according to claim 8, further comprising an internal clock configured to enable to add time data or position confidence data to the trust data, wherein the time data indicates a time, the position of the apparatus was determined by the position determiner at last, or the position confidence data indicates a confidence level of the determined position based on the time data. 10. The apparatus according to claim 1, further comprising an internal clock configured to enable to add time data indicating a time, the image is taken, to the trust data. 11. The apparatus according to claim 10, further comprising a receiver configured to receive a clock synchronization signal from an external clock, wherein the internal clock is configured to be synchronized with the external clock based on the clock synchronization signal, wherein the internal clock is configured to enable to add synchronization data or time confidence data to the trust data, wherein the synchronization data indicates a time, the time of the internal clock was synchronized at last, or the time confidence data indicates a confidence level of the time data based on the synchronization data. 12. 
The apparatus according to claim 1, wherein the image data generator comprises a camera configured to generate the image data of the image to be taken and trust data indicating individual depth information for a plurality of pixels of the image to be taken and at least one parameter of the camera enabling a computation of absolute depth information of at least one pixel of the image to be taken. 13. The apparatus according to claim 1, further comprising an additional sensor comprising a compass or a three-dimensional angle sensor configured to determine orientation data to be added to the trust data. 14. The apparatus according to claim 1, wherein the output unit is configured to provide image data of a taken image of a three-dimensional scene or trust data of the three-dimensional scene. 15. An apparatus for generating trusted image data, the apparatus comprising:
an image data generator configured to generate image data of an image to be taken from a scene; a position determiner configured to determine a position of the apparatus and configured to provide trust data indicating the determined position; a clock configured to add time data or position confidence data to the trust data, wherein the time data indicates a time, the position of the apparatus was determined by the position determiner at last, or the position confidence data indicates a confidence level of the determined position based on the time data; a processor configured to generate encrypted image data by encrypting at least the trust data or characteristic data derivable from at least the trust data, so that an authentication of the image data is enabled based on the encrypted image data; and an output unit configured to provide trusted image data comprising the encrypted image data. 16. An apparatus for generating trusted image data, the apparatus comprising:
an image data generator configured to generate image data of an image to be taken from a scene; a clock configured to provide trust data representing time data indicating a time, the image is taken; a receiver configured to receive a clock synchronization signal from an external clock, wherein the clock is configured to be synchronized with the external clock based on the clock synchronization signal, wherein the clock is configured to add synchronization data or time confidence data to the trust data, wherein the synchronization data indicates a time, the time of the clock was synchronized at last, or the time confidence data indicates a confidence level of the time data based on the synchronization data; and a processor configured to generate encrypted image data by encrypting at least the trust data or characteristic data derivable from at least the trust data, so that an authentication of the image data is enabled based on the encrypted image data; and an output unit configured to provide trusted image data comprising the encrypted image data. | An apparatus for generating trusted image data includes an image data generator, a processor and an output unit. The image data generator generates image data of an image to be taken of a three-dimensional scene and trust data of the three-dimensional scene. The trust data indicates a depth information of at least one pixel of the image to be taken or comprises data capable of being used to calculate a depth information of at least one pixel of the image to be taken. The processor generates encrypted image data by encrypting at least the trust data or characteristic data derivable from at least the trust data, so that an authentication of the image data is enabled based on the encrypted image data. The output unit provides trusted image data including the encrypted image data.1. An apparatus for generating trusted image data, the apparatus comprising:
an image data generator configured to generate image data of an image to be taken of a three-dimensional scene and trust data of the three-dimensional scene, wherein the trust data indicates a depth information of at least one pixel of the image to be taken or comprises data capable of being used to calculate a depth information of at least one pixel of the image to be taken; a processor configured to generate encrypted image data by encrypting at least the trust data or characteristic data derivable from at least the trust data; and an output unit configured to provide trusted image data comprising the encrypted image data. 2. The apparatus according to claim 1, wherein the trust data comprises absolute depth information of at least one pixel of the image to be taken or comprises data capable of being used to calculate absolute depth information of at least one pixel of the image to be taken. 3. The apparatus according to claim 1, wherein the image data generator comprises a time-of-flight camera or a sensor system based on a structured light principle configured to generate two-dimensional image data of the image to be taken and trust data indicating individual depth information for a plurality of pixels of the image to be taken. 4. The apparatus according to claim 1, wherein the image data generator comprises a stereoscopic camera configured to generate image data of a first image to be taken of a three-dimensional scene and image data of a second image to be taken of the three-dimensional scene, wherein the stereoscopic camera is configured to take the first image and the second image from different view angles, wherein at least a part of the image data of the second image represents data capable of being used to calculate a depth information of at least one pixel of the image to be taken. 5. The apparatus according to claim 1, wherein the processor is configured to encrypt data by asymmetric encryption. 6. 
The apparatus according to claim 1, comprising a memory unit configured to store a private key of an encryption algorithm used by the processor for generating the encrypted image data, wherein the output unit is configured to provide a public key of the encryption algorithm. 7. The apparatus according to claim 1, wherein the processor is configured to generate characteristic data derivable from at least the trust data by calculating hash data based on at least the trust data. 8. The apparatus according to claim 1, further comprising a position determiner configured to determine a position of the apparatus and enable an addition of position data indicating the determined position to the trust data. 9. The apparatus according to claim 8, further comprising an internal clock configured to enable adding time data or position confidence data to the trust data, wherein the time data indicates a time at which the position of the apparatus was last determined by the position determiner, or the position confidence data indicates a confidence level of the determined position based on the time data. 10. The apparatus according to claim 1, further comprising an internal clock configured to enable adding, to the trust data, time data indicating a time at which the image is taken. 11. The apparatus according to claim 10, further comprising a receiver configured to receive a clock synchronization signal from an external clock, wherein the internal clock is configured to be synchronized with the external clock based on the clock synchronization signal, wherein the internal clock is configured to enable adding synchronization data or time confidence data to the trust data, wherein the synchronization data indicates a time at which the internal clock was last synchronized, or the time confidence data indicates a confidence level of the time data based on the synchronization data. 12. 
The apparatus according to claim 1, wherein the image data generator comprises a camera configured to generate the image data of the image to be taken and trust data indicating individual depth information for a plurality of pixels of the image to be taken and at least one parameter of the camera enabling a computation of absolute depth information of at least one pixel of the image to be taken. 13. The apparatus according to claim 1, further comprising an additional sensor comprising a compass or a three-dimensional angle sensor configured to determine orientation data to be added to the trust data. 14. The apparatus according to claim 1, wherein the output unit is configured to provide image data of a taken image of a three-dimensional scene or trust data of the three-dimensional scene. 15. An apparatus for generating trusted image data, the apparatus comprising:
an image data generator configured to generate image data of an image to be taken from a scene; a position determiner configured to determine a position of the apparatus and configured to provide trust data indicating the determined position; a clock configured to add time data or position confidence data to the trust data, wherein the time data indicates a time at which the position of the apparatus was last determined by the position determiner, or the position confidence data indicates a confidence level of the determined position based on the time data; a processor configured to generate encrypted image data by encrypting at least the trust data or characteristic data derivable from at least the trust data, so that an authentication of the image data is enabled based on the encrypted image data; and an output unit configured to provide trusted image data comprising the encrypted image data. 16. An apparatus for generating trusted image data, the apparatus comprising:
an image data generator configured to generate image data of an image to be taken from a scene; a clock configured to provide trust data representing time data indicating a time at which the image is taken; a receiver configured to receive a clock synchronization signal from an external clock, wherein the clock is configured to be synchronized with the external clock based on the clock synchronization signal, wherein the clock is configured to add synchronization data or time confidence data to the trust data, wherein the synchronization data indicates a time at which the clock was last synchronized, or the time confidence data indicates a confidence level of the time data based on the synchronization data; and a processor configured to generate encrypted image data by encrypting at least the trust data or characteristic data derivable from at least the trust data, so that an authentication of the image data is enabled based on the encrypted image data; and an output unit configured to provide trusted image data comprising the encrypted image data. | 2,400 |
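Claims 1 and 7 above describe deriving "characteristic data" from the trust data by hashing it, then protecting that hash so the image data can later be authenticated. The following is a minimal sketch of that hash-and-verify flow, assuming illustrative field names and JSON canonicalization; the claims call for asymmetric encryption with a device-held private key, for which the stdlib HMAC here is only a stand-in.

```python
import hashlib
import hmac
import json

# Placeholder for the device's key (claim 6 stores a private key in a
# memory unit; HMAC with a shared secret is a stdlib stand-in for that).
SECRET_KEY = b"device-private-key"

def characteristic_data(trust_data: dict) -> bytes:
    """Claim 7 sketch: derive characteristic data as a hash of the trust data."""
    canonical = json.dumps(trust_data, sort_keys=True).encode()
    return hashlib.sha256(canonical).digest()

def make_trusted_record(image_data: bytes, trust_data: dict) -> dict:
    """Claim 1 sketch: bundle image data, trust data, and the protected hash."""
    digest = characteristic_data(trust_data)
    tag = hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()
    return {"image": image_data, "trust": trust_data, "tag": tag}

def authenticate(record: dict) -> bool:
    """Verify the trust data has not been altered since capture."""
    digest = characteristic_data(record["trust"])
    expected = hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])
```

Any change to the recorded depth, time, or position data alters the hash, so verification fails, which is the authentication property the claims rely on.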
7,550 | 7,550 | 14,337,669 | 2,488 | The disclosure provides a noise filter. The noise filter includes a motion estimation (ME) engine. The ME engine receives a current frame and a reference frame. The current frame includes a current block and the reference frame includes a plurality of reference blocks. The ME engine generates final motion vectors. The current block comprises a plurality of current pixels. A motion compensation unit generates a motion compensated block based on the final motion vectors and the reference frame. The motion compensated block includes a plurality of motion compensated pixels. A weighted average filter multiplies each current pixel of the plurality of current pixels and a corresponding motion compensated pixel of the plurality of motion compensated pixels with a first weight and a second weight respectively. The weighted average filter generates a filtered block. A blockiness removal unit is coupled to the weighted average filter and removes artifacts in the filtered block. | 1. A noise filter comprising:
a motion estimation (ME) engine configured to receive a current frame and a reference frame, the current frame comprising a current block and the reference frame comprising a plurality of reference blocks, the ME engine configured to generate final motion vectors, the current block comprises a plurality of current pixels; a motion compensation unit coupled to the ME engine and configured to generate a motion compensated block based on the final motion vectors and the reference frame, the motion compensated block comprises a plurality of motion compensated pixels; a weighted average filter configured to multiply each current pixel of the plurality of current pixels and a corresponding motion compensated pixel of the plurality of motion compensated pixels with a first weight and a second weight respectively, wherein the product of the current pixels and the first weight is summed with the product of the corresponding motion compensated pixels and the second weight to generate a filtered block; and a blockiness removal unit coupled to the weighted average filter and configured to remove artifacts in the filtered block. 2. The noise filter of claim 1 further comprising a filtered frame buffer coupled to the blockiness removal unit and configured to store the filtered block, wherein the filtered block comprises a plurality of filtered pixels. 3. The noise filter of claim 1 further configured to receive a plurality of frames, wherein the plurality of frames comprises the current frame and the reference frame, wherein a reference block of the plurality of reference blocks comprises a plurality of reference pixels. 4. The noise filter of claim 1, wherein the ME engine is configured to estimate a sum of absolute difference (SAD) between the current block and the reference block, the SAD is estimated by adding the absolute value of differences between a current pixel and a corresponding reference pixel over all the pixels of the current block and the reference block respectively. 5. 
The noise filter of claim 1, wherein the motion vectors provide a location of the reference block with respect to the current block and the ME engine is configured to estimate a motion vector cost between the current block and the reference block, the motion vector cost is estimated from the motion vectors associated with the current block. 6. The noise filter of claim 1, wherein the ME engine is configured to estimate a cost function between the current block and the reference block, the cost function is estimated by summing the SAD, between the current block and the reference block, and a product of a motion smoothness value (MSV) and the motion vector cost between the current block and the reference block. 7. The noise filter of claim 1 further comprising a motion smoothness factor (MSF) engine coupled to the ME engine and configured to estimate the motion smoothness value (MSV) corresponding to the current block, wherein the MSV is estimated from a current noise level and the MSV is defined between a maximum MSV and a minimum MSV. 8. The noise filter of claim 7, wherein the MSV is equal to the minimum MSV when a sum of the motion vectors associated with the current block and the motion vectors associated with a set of adjacent blocks is above a threshold. 9. The noise filter of claim 7, wherein the current noise level is estimated between the current frame and the reference frame by averaging the SAD for all blocks with motion vectors below a predefined threshold. 10. The noise filter of claim 1, wherein the ME engine is configured to select a final SAD and the final motion vectors associated with the current block corresponding to a reference block of the plurality of reference blocks for which the cost function is minimum. 11. 
The noise filter of claim 10, wherein the ME engine is configured to estimate chrominance motion vectors associated with the current block, wherein the chrominance motion vectors are estimated from the final motion vectors and a last bit of the final motion vectors is masked with zero when the final motion vectors are odd. 12. The noise filter of claim 11, wherein the ME engine is configured to estimate a chrominance SAD between the current block and the reference block for which the cost function is minimum, the chrominance SAD is estimated from the chrominance motion vectors. 13. The noise filter of claim 1, wherein the ME engine is configured to estimate a combined SAD associated with the current block by summing the final SAD and the chrominance SAD. 14. The noise filter of claim 1 further comprising a weight computation unit coupled to the ME engine and configured to estimate the first weight and the second weight using the combined SAD associated with the current block, an average combined SAD associated with a previous frame and a blending factor. 15. The noise filter of claim 14, wherein the blending factor for a motion block is lower than the blending factor for a static block. 16. The noise filter of claim 14, wherein the blending factor is defined between a maximum blending factor and a minimum blending factor such that the blending factor for the motion block is the minimum blending factor and the blending factor for the static block is between the maximum blending factor and the minimum blending factor. 17. The noise filter of claim 1, wherein the blockiness removal unit is a de-blocking filter and a set of parameters associated with the de-blocking filter are adjusted to perform at least one of a strong filtering, a moderate filtering and a weak filtering. 18. The noise filter of claim 1 coupled to a spatial filter, the spatial filter comprising:
a median filter configured to receive a filtered pixel of the plurality of filtered pixels from the noise filter and configured to generate a median pixel;
a subtracter coupled to the median filter and configured to subtract the median pixel from the filtered pixel to generate a subtracted pixel;
a soft coring unit configured to receive the subtracted pixel and a noise function, the noise function is estimated from the current noise level, the soft coring unit configured to perform a soft coring function on the subtracted pixel and generate an adjusted pixel; and
an adder coupled to the soft coring unit and configured to sum the adjusted pixel and the median pixel to generate a spatial filtered pixel. 19. A method of filtering noise comprising:
generating final motion vectors from a current frame and a reference frame, wherein the current frame comprises a current block, the current block comprises a plurality of current pixels and the reference frame comprises a plurality of reference blocks; generating a motion compensated block based on the final motion vectors and the reference frame, the motion compensated block comprises a plurality of motion compensated pixels; generating a filtered block by summing a product of the current pixels and a first weight and a product of corresponding motion compensated pixels and a second weight, the filtered block comprises a plurality of filtered pixels; removing artifacts in the filtered block; and storing the filtered block. 20. The method of claim 19, wherein generating the final motion vectors comprises:
estimating a motion smoothness value (MSV) corresponding to the current block based on a current noise level; estimating a sum of absolute difference (SAD) between the current block and a reference block; estimating a motion vector cost between the current block and the reference block; estimating a cost function between the current block and the reference block by summing the SAD, between the current block and the reference block, and a product of the MSV and the motion vector cost between the current block and the reference block; and selecting a final SAD and motion vectors associated with the current block corresponding to a reference block of the plurality of reference blocks for which the cost function is minimum. 21. The method of claim 19 further comprising:
estimating a chrominance SAD between the current block and the reference block;
estimating a combined SAD associated with the current block by summing the final SAD and the chrominance SAD; and
estimating the first weight and the second weight using the combined SAD associated with the current block, an average SAD associated with a previous frame and a blending factor. 22. The method of claim 19 further comprising:
generating a median pixel from a filtered pixel of the plurality of filtered pixels;
subtracting the median pixel from the filtered pixel to generate a subtracted pixel;
estimating a noise function from the current noise level;
performing a soft coring function on the subtracted pixel to generate an adjusted pixel, wherein the soft coring function is estimated from the noise function; and
summing the adjusted pixel and the median pixel to generate a spatial filtered pixel. 23. A computing device comprising:
a processing unit; a memory module coupled to the processing unit; a video processing unit coupled to the processing unit and the memory module, the video processing unit comprising a noise filter, the noise filter comprising:
a motion estimation (ME) engine configured to receive a current frame and a reference frame, the current frame comprising a current block and the reference frame comprising a plurality of reference blocks, the ME engine configured to generate final motion vectors, the current block comprises a plurality of current pixels;
a motion compensation unit coupled to the ME engine and configured to generate a motion compensated block based on the final motion vectors and the reference frame, the motion compensated block comprises a plurality of motion compensated pixels;
a weighted average filter configured to multiply each current pixel of the plurality of current pixels and a corresponding motion compensated pixel of the plurality of motion compensated pixels with a first weight and a second weight respectively, wherein the product of the current pixels and the first weight is summed with the product of the corresponding motion compensated pixels and the second weight to generate a filtered block; and
a blockiness removal unit coupled to the weighted average filter and configured to remove artifacts in the filtered block. | | 2,400 |
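The arithmetic at the heart of claims 1, 4, 6, and 11 of the noise filter reduces to a few small steps: the SAD between two blocks, the cost function SAD + MSV × motion-vector cost, the weighted average of current and motion-compensated pixels, and masking the last bit of an odd motion vector for chrominance. A minimal Python sketch over flattened pixel lists follows; the block contents, weights, MSV, and motion-vector cost values are illustrative assumptions, not values from the disclosure.

```python
def sad(current_block, reference_block):
    """Claim 4 sketch: sum of absolute pixel differences between two blocks."""
    return sum(abs(c - r) for c, r in zip(current_block, reference_block))

def cost(current_block, reference_block, mv_cost, msv):
    """Claim 6 sketch: SAD plus the motion smoothness value times the MV cost."""
    return sad(current_block, reference_block) + msv * mv_cost

def weighted_average(current_block, compensated_block, w1, w2):
    """Claim 1 sketch: each filtered pixel is w1 * current + w2 * compensated."""
    return [w1 * c + w2 * m for c, m in zip(current_block, compensated_block)]

def chroma_mv(mv):
    """Claim 11 sketch: mask the last bit with zero when the final MV is odd."""
    return mv & ~1
```

The ME engine would evaluate `cost` for every candidate reference block and keep the motion vectors with the minimum cost; the weights `w1` and `w2` come from the weight computation unit of claim 14.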
7,551 | 7,551 | 13,199,964 | 2,432 | According to a first aspect of the present invention there is provided a method of scanning a computer device in order to detect potential malware when an operating system running on the computer device prevents applications installed on the device from accessing installed files of other applications installed on the device. The method includes the steps of detecting installation of an application on the device, identifying one or more installation files that are required to perform the installation of the application, and performing a malware scan of the identified installation files and/or information obtained from the installation files. | 1. A method of scanning a computer device in order to detect potential malware when an operating system running on the computer device prevents applications installed on the device from accessing installed files of other applications installed on the device, the method comprising the steps of:
detecting installation of an application on the device; identifying one or more installation files that are required to perform the installation of the application; and performing a malware scan of the identified installation files and/or information obtained from the installation files. 2. A method as claimed in claim 1, wherein the step of performing a malware scan of the identified installation files and/or information obtained from these installation files is implemented at one or more of:
installation of the application; and after the installation of the application has been completed. 3. A method as claimed in claim 1, wherein the information obtained from the installation files comprises one or more of:
a hash of the installation files; a hash of any files contained within the installation files; and a hash of signer certificate data relating to the components of the application. 4. A method as claimed in claim 1, wherein the step of detecting installation of an application on the device comprises one or more of:
receiving a notification that an application is to be installed or has been installed on the device; and intercepting a function call, message or event indicating that an application is to be installed or has been installed on the device. 5. A method as claimed in claim 1, wherein the step of performing a malware scan of the identified installation files and/or information obtained from these installation files comprises:
comparing the installation files and/or information obtained from these installation files with malware identification information. 6. A method as claimed in claim 5, wherein the malware identification information is provided by a malware identification database. 7. A method as claimed in claim 5, wherein the step of comparing the installation files and/or information obtained from these installation files with malware identification information further comprises one or more of:
comparing the installation files with signatures that identify potential malware; and comparing the installation files with heuristic rules that identify potential malware. 8. A method as claimed in claim 2, and further comprising:
when it is desired to perform a malware scan of the device after the installation of the application has been completed, performing a malware scan of the installation files that were used to perform the installation of the application. 9. A method as claimed in claim 8, and further comprising:
identifying applications installed on the device, and performing a malware scan of installation files stored on the device that were used to perform installation of each installed application. 10. A method as claimed in claim 1, and further comprising:
at installation of the application, storing the information obtained from the installation files; and when it is desired to perform a malware scan of the device after the installation of the application has been completed, performing a malware scan of the stored information obtained from the installation files. 11. A computer program, comprising computer readable code which, when run on a computer device, causes the computer device to perform the method as claimed in claim 1. 12. A computer program product comprising a computer readable medium and a computer program as claimed in claim 11, wherein the computer program is stored on the computer readable medium. 13. A computer device comprising:
a processor for detecting installation of an application on the device, identifying one or more installation files that are required to perform the installation of the application, and for performing a malware scan of the identified installation files and/or information obtained from the installation files. 14. A computer device as claimed in claim 13, wherein the processor is configured to perform a malware scan of the identified installation files and/or information obtained from these installation files at one or more of:
installation of the application; and after the installation of the application has been completed. 15. A computer device as claimed in claim 13, wherein the processor is configured to obtain information from the installation files that comprises one or more of:
a hash of the installation files; a hash of any files contained within the installation files; and a hash of a signer certificate data relating to the components of the application. 16. A computer device as claimed in claim 13, wherein, to detect installation of an application on the device, the processor is configured to perform one or more of:
receiving a notification that an application is to be installed or has been installed on the device; and intercepting a function call, message or event indicating that an application is to be installed or has been installed on the device. 17. A computer device as claimed in claim 13, wherein the processor is configured to perform a malware scan of the identified installation files and/or information obtained from these installation files that comprises:
comparing the installation files and/or information obtained from these installation files with malware identification information. 18. A computer device as claimed in claim 17, wherein the computer device is configured to obtain the malware identification information from a malware identification database. 19. A computer device as claimed in claim 17, wherein, to compare the installation files and/or information obtained from these installation files with malware identification information, the processor is configured to perform one or more of:
comparing the installation files with signatures that identify potential malware; and comparing the installation files with heuristic rules that identify potential malware. 20. A computer device as claimed in claim 13, wherein, when it is desired to perform a malware scan of the device after the installation of the application has been completed, the processor is configured to perform a malware scan of the installation files that were used to perform the installation of the application. 21. A computer device as claimed in claim 20, wherein the processor is configured to identify applications installed on the device and perform a malware scan of installation files stored on the device that were used to perform installation of each installed application. 22. A computer device as claimed in claim 13, wherein the processor is configured to store the information obtained from the installation files at installation of the application, and, when it is desired to perform a malware scan of the device after the installation of the application has been completed, to perform a malware scan of the stored information obtained from the installation files. 23. A method of scanning a computer device in order to detect potential malware when an operating system running on the computer device prevents applications installed on the device from accessing installed files of other applications installed on the device, the method comprising:
detecting installation of an application on the device; identifying one or more installation files that are required to perform the installation of the application; obtaining information from the identified installation files and storing the information; and when it is desired to perform a malware scan of the device after the installation of the application has been completed, performing a malware scan of the stored information obtained from the installation files. 24. A computer program, comprising computer readable code which, when run on a computer device, causes the computer device to perform the method as claimed in claim 23. 25. A computer program product comprising a computer readable medium and a computer program as claimed in claim 24, wherein the computer program is stored on the computer readable medium. 26. A computer device comprising:
a processor for detecting installation of an application on the device, identifying one or more installation files that are required to perform the installation of the application, obtaining information from the identified installation files and ensuring that the information is stored, and, when it is desired to perform a malware scan of the device after the installation of the application has been completed, performing a malware scan of the stored information obtained from the installation files. | According to a first aspect of the present invention there is provided a method of scanning a computer device in order to detect potential malware when an operating system running on the computer device prevents applications installed on the device from accessing installed files of other applications installed on the device. The method includes the steps of detecting installation of an application on the device, identifying one or more installation files that are required to perform the installation of the application, and performing a malware scan of the identified installation files and/or information obtained from the installation files.1. A method of scanning a computer device in order to detect potential malware when an operating system running on the computer device prevents applications installed on the device from accessing installed files of other applications installed on the device, the method comprising the steps of:
detecting installation of an application on the device; identifying one or more installation files that are required to perform the installation of the application; and performing a malware scan of the identified installation files and/or information obtained from the installation files. 2. A method as claimed in claim 1, wherein the step of performing a malware scan of the identified installation files and/or information obtained from these installation files is implemented at one or more of:
installation of the application; and after the installation of the application has been completed. 3. A method as claimed in claim 1, wherein the information obtained from the installation files comprises one or more of:
a hash of the installation files; a hash of any files contained within the installation files; and a hash of a signer certificate data relating to the components of the application. 4. A method as claimed in claim 1, wherein the step of detecting installation of an application on the device comprises one or more of:
receiving a notification that an application is to be installed or has been installed on the device; and intercepting a function call, message or event indicating that an application is to be installed or has been installed on the device. 5. A method as claimed in claim 1, wherein the step of performing a malware scan of the identified installation files and/or information obtained from these installation files comprises:
comparing the installation files and/or information obtained from these installation files with malware identification information. 6. A method as claimed in claim 5, wherein the malware identification information is provided by a malware identification database. 7. A method as claimed in claim 5, wherein the step of comparing the installation files and/or information obtained from these installation files with malware identification information further comprises one or more of:
comparing the installation files with signatures that identify potential malware; and comparing the installation files with heuristic rules that identify potential malware. 8. A method as claimed in claim 2, and further comprising:
when it is desired to perform a malware scan of the device after the installation of the application has been completed, performing a malware scan of the installation files that were used to perform the installation of the application. 9. A method as claimed in claim 8, and further comprising:
identifying applications installed on the device, and performing a malware scan of installation files stored on the device that were used to perform installation of each installed application. 10. A method as claimed in claim 1, and further comprising:
at installation of the application, storing the information obtained from the installation files; and when it is desired to perform a malware scan of the device after the installation of the application has been completed, performing a malware scan of the stored information obtained from the installation files. 11. A computer program, comprising computer readable code which, when run on a computer device, causes the computer device to perform the method as claimed in claim 1. 12. A computer program product comprising a computer readable medium and a computer program as claimed in claim 11, wherein the computer program is stored on the computer readable medium. 13. A computer device comprising:
a processor for detecting installation of an application on the device, identifying one or more installation files that are required to perform the installation of the application, and for performing a malware scan of the identified installation files and/or information obtained from the installation files. 14. A computer device as claimed in claim 13, wherein the processor is configured to perform a malware scan of the identified installation files and/or information obtained from these installation files at one or more of:
installation of the application; and after the installation of the application has been completed. 15. A computer device as claimed in claim 13, wherein the processor is configured to obtain information from the installation files that comprises one or more of:
a hash of the installation files; a hash of any files contained within the installation files; and a hash of a signer certificate data relating to the components of the application. 16. A computer device as claimed in claim 13, wherein, to detect installation of an application on the device, the processor is configured to perform one or more of:
receiving a notification that an application is to be installed or has been installed on the device; and intercepting a function call, message or event indicating that an application is to be installed or has been installed on the device. 17. A computer device as claimed in claim 13, wherein the processor is configured to perform a malware scan of the identified installation files and/or information obtained from these installation files that comprises:
comparing the installation files and/or information obtained from these installation files with malware identification information. 18. A computer device as claimed in claim 17, wherein the computer device is configured to obtain the malware identification information from a malware identification database. 19. A computer device as claimed in claim 17, wherein, to compare the installation files and/or information obtained from these installation files with malware identification information, the processor is configured to perform one or more of:
comparing the installation files with signatures that identify potential malware; and comparing the installation files with heuristic rules that identify potential malware. 20. A computer device as claimed in claim 13, wherein, when it is desired to perform a malware scan of the device after the installation of the application has been completed, the processor is configured to perform a malware scan of the installation files that were used to perform the installation of the application. 21. A computer device as claimed in claim 20, wherein the processor is configured to identify applications installed on the device and perform a malware scan of installation files stored on the device that were used to perform installation of each installed application. 22. A computer device as claimed in claim 13, wherein the processor is configured to store the information obtained from the installation files at installation of the application, and, when it is desired to perform a malware scan of the device after the installation of the application has been completed, to perform a malware scan of the stored information obtained from the installation files. 23. A method of scanning a computer device in order to detect potential malware when an operating system running on the computer device prevents applications installed on the device from accessing installed files of other applications installed on the device, the method comprising:
detecting installation of an application on the device; identifying one or more installation files that are required to perform the installation of the application; obtaining information from the identified installation files and storing the information; and when it is desired to perform a malware scan of the device after the installation of the application has been completed, performing a malware scan of the stored information obtained from the installation files. 24. A computer program, comprising computer readable code which, when run on a computer device, causes the computer device to perform the method as claimed in claim 23. 25. A computer program product comprising a computer readable medium and a computer program as claimed in claim 24, wherein the computer program is stored on the computer readable medium. 26. A computer device comprising:
a processor for detecting installation of an application on the device, identifying one or more installation files that are required to perform the installation of the application, obtaining information from the identified installation files and ensuring that the information is stored, and, when it is desired to perform a malware scan of the device after the installation of the application has been completed, performing a malware scan of the stored information obtained from the installation files. | 2,400 |
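The hash-and-compare scan described in the claims above (compute hashes of installation files, match them against malware identification information, and optionally store the hashes at install time so the check can be repeated after installation, per claims 10 and 22–26) can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: the claims do not fix a hash algorithm (SHA-256 is assumed here), and `KNOWN_BAD_HASHES`, `scan_installation_files`, `store_install_info`, and `scan_stored_info` are hypothetical names; a real scanner would also consult a malware identification database and heuristic rules.

```python
import hashlib

# Hypothetical signature store: hex digests of known-malicious installer files.
KNOWN_BAD_HASHES = {hashlib.sha256(b"evil").hexdigest()}

def file_hash(data: bytes) -> str:
    """Hash of an installation file (algorithm is an assumption; SHA-256 used here)."""
    return hashlib.sha256(data).hexdigest()

def scan_installation_files(files: dict[str, bytes]) -> list[str]:
    """Scan at install time: return names of installation files whose hash
    matches the malware identification information."""
    return [name for name, data in files.items()
            if file_hash(data) in KNOWN_BAD_HASHES]

def store_install_info(files: dict[str, bytes]) -> dict[str, str]:
    """At installation, record hashes of the installation files so they can be
    re-checked later, even if the OS prevents access to other apps' installed files."""
    return {name: file_hash(data) for name, data in files.items()}

def scan_stored_info(stored: dict[str, str]) -> list[str]:
    """Post-installation scan over the stored information only (claims 22-26)."""
    return [name for name, digest in stored.items() if digest in KNOWN_BAD_HASHES]
```

A later signature update would simply be re-run against the stored hashes, which is the point of keeping the install-time information around.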
7,552 | 7,552 | 14,852,519 | 2,435 | Techniques are provided for blocking forgiveness in a system that mitigates distributed denial of service (DDoS) attacks on a network. A user's network address can be blocked as a result of performing human behavior analysis on network resource request activity from the user's address. The system can block an address temporarily based on its behavior, misclassifying legitimate human users as malicious attackers performing a DDoS attack. But subsequent behavioral analysis of network resource requests can identify that the user should not have been blocked. The system can automatically unblock the user's address, and allow further network resource requests. Previously blocked requests can also be unblocked. The number of infractions (e.g., actions classified as malicious) can be tracked and compared to a threshold. If the number is less than the threshold, then that address is not blocked, thereby allowing forgiveness of a certain number of infractions. | 1. A method comprising:
receiving, at a mitigation system, a plurality of requests for one or more network resources to which the mitigation system is providing a mitigation service; identifying a first request of the plurality of requests as occurring within the first observation cycle; classifying the first request as a bad request based on one or more properties of the first request; adding a first address associated with the first request to a block list for blocking requests from the first address for a specified time period; identifying a second request of the plurality of requests as being transmitted from the first address and as occurring within a second observation cycle, the second request occurring within the specified time period; classifying the second request as a good request based on one or more properties of the second request; and removing the first address from the block list, thereby allowing a future request from the first address to be transmitted to the one or more network resources. 2. The method of claim 1, wherein the second request is blocked. 3. The method of claim 1, further comprising:
transmitting the future request to the one or more network resources, wherein the future request would have been blocked in the specified time period without the removal of the first address from the block list. 4. The method of claim 1, further comprising:
adding a first address associated with the first request to a bad address list of the first observation cycle; and reconciling a good address list of the first observation cycle with the bad address list to determine the block list. 5. The method of claim 1, wherein adding a first address associated with the first request to a bad address list is based on one or more other requests in the first observation cycle that are from the first address. 6. The method of claim 1, further comprising analyzing the first request to determine the one or more properties of the first request. 7. The method of claim 1, wherein the one or more properties of the first request indicate a request for a network resource that is on a list of prohibited network resources. 8. The method of claim 1, wherein the one or more properties of the second request indicate a request for a network resource that is on a list of allowed network resources. 9. The method of claim 1, wherein the plurality of requests are received as a set of web server log files, wherein each request comprises a network resource and the requesting address. 10. The method of claim 9, further comprising:
analyzing each request from the first set and placing each requesting address into the bad address list if the request is not a member of the allowed network resource list and the request is a member of the prohibited network resource list, and otherwise placing the requesting address in a good address list. 11. The method of claim 10, further comprising:
placing each address in the first bad address list into a block list if the address is not contained in a whitelisted addresses list. 12. The method of claim 11, further comprising:
sending the block list to a firewall system. 13. A method of operating a mitigation system, the method comprising:
analyzing a plurality of requests for one or more network resources corresponding to a network to which the mitigation system is providing a mitigation service; classifying a first request as a bad request based on one or more properties of the first request, the first request occurring in a first observation cycle and associated with a first address; incrementing a counter for the first address based on the classification of the first request as a bad request; incrementing the counter for the first address for each additional request of the plurality of requests that is associated with the first address and that is classified as a bad request; comparing the counter to a threshold number; adding the first address to a bad address list when the counter exceeds the threshold number. | 2,400 |
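The block-and-forgive behavior claimed above (classify each request per observation cycle, count infractions per address, block an address only when its counter exceeds a threshold, and unblock an address that a later cycle classifies as good) can be sketched as follows. Everything here is an illustrative assumption: the `Mitigator` class, the `PROHIBITED` resource list, the `THRESHOLD` value, and the reconciliation order are not fixed by the claims.

```python
PROHIBITED = {"/wp-login.php", "/xmlrpc.php"}  # hypothetical prohibited-resource list
THRESHOLD = 3                                  # infractions forgiven before blocking

class Mitigator:
    def __init__(self):
        self.block_list: set[str] = set()
        self.counters: dict[str, int] = {}   # per-address infraction counters

    def observe_cycle(self, requests: list[tuple[str, str]]) -> None:
        """Process one observation cycle of (address, resource) request pairs."""
        good, bad = set(), set()
        for addr, resource in requests:
            if resource in PROHIBITED:
                # Bad request: count the infraction; block only past the threshold.
                self.counters[addr] = self.counters.get(addr, 0) + 1
                if self.counters[addr] > THRESHOLD:
                    bad.add(addr)
            else:
                good.add(addr)
        self.block_list |= bad
        # Forgiveness: reconcile against the good list, so an address seen
        # behaving well in this cycle is removed from the block list.
        self.block_list -= good
```

A production system would additionally honor a whitelist and push the resulting block list to a firewall (claims 11 and 12); this sketch keeps only the cycle/counter/forgiveness core.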
7,553 | 7,553 | 14,596,796 | 2,457 | Methods and apparatuses for managing received data by a client device and indicating data removal management by a server. A method for managing received data by a client device includes receiving a message including information about a number of modes for removal of the data from a buffer at the client. The method also includes selecting a mode for removal of the data from the buffer with a maximum required buffer size among the modes indicated by the information about the modes in the received message and removing the data from the buffer based on the identified mode. A method for indicating data removal management by a server includes generating and sending a message including information about a number of modes for removal of received data from a buffer at a client device. The information indicates, for each of the modes, a type of mode for removal of the data. | 1. A method for managing received data by a client device, the method comprising:
receiving a message including information about a number of modes for removal of the data from a buffer at the client device; selecting a mode for removal of the data from the buffer with a maximum required buffer size among the modes indicated by the information about the modes in the received message; and removing the data from the buffer based on the identified mode. 2. The method of claim 1, wherein removing the data from the buffer based on the identified mode comprises:
calculating an initial delay before starting removal of the data from the buffer; and calculating a rate of removing the data from the buffer. 3. The method of claim 1, wherein the modes comprise a mode where the client device removes complete Moving Picture Experts Group (MPEG) media transport (MMT) processing units (MPUs), a mode where the client device removes complete movie fragments, and a mode where the client device removes complete MMT fragmentation units (MFUs). 4. The method of claim 1, wherein the message is a hypothetical receiver buffer model (HRBM) removal message. 5. The method of claim 1, wherein the buffer is a Moving Picture Experts Group (MPEG) media transport protocol (MMTP) de-capsulation buffer. 6. The method of claim 1, wherein removing the data from the buffer comprises forwarding the data for presentation to a user. 7. A method for indicating data removal management by a server, the method comprising:
generating a message including information about a number of modes for removal of received data from a buffer at a client device, the information indicating, for each of the modes, a type of mode for removal of the data, and sending the message to the client device. 8. The method of claim 7, wherein the modes comprise a mode where the client device removes complete Moving Picture Experts Group (MPEG) media transport (MMT) processing units (MPUs), a mode where the client device removes complete movie fragments, and a mode where the client device removes complete MMT fragmentation units (MFUs). 9. The method of claim 7, wherein the information in the message further indicates a required maximum size of the buffer. 10. The method of claim 7, wherein the message is a hypothetical receiver buffer model (HRBM) removal message. 11. An apparatus in a client device for managing received data, the apparatus comprising:
a memory comprising a buffer configured to at least temporarily store the data; a receiver configured to receive a message including information about a number of modes for removal of the data from a buffer at the client device; and a controller configured to select a mode for removal of the data from the buffer with a maximum required buffer size among the modes indicated by the information about the modes in the received message, and remove the data from the buffer based on the identified type of mode. 12. The apparatus of claim 11, wherein the controller is further configured to calculate an initial delay before starting removal of the data from the buffer, and calculate a rate of removing the data from the buffer. 13. The apparatus of claim 11, wherein the modes comprise a mode where the client device removes complete Moving Picture Experts Group (MPEG) media transport (MMT) processing units (MPUs), a mode where the client device removes complete movie fragments, and a mode where the client device removes complete MMT fragmentation units (MFUs). 14. The apparatus of claim 11, wherein the message is a hypothetical receiver buffer model (HRBM) removal message. 15. The apparatus of claim 11, wherein the buffer is a Moving Picture Experts Group (MPEG) media transport protocol (MMTP) de-capsulation buffer. 16. The apparatus of claim 11, wherein the controller is further configured to forward the data from the buffer for presentation to a user. 17. An apparatus for indicating data removal management, the apparatus comprising:
a controller configured to generate a message including information about a number of modes for removal of received data from a buffer at a client device, the information indicating, for each of the modes, a type of mode for removal of the data; and a transmitter configured to send the message to the client device. 18. The apparatus of claim 17, wherein the modes comprise a mode where the client device removes complete Moving Picture Experts Group (MPEG) media transport (MMT) processing units (MPUs), a mode where the client device removes complete movie fragments, and a mode where the client device removes complete MMT fragmentation units (MFUs). 19. The apparatus of claim 17, wherein the information in the message further indicates a required maximum size of the buffer. 20. The apparatus of claim 17, wherein the message is a hypothetical receiver buffer model (HRBM) removal message. | Methods and apparatuses for managing received data by a client device and indicating data removal management by a server. A method for managing received data by a client device includes receiving a message including information about a number of modes for removal of the data from a buffer at the client. The method also includes selecting a mode for removal of the data from the buffer with a maximum required buffer size among the modes indicated by the information about the modes in the received message and removing the data from the buffer based on the identified mode. A method for indicating data removal management by a server includes generating and sending a message including information about a number of modes for removal of received data from a buffer at a client device. The information indicates, for each of the modes, a type of mode for removal of the data.1. A method for managing received data by a client device, the method comprising:
receiving a message including information about a number of modes for removal of the data from a buffer at the client device; selecting a mode for removal of the data from the buffer with a maximum required buffer size among the modes indicated by the information about the modes in the received message; and removing the data from the buffer based on the identified mode. 2. The method of claim 1, wherein removing the data from the buffer based on the identified mode comprises:
calculating an initial delay before starting removal of the data from the buffer; and calculating a rate of removing the data from the buffer. 3. The method of claim 1, wherein the modes comprise a mode where the client device removes complete Moving Picture Experts Group (MPEG) media transport (MMT) processing units (MPUs), a mode where the client device removes complete movie fragments, and a mode where the client device removes complete MMT fragmentation units (MFUs). 4. The method of claim 1, wherein the message is a hypothetical receiver buffer model (HRBM) removal message. 5. The method of claim 1, wherein the buffer is a Moving Picture Experts Group (MPEG) media transport protocol (MMTP) de-capsulation buffer. 6. The method of claim 1, wherein removing the data from the buffer comprises forwarding the data for presentation to a user. 7. A method for indicating data removal management by a server, the method comprising:
generating a message including information about a number of modes for removal of received data from a buffer at a client device, the information indicating, for each of the modes, a type of mode for removal of the data, and sending the message to the client device. 8. The method of claim 7, wherein the modes comprise a mode where the client device removes complete Moving Picture Experts Group (MPEG) media transport (MMT) processing units (MPUs), a mode where the client device removes complete movie fragments, and a mode where the client device removes complete MMT fragmentation units (MFUs). 9. The method of claim 7, wherein the information in the message further indicates a required maximum size of the buffer. 10. The method of claim 7, wherein the message is a hypothetical receiver buffer model (HRBM) removal message. 11. An apparatus in a client device for managing received data, the apparatus comprising:
a memory comprising a buffer configured to at least temporarily store the data; a receiver configured to receive a message including information about a number of modes for removal of the data from a buffer at the client device; and a controller configured to select a mode for removal of the data from the buffer with a maximum required buffer size among the modes indicated by the information about the modes in the received message, and remove the data from the buffer based on the selected mode. 12. The apparatus of claim 11, wherein the controller is further configured to calculate an initial delay before starting removal of the data from the buffer, and calculate a rate of removing the data from the buffer. 13. The apparatus of claim 11, wherein the modes comprise a mode where the client device removes complete Moving Picture Experts Group (MPEG) media transport (MMT) processing units (MPUs), a mode where the client device removes complete movie fragments, and a mode where the client device removes complete MMT fragmentation units (MFUs). 14. The apparatus of claim 11, wherein the message is a hypothetical receiver buffer model (HRBM) removal message. 15. The apparatus of claim 11, wherein the buffer is a Moving Picture Experts Group (MPEG) media transport protocol (MMTP) de-capsulation buffer. 16. The apparatus of claim 11, wherein the controller is further configured to forward the data from the buffer for presentation to a user. 17. An apparatus for indicating data removal management, the apparatus comprising:
a controller configured to generate a message including information about a number of modes for removal of received data from a buffer at a client device, the information indicating, for each of the modes, a type of mode for removal of the data; and a transmitter configured to send the message to the client device. 18. The apparatus of claim 17, wherein the modes comprise a mode where the client device removes complete Moving Picture Experts Group (MPEG) media transport (MMT) processing units (MPUs), a mode where the client device removes complete movie fragments, and a mode where the client device removes complete MMT fragmentation units (MFUs). 19. The apparatus of claim 17, wherein the information in the message further indicates a required maximum size of the buffer. 20. The apparatus of claim 17, wherein the message is a hypothetical receiver buffer model (HRBM) removal message. | 2,400 |
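The claim set above describes a concrete selection rule (pick the removal mode with the maximum required buffer size among those advertised in the HRBM message) plus a removal schedule (an initial delay and a removal rate). A minimal Python sketch of that logic follows; the field names and the delay/rate formulas are illustrative assumptions, not taken from the MMT specification.

```python
from dataclasses import dataclass

@dataclass
class RemovalMode:
    # Hypothetical field names; the claims only name the concepts.
    mode_type: str              # e.g. "MPU", "movie_fragment", "MFU"
    required_buffer_size: int   # bytes the de-capsulation buffer must hold

def select_removal_mode(modes):
    """Claim 1: select the mode with the maximum required buffer size
    among the modes indicated in the received message."""
    return max(modes, key=lambda m: m.required_buffer_size)

def removal_schedule(mode, arrival_rate_bps, fixed_delay_ms):
    """Claim 2: calculate an initial delay before removal starts and a
    rate of removing data. The formulas are a placeholder sketch: wait
    long enough to fill the required buffer, then drain at the arrival
    rate so the buffer neither grows nor underruns."""
    fill_time_ms = (mode.required_buffer_size * 8 * 1000) // max(arrival_rate_bps, 1)
    initial_delay_ms = fixed_delay_ms + fill_time_ms
    removal_rate_bps = arrival_rate_bps
    return initial_delay_ms, removal_rate_bps
```

For example, with modes requiring 4096, 16384, and 1024 bytes, the movie-fragment mode (16384 bytes) would be selected, and at 1 Mbit/s its buffer fills in about 131 ms.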
7,554 | 7,554 | 14,630,577 | 2,451 | A process composes, at a first computing device, a plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the plurality of messages. Further, the process sends, from the first computing device, the plurality of messages to a second computing device. | 1. A method comprising:
composing, at a first computing device, a plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the plurality of messages; and sending, from the first computing device, the plurality of messages to a second computing device. 2. The method of claim 1, wherein each of the plurality of messages includes a request for data from the second computing device. 3. The method of claim 2, wherein each of the plurality of messages includes a service from which the data is requested. 4. The method of claim 3, wherein each of the plurality of messages includes a verb that specifies an action for the service to perform. 5. The method of claim 4, wherein the verb is selected from the group consisting of: a create command, a read command, an update command, a delete command, and a merge command. 6. The method of claim 4, wherein each of the plurality of messages includes a function that is modified by the verb. 7. The method of claim 6, wherein the function is a transactional function. 8. The method of claim 6, wherein the function is a metadata function. 9. The method of claim 1, wherein the single predetermined immutable message structure is utilized by original code or updated code processed by the first computing device. 10. The method of claim 1, wherein the single predetermined immutable message structure is utilized by original code or updated code processed by the second computing device. 11. A computer program product comprising a computer useable storage device having a computer readable program, wherein the computer readable program when executed on a computer causes the computer to:
compose, at a first computing device, a plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the plurality of messages; and send, from the first computing device, the plurality of messages to a second computing device. 12. The computer program product of claim 11, wherein each of the plurality of messages includes a request for data from the second computing device. 13. The computer program product of claim 12, wherein each of the plurality of messages includes a service from which the data is requested. 14. The computer program product of claim 13, wherein each of the plurality of messages includes a verb that specifies an action for the service to perform. 15. The computer program product of claim 14, wherein the verb is selected from the group consisting of: a create command, a read command, an update command, a delete command, and a merge command. 16. The computer program product of claim 14, wherein each of the plurality of messages includes a function that is modified by the verb. 17. The computer program product of claim 16, wherein the function is a transactional function. 18. The computer program product of claim 16, wherein the function is a metadata function. 19. A method comprising:
receiving, at a first computing device, a first plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the first plurality of messages; performing a plurality of actions based upon content of the first plurality of messages; composing, at the first computing device, a plurality of second messages according to the messaging protocol, the plurality of second messages having content associated with the plurality of actions; and sending, from the first computing device, the plurality of second messages to a second computing device. 20. A system comprising:
a first computing device that composes a first plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the first plurality of messages and sends the first plurality of messages; and a second computing device that receives the first plurality of messages, performs a plurality of actions based upon content of the first plurality of messages, composes a plurality of second messages according to the messaging protocol, and sends the plurality of second messages to the first computing device, the plurality of second messages having content associated with the plurality of actions. | A process composes, at a first computing device, a plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the plurality of messages. Further, the process sends, from the first computing device, the plurality of messages to a second computing device.1. A method comprising:
composing, at a first computing device, a plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the plurality of messages; and sending, from the first computing device, the plurality of messages to a second computing device. 2. The method of claim 1, wherein each of the plurality of messages includes a request for data from the second computing device. 3. The method of claim 2, wherein each of the plurality of messages includes a service from which the data is requested. 4. The method of claim 3, wherein each of the plurality of messages includes a verb that specifies an action for the service to perform. 5. The method of claim 4, wherein the verb is selected from the group consisting of: a create command, a read command, an update command, a delete command, and a merge command. 6. The method of claim 4, wherein each of the plurality of messages includes a function that is modified by the verb. 7. The method of claim 6, wherein the function is a transactional function. 8. The method of claim 6, wherein the function is a metadata function. 9. The method of claim 1, wherein the single predetermined immutable message structure is utilized by original code or updated code processed by the first computing device. 10. The method of claim 1, wherein the single predetermined immutable message structure is utilized by original code or updated code processed by the second computing device. 11. A computer program product comprising a computer useable storage device having a computer readable program, wherein the computer readable program when executed on a computer causes the computer to:
compose, at a first computing device, a plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the plurality of messages; and send, from the first computing device, the plurality of messages to a second computing device. 12. The computer program product of claim 11, wherein each of the plurality of messages includes a request for data from the second computing device. 13. The computer program product of claim 12, wherein each of the plurality of messages includes a service from which the data is requested. 14. The computer program product of claim 13, wherein each of the plurality of messages includes a verb that specifies an action for the service to perform. 15. The computer program product of claim 14, wherein the verb is selected from the group consisting of: a create command, a read command, an update command, a delete command, and a merge command. 16. The computer program product of claim 14, wherein each of the plurality of messages includes a function that is modified by the verb. 17. The computer program product of claim 16, wherein the function is a transactional function. 18. The computer program product of claim 16, wherein the function is a metadata function. 19. A method comprising:
receiving, at a first computing device, a first plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the first plurality of messages; performing a plurality of actions based upon content of the first plurality of messages; composing, at the first computing device, a plurality of second messages according to the messaging protocol, the plurality of second messages having content associated with the plurality of actions; and sending, from the first computing device, the plurality of second messages to a second computing device. 20. A system comprising:
a first computing device that composes a first plurality of messages according to a messaging protocol that has a single predetermined immutable message structure for the first plurality of messages and sends the first plurality of messages; and a second computing device that receives the first plurality of messages, performs a plurality of actions based upon content of the first plurality of messages, composes a plurality of second messages according to the messaging protocol, and sends the plurality of second messages to the first computing device, the plurality of second messages having content associated with the plurality of actions. | 2,400 |
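The messaging-protocol claims above hinge on one idea: every message, old code or new, uses a single predetermined immutable structure carrying a service, a verb (create/read/update/delete/merge), and a function the verb modifies. A short Python sketch of such a structure follows; the field and function names are illustrative assumptions, since the claims name only the concepts.

```python
from enum import Enum
from typing import Any, Dict, NamedTuple

class Verb(Enum):
    """Claim 5: verb selected from create, read, update, delete, merge."""
    CREATE = "create"
    READ = "read"
    UPDATE = "update"
    DELETE = "delete"
    MERGE = "merge"

class Message(NamedTuple):
    """Single immutable message structure shared by all messages
    (claims 1, 3, 4, 6). Field names are hypothetical."""
    service: str              # service from which data is requested
    verb: Verb                # action for the service to perform
    function: str             # function modified by the verb
    payload: Dict[str, Any]   # request data

def compose(service: str, verb: Verb, function: str, payload: Dict[str, Any]) -> Message:
    """Compose a message conforming to the fixed structure."""
    return Message(service, verb, function, payload)
```

Using a `NamedTuple` makes the structure immutable at runtime, which mirrors the "immutable message structure" language: updated code can add new services or functions as payload values without ever changing the message shape itself.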
7,555 | 7,555 | 14,029,725 | 2,486 | Provided herein is an apparatus comprising a photon detecting array configured to take images of an article, and a mount configured to mount and translate the article in a direction by a sub-pixel distance. In some embodiments, the sub-pixel distance is based on a pixel size of the photon detecting array. | 1. An apparatus, comprising:
a light source for illuminating an article; a photon detecting array comprising a fixed pixel resolution; and a means for producing a composite image of the article, or a portion thereof, at a greater pixel resolution than the fixed pixel resolution of the photon detecting array by translating and imaging the article at sub-pixel distances. 2. The apparatus of claim 1 further comprising a lens, wherein the lens is a telecentric lens. 3. The apparatus of claim 1, wherein the photon detecting array comprises a complementary metal-oxide semiconductor (“CMOS”), a scientific complementary metal-oxide semiconductor (“sCMOS”), or a charge-coupled device (“CCD”). 4. The apparatus of claim 1, wherein the fixed pixel resolution of the photon detecting array is at least 5 megapixels. 5. The apparatus of claim 1, wherein the greater pixel resolution is at least two times greater than the fixed pixel resolution of the photon detecting array. 6. The apparatus of claim 1, wherein the greater pixel resolution is at least 100 times greater than the fixed pixel resolution of the photon detecting array. 7. The apparatus of claim 1 wherein the means for producing a composite image of the article includes a computer configured to:
record an initial image of the article at an initial location;
iteratively cause the mount to translate the article a sub-pixel distance to a subsequent location and image the article in the subsequent location; and
combine the images from each location to produce the composite image at the greater pixel resolution than the fixed pixel resolution of the photon detecting array. 8. The apparatus of claim 7, wherein the computer is further configured to:
determine the sub-pixel distance to translate the mount to the subsequent location based on a pixel size of the photon detecting array, a magnification value of a lens of the apparatus, and the greater pixel resolution. 9. The apparatus of claim 7, wherein images from each location are enhanced by a predetermined value. 10. The apparatus of claim 7, wherein
the physical position of the photon detecting array and the light source are fixed; the article is a disk; and the computer is further configured to identify disk defects. 11. An apparatus comprising:
a photon detecting array configured to take images of an article; and a mount configured to support and translate the article by a sub-pixel distance, wherein the sub-pixel distance is based on a pixel size of the photon detecting array. 12. The apparatus of claim 11, wherein the apparatus is configured to produce an image of the article that is of the pixel size of the photon detecting array and is at a greater pixel resolution than a pixel resolution of the photon detecting array. 13. The apparatus of claim 11 further comprising a computer configured to:
record an initial image of the article at an initial location;
iteratively cause the mount to translate the article the sub-pixel distance to a subsequent location and record a subsequent image of the article in the subsequent location; and
combine the images from each location to produce a composite image at a greater pixel resolution than a pixel resolution of the photon detecting array. 14. The apparatus of claim 13, wherein the computer is further configured to:
determine the sub-pixel distance to translate the article, wherein the determining is based
on the pixel size of the photon detecting array,
on a magnification value of a lens of the apparatus, and
on an enhancement value n, wherein n is between 2 and 10,000, inclusive; and
produce the composite image with a pixel resolution that is n times greater than the pixel resolution of the photon detecting array. 15. The apparatus of claim 11, wherein the photon detecting array remains in a fixed position while the article is translated in the direction by the sub-pixel distance. 16. A method, comprising:
receiving from a photon detecting array an initial image of an article at an initial location; translating the article a sub-pixel distance to a subsequent location and generating a subsequent image of the article at the subsequent location; and combining the initial image and the subsequent image to generate a composite image at a greater pixel resolution than a pixel resolution of the photon detecting array. 17. The method of claim 16, wherein
generating the composite image comprises combining n² images, and the composite image includes a pixel resolution that is n times greater than the pixel resolution of the photon detecting array. 18. The method of claim 17, wherein translating the article the sub-pixel distance comprises translating the article 1/n of a pixel size of the photon detecting array. 19. The method of claim 17, wherein n is between 2 and 10,000, inclusive. 20. The method of claim 16, further comprising:
determining the sub-pixel distance based on a pixel size of the photon detecting array, a magnification value of a lens, and an enhancement value n, wherein
the greater pixel resolution is n times greater than the pixel resolution of the photon detecting array, and
a camera includes the photon detecting array and the lens. | Provided herein is an apparatus comprising a photon detecting array configured to take images of an article, and a mount configured to mount and translate the article in a direction by a sub-pixel distance. In some embodiments, the sub-pixel distance is based on a pixel size of the photon detecting array.1. An apparatus, comprising:
a light source for illuminating an article; a photon detecting array comprising a fixed pixel resolution; and a means for producing a composite image of the article, or a portion thereof, at a greater pixel resolution than the fixed pixel resolution of the photon detecting array by translating and imaging the article at sub-pixel distances. 2. The apparatus of claim 1 further comprising a lens, wherein the lens is a telecentric lens. 3. The apparatus of claim 1, wherein the photon detecting array comprises a complementary metal-oxide semiconductor (“CMOS”), a scientific complementary metal-oxide semiconductor (“sCMOS”), or a charge-coupled device (“CCD”). 4. The apparatus of claim 1, wherein the fixed pixel resolution of the photon detecting array is at least 5 megapixels. 5. The apparatus of claim 1, wherein the greater pixel resolution is at least two times greater than the fixed pixel resolution of the photon detecting array. 6. The apparatus of claim 1, wherein the greater pixel resolution is at least 100 times greater than the fixed pixel resolution of the photon detecting array. 7. The apparatus of claim 1 wherein the means for producing a composite image of the article includes a computer configured to:
record an initial image of the article at an initial location;
iteratively cause the mount to translate the article a sub-pixel distance to a subsequent location and image the article in the subsequent location; and
combine the images from each location to produce the composite image at the greater pixel resolution than the fixed pixel resolution of the photon detecting array. 8. The apparatus of claim 7, wherein the computer is further configured to:
determine the sub-pixel distance to translate the mount to the subsequent location based on a pixel size of the photon detecting array, a magnification value of a lens of the apparatus, and the greater pixel resolution. 9. The apparatus of claim 7, wherein images from each location are enhanced by a predetermined value. 10. The apparatus of claim 7, wherein
the physical position of the photon detecting array and the light source are fixed; the article is a disk; and the computer is further configured to identify disk defects. 11. An apparatus comprising:
a photon detecting array configured to take images of an article; and a mount configured to support and translate the article by a sub-pixel distance, wherein the sub-pixel distance is based on a pixel size of the photon detecting array. 12. The apparatus of claim 11, wherein the apparatus is configured to produce an image of the article that is of the pixel size of the photon detecting array and is at a greater pixel resolution than a pixel resolution of the photon detecting array. 13. The apparatus of claim 11 further comprising a computer configured to:
record an initial image of the article at an initial location;
iteratively cause the mount to translate the article the sub-pixel distance to a subsequent location and record a subsequent image of the article in the subsequent location; and
combine the images from each location to produce a composite image at a greater pixel resolution than a pixel resolution of the photon detecting array. 14. The apparatus of claim 13, wherein the computer is further configured to:
determine the sub-pixel distance to translate the article, wherein the determining is based
on the pixel size of the photon detecting array,
on a magnification value of a lens of the apparatus, and
on an enhancement value n, wherein n is between 2 and 10,000, inclusive; and
produce the composite image with a pixel resolution that is n times greater than the pixel resolution of the photon detecting array. 15. The apparatus of claim 11, wherein the photon detecting array remains in a fixed position while the article is translated in the direction by the sub-pixel distance. 16. A method, comprising:
receiving from a photon detecting array an initial image of an article at an initial location; translating the article a sub-pixel distance to a subsequent location and generating a subsequent image of the article at the subsequent location; and combining the initial image and the subsequent image to generate a composite image at a greater pixel resolution than a pixel resolution of the photon detecting array. 17. The method of claim 16, wherein
generating the composite image comprises combining n² images, and the composite image includes a pixel resolution that is n times greater than the pixel resolution of the photon detecting array. 18. The method of claim 17, wherein translating the article the sub-pixel distance comprises translating the article 1/n of a pixel size of the photon detecting array. 19. The method of claim 17, wherein n is between 2 and 10,000, inclusive. 20. The method of claim 16, further comprising:
determining the sub-pixel distance based on a pixel size of the photon detecting array, a magnification value of a lens, and an enhancement value n, wherein
the greater pixel resolution is n times greater than the pixel resolution of the photon detecting array, and
a camera includes the photon detecting array and the lens. | 2,400 |
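The imaging claims above (17 and 18 in particular) specify the core super-resolution step: capture n² low-resolution frames, each with the article translated by 1/n of a pixel, and combine them into a composite with n times the pixel resolution. A simplified Python sketch of that interleaving follows, using plain nested lists; the dictionary keying of frames by their (i, j) sub-pixel shift is an assumed convention, and a real system would also register and de-blur the frames.

```python
def composite_from_subpixel_shifts(images, n):
    """Interleave n*n low-resolution frames, taken at sub-pixel shifts
    of 1/n of a pixel, into one composite with n-times the pixel
    resolution in each axis. images[(i, j)] is the low-res frame
    captured with the article translated i/n pixels vertically and
    j/n pixels horizontally (hypothetical keying convention)."""
    h = len(images[(0, 0)])        # low-res height in pixels
    w = len(images[(0, 0)][0])     # low-res width in pixels
    out = [[0] * (w * n) for _ in range(h * n)]
    for (i, j), img in images.items():
        for y in range(h):
            for x in range(w):
                # Each shifted frame fills one sub-pixel phase of the grid.
                out[y * n + i][x * n + j] = img[y][x]
    return out
```

With n = 2 this combines 4 frames into a composite whose pixel grid is twice as fine in each axis, matching the claim-17 relationship between the number of frames (n²) and the resolution gain (n).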
7,556 | 7,556 | 14,910,166 | 2,463 | A Radio Network Node, RNN, ( 106 ) and a method therein for load balancing in a wireless communications network ( 100 ). The RNN is configured to communicate with a wireless device ( 108 ) supporting both normal coverage and extended coverage. The RNN and the wireless device are operating in the wireless communications network.
The RNN determines an indication informing the wireless device that it, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network.
Further, the RNN transmits the indication to the wireless device. | 1-50. (canceled) 51. A method performed by a radio network node for load balancing in a wireless communications network, wherein the radio network node is configured to communicate with a wireless device supporting both normal coverage and extended coverage, wherein the radio network node and the wireless device are operating in the wireless communications network, and wherein the method comprises:
determining an indication informing the wireless device that the wireless device, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network; and transmitting the indication to the wireless device. 52. The method of claim 51, further comprising:
determining a need for load balancing between a legacy resource and an extended coverage resource, wherein the legacy and the extended coverage resources are downlink legacy and extended coverage resources or uplink legacy and extended coverage resources; and wherein the determining of the indication comprises: determining the indication based on the need for load balancing. 53. The method of claim 52, wherein the determining of the need for load balancing between the legacy resource and the extended coverage resource, further comprises:
determining a load balancing between the legacy resource and the extended coverage resource based on one or more device identities. 54. The method of claim 51, wherein the determining of the indication further comprises:
determining the indication based on one or more device identities. 55. The method of claim 53, wherein the one or more device identities are given by one or more of a Packet Temporary Mobile Subscriber Identity (P-TMSI), a Temporary Logical Link Identifier (TLLI), or an International Mobile Subscriber Identity (IMSI). 56. The method of claim 52, wherein the determining of the need for load balancing between the legacy resource and the extended coverage resource, further comprises:
determining a load balancing between the legacy resource and the extended coverage resource based on a priority associated with the wireless device. 57. The method of claim 51, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 58. The method of claim 57, wherein the transmitting of the indication comprises transmitting the indication in one of: a System Information message using the BCCH and/or the EC-BCCH; a synchronization message using the SCH and/or the EC-SCH; and an access grant or paging message using the CCCH and/or the EC-CCCH. 59. The method of claim 51, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 60. A radio network node for load balancing in a wireless communications network, wherein the radio network node is configured to communicate with a wireless device supporting both normal coverage and extended coverage, wherein the radio network node and the wireless device are operating in the wireless communications network, and wherein the radio network node is configured to:
determine an indication informing the wireless device that it, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network; and transmit the indication to the wireless device. 61. The radio network node of claim 60, further being configured to:
determine the indication based on one or more device identities. 62. The radio network node of claim 60, further being configured to:
determine a need for load balancing between a legacy resource and an extended coverage resource, wherein the legacy and the extended coverage resources are downlink legacy and extended coverage resources or uplink legacy and extended coverage resources; and wherein the radio network node is configured to determine the indication by further being configured to: determine the indication based on the need for load balancing. 63. The radio network node of claim 62, wherein the radio network node is configured to determine the need for load balancing between the legacy resource and the extended coverage resource, by further being configured to:
determine a load balancing between the legacy resource and the extended coverage resource based on one or more device identities. 64. The radio network node of claim 61, wherein the one or more device identities are given by one or more of a Packet Temporary Mobile Subscriber Identity (P-TMSI), a Temporary Logical Link Identifier (TLLI), or an International Mobile Subscriber Identity (IMSI). 65. The radio network node of claim 62, wherein the radio network node is configured to determine the need for load balancing between the legacy resource and the extended coverage resource, by further being configured to:
determine a load balancing between the legacy resource and the extended coverage resource based on a priority associated with the wireless device. 66. The radio network node of claim 61, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 67. The radio network node of claim 66, wherein the radio network node is configured to transmit the indication by further being configured to transmit the indication in one of: a System Information message using the BCCH and/or the EC-BCCH; a synchronization message using the SCH and/or the EC-SCH; and an access grant or paging message using the CCCH and/or the EC-CCCH. 68. The radio network node of claim 60, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 69. A method performed by a wireless device for load balancing in a wireless communications network, wherein the wireless device supports both normal coverage and extended coverage, wherein the wireless device and a radio network node are operating in the wireless communications network, and wherein the method comprises:
receiving, from the radio network node, an indication informing the wireless device that a wireless device in normal coverage is to monitor one of a downlink legacy resource and a downlink extended coverage resource for information and/or is to use one of an uplink legacy resource and an uplink extended coverage resource when accessing the wireless communications network, and when the wireless device is in normal coverage, monitoring the one of the downlink legacy and extended coverage resources for information and/or accessing the wireless communications network using the one of the uplink legacy and extended coverage resources. 70. The method of claim 69, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 71. The method of claim 70, wherein the receiving of the indication comprises receiving the indication in one of: a System Information message on the BCCH and/or the EC-BCCH; a synchronization message on the SCH and/or the EC-SCH; and an access grant or paging message on the CCCH and/or the EC-CCCH. 72. The method of claim 69, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 73. The method of claim 69, wherein the indication is based on one or more device identities, and wherein the monitoring of the one of the downlink legacy and extended coverage resources for information further comprises:
monitoring the one of the downlink legacy and extended coverage resources when a device identity of the wireless device corresponds to the one or more device identities, and/or wherein the accessing of the wireless communications network using the one of the uplink legacy and extended coverage resources further comprises: accessing the communications network using the one of the uplink legacy and extended coverage resources when the identity of the wireless device corresponds to the one or more device identities. 74. The method of claim 73, wherein the one or more device identities are given by one or more of a Packet Temporary Mobile Subscriber Identity (P-TMSI), a Temporary Logical Link Identifier (TLLI), or an International Mobile Subscriber Identity (IMSI). 75. A wireless device for load balancing in a wireless communications network, wherein the wireless device supports both normal coverage and extended coverage, wherein the wireless device and a radio network node are operating in the wireless communications network, and wherein the wireless device is configured to:
receive, from the radio network node, an indication informing the wireless device that a wireless device in normal coverage is to monitor one of a downlink legacy resource and a downlink extended coverage resource for information and/or is to use one of an uplink legacy resource and an uplink extended coverage resource when accessing the wireless communications network, and when the wireless device is in normal coverage, the wireless device is configured to monitor the one of the downlink legacy and extended coverage resources for information and/or access the wireless communications network using the one of the uplink legacy and extended coverage resources. 76. The wireless device of claim 75, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 77. The wireless device of claim 76, wherein the wireless device is configured to receive the indication by further being configured to receive the indication in one of: a System Information message on the BCCH and/or the EC-BCCH; a synchronization message on the SCH and/or the EC-SCH; and an access grant or paging message on the CCCH and/or the EC-CCCH. 78. The wireless device of claim 75, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 79. The wireless device of claim 77, wherein the indication is based on one or more device identities, and wherein the wireless device is configured to monitor the one of the downlink legacy and extended coverage resources for information by further being configured to:
monitor the one of the downlink legacy and extended coverage resources when a device identity of the wireless device corresponds to the one or more device identities, and/or wherein the wireless device is configured to access the wireless communications network using the one of the uplink legacy and extended coverage resources by further being configured to: access the communications network using the one of the uplink legacy and extended coverage resources when the identity of the wireless device corresponds to the one or more device identities. 80. The wireless device of claim 79, wherein the one or more device identities are given by one or more of a Packet Temporary Mobile Subscriber Identity (P-TMSI), a Temporary Logical Link Identifier (TLLI), or an International Mobile Subscriber Identity (IMSI). 81. A method performed by a network node for load balancing in a wireless communications network, wherein the network node is configured to communicate with a radio network node that is configured to communicate with a wireless device supporting both normal coverage and extended coverage, wherein the network node, the radio network node and the wireless device are operating in the wireless communications network, and wherein the method comprises:
determining an indication informing the wireless device that it, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network; and transmitting the indication to the radio network node. 82. The method of claim 81, further comprising:
determining a need for load balancing between a legacy resource and an extended coverage resource, wherein the legacy and the extended coverage resources are downlink legacy and extended coverage resources or uplink legacy and extended coverage resources; and wherein the determining of the indication comprises: determining the indication based on the need for load balancing. 83. The method of claim 82, wherein the determining of the need for load balancing between the legacy resource and the extended coverage resource, further comprises:
determining a load balancing between the legacy resource and the extended coverage resource. 84. The method of claim 83, wherein the determining of the load balancing between the legacy resource and the extended coverage resource further comprises:
determining the load balancing based on a priority associated with the wireless device. 85. The method of claim 83, further comprising:
allocating a device identity for the wireless device when determining the load balancing. 86. The method of claim 81, wherein the determining of the indication further comprises:
determining the indication based on one or more device identities. 87. The method of claim 85, wherein the device identity is given by a Packet Temporary Mobile Subscriber Identity (P-TMSI); a Temporary Logical Link Identifier (TLLI); or an International Mobile Subscriber Identity (IMSI). 88. The method of claim 81, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 89. The method of claim 81, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 90. A network node for load balancing in a wireless communications network, wherein the network node is configured to communicate with a radio network node that is configured to communicate with a wireless device supporting both normal coverage and extended coverage, wherein the network node, the radio network node and the wireless device are operating in the wireless communications network, and wherein the network node is configured to:
determine an indication informing the wireless device that it, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network; and transmit the indication to the radio network node. 91. The network node of claim 90, further being configured to:
determine a need for load balancing between a legacy resource and an extended coverage resource, wherein the legacy and the extended coverage resources are downlink legacy and extended coverage resources or uplink legacy and extended coverage resources; and wherein the network node is configured to determine the indication by further being configured to: determine the indication based on the need for load balancing. 92. The network node of claim 91, wherein the network node is configured to determine the need for load balancing between the legacy resource and the extended coverage resource, by further being configured to:
determine the load balancing between the legacy resource and the extended coverage resource. 93. The network node of claim 92, wherein the network node is configured to determine the load balancing between the legacy resource and the extended coverage resource by further being configured to:
determine the load balancing based on a priority associated with the wireless device. 94. The network node of claim 92, further being configured to:
allocate a device identity for the wireless device when determining the load balancing. 95. The network node of claim 90, further being configured to:
determine the indication based on one or more device identities. 96. The network node of claim 94, wherein the device identity is given by a Packet Temporary Mobile Subscriber Identity (P-TMSI); a Temporary Logical Link Identifier (TLLI); or an International Mobile Subscriber Identity (IMSI). 97. The network node of claim 90, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 98. The network node of claim 90, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). | A Radio Network Node, RNN, ( 106 ) and a method therein for load balancing in a wireless communications network ( 100 ). The RNN is configured to communicate with a wireless device ( 108 ) supporting both normal coverage and extended coverage. The RNN and the wireless device are operating in the wireless communications network.
The RNN determines an indication informing the wireless device that it, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network.
Further, the RNN transmits the indication to the wireless device. 1-50. (canceled) 51. A method performed by a radio network node for load balancing in a wireless communications network, wherein the radio network node is configured to communicate with a wireless device supporting both normal coverage and extended coverage, wherein the radio network node and the wireless device are operating in the wireless communications network, and wherein the method comprises:
determining an indication informing the wireless device that the wireless device, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network; and transmitting the indication to the wireless device. 52. The method of claim 51, further comprising:
determining a need for load balancing between a legacy resource and an extended coverage resource, wherein the legacy and the extended coverage resources are downlink legacy and extended coverage resources or uplink legacy and extended coverage resources; and wherein the determining of the indication comprises: determining the indication based on the need for load balancing. 53. The method of claim 52, wherein the determining of the need for load balancing between the legacy resource and the extended coverage resource, further comprises:
determining a load balancing between the legacy resource and the extended coverage resource based on one or more device identities. 54. The method of claim 51, wherein the determining of the indication further comprises:
determining the indication based on one or more device identities. 55. The method of claim 53, wherein the one or more device identities are given by one or more of a Packet Temporary Mobile Subscriber Identity (P-TMSI), a Temporary Logical Link Identifier (TLLI), or an International Mobile Subscriber Identity (IMSI). 56. The method of claim 52, wherein the determining of the need for load balancing between the legacy resource and the extended coverage resource, further comprises:
determining a load balancing between the legacy resource and the extended coverage resource based on a priority associated with the wireless device. 57. The method of claim 51, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 58. The method of claim 57, wherein the transmitting of the indication comprises transmitting the indication in one of: a System Information message using the BCCH and/or the EC-BCCH; a synchronization message using the SCH and/or the EC-SCH; and an access grant or paging message using the CCCH and/or the EC-CCCH. 59. The method of claim 51, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 60. A radio network node for load balancing in a wireless communications network, wherein the radio network node is configured to communicate with a wireless device supporting both normal coverage and extended coverage, wherein the radio network node and the wireless device are operating in the wireless communications network, and wherein the radio network node is configured to:
determine an indication informing the wireless device that it, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network; and transmit the indication to the wireless device. 61. The radio network node of claim 60, further being configured to:
determine the indication based on one or more device identities. 62. The radio network node of claim 60, further being configured to:
determine a need for load balancing between a legacy resource and an extended coverage resource, wherein the legacy and the extended coverage resources are downlink legacy and extended coverage resources or uplink legacy and extended coverage resources; and wherein the radio network node is configured to determine the indication by further being configured to: determine the indication based on the need for load balancing. 63. The radio network node of claim 62, wherein the radio network node is configured to determine the need for load balancing between the legacy resource and the extended coverage resource, by further being configured to:
determine a load balancing between the legacy resource and the extended coverage resource based on one or more device identities. 64. The radio network node of claim 61, wherein the one or more device identities are given by one or more of a Packet Temporary Mobile Subscriber Identity (P-TMSI), a Temporary Logical Link Identifier (TLLI), or an International Mobile Subscriber Identity (IMSI). 65. The radio network node of claim 62, wherein the radio network node is configured to determine the need for load balancing between the legacy resource and the extended coverage resource, by further being configured to:
determine a load balancing between the legacy resource and the extended coverage resource based on a priority associated with the wireless device. 66. The radio network node of claim 61, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 67. The radio network node of claim 66, wherein the radio network node is configured to transmit the indication by further being configured to transmit the indication in one of: a System Information message using the BCCH and/or the EC-BCCH; a synchronization message using the SCH and/or the EC-SCH; and an access grant or paging message using the CCCH and/or the EC-CCCH. 68. The radio network node of claim 60, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 69. A method performed by a wireless device for load balancing in a wireless communications network, wherein the wireless device supports both normal coverage and extended coverage, wherein the wireless device and a radio network node are operating in the wireless communications network, and wherein the method comprises:
receiving, from the radio network node, an indication informing the wireless device that a wireless device in normal coverage is to monitor one of a downlink legacy resource and a downlink extended coverage resource for information and/or is to use one of an uplink legacy resource and an uplink extended coverage resource when accessing the wireless communications network, and when the wireless device is in normal coverage, monitoring the one of the downlink legacy and extended coverage resources for information and/or accessing the wireless communications network using the one of the uplink legacy and extended coverage resources. 70. The method of claim 69, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 71. The method of claim 70, wherein the receiving of the indication comprises receiving the indication in one of: a System Information message on the BCCH and/or the EC-BCCH; a synchronization message on the SCH and/or the EC-SCH; and an access grant or paging message on the CCCH and/or the EC-CCCH. 72. The method of claim 69, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 73. The method of claim 69, wherein the indication is based on one or more device identities, and wherein the monitoring of the one of the downlink legacy and extended coverage resources for information further comprises:
monitoring the one of the downlink legacy and extended coverage resources when a device identity of the wireless device corresponds to the one or more device identities, and/or wherein the accessing of the wireless communications network using the one of the uplink legacy and extended coverage resources further comprises: accessing the communications network using the one of the uplink legacy and extended coverage resources when the identity of the wireless device corresponds to the one or more device identities. 74. The method of claim 73, wherein the one or more device identities are given by one or more of a Packet Temporary Mobile Subscriber Identity (P-TMSI), a Temporary Logical Link Identifier (TLLI), or an International Mobile Subscriber Identity (IMSI). 75. A wireless device for load balancing in a wireless communications network, wherein the wireless device supports both normal coverage and extended coverage, wherein the wireless device and a radio network node are operating in the wireless communications network, and wherein the wireless device is configured to:
receive, from the radio network node, an indication informing the wireless device that a wireless device in normal coverage is to monitor one of a downlink legacy resource and a downlink extended coverage resource for information and/or is to use one of an uplink legacy resource and an uplink extended coverage resource when accessing the wireless communications network, and when the wireless device is in normal coverage, the wireless device is configured to monitor the one of the downlink legacy and extended coverage resources for information and/or access the wireless communications network using the one of the uplink legacy and extended coverage resources. 76. The wireless device of claim 75, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 77. The wireless device of claim 76, wherein the wireless device is configured to receive the indication by further being configured to receive the indication in one of: a System Information message on the BCCH and/or the EC-BCCH; a synchronization message on the SCH and/or the EC-SCH; and an access grant or paging message on the CCCH and/or the EC-CCCH. 78. The wireless device of claim 75, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 79. The wireless device of claim 77, wherein the indication is based on one or more device identities, and wherein the wireless device is configured to monitor the one of the downlink legacy and extended coverage resources for information by further being configured to:
monitor the one of the downlink legacy and extended coverage resources when a device identity of the wireless device corresponds to the one or more device identities, and/or wherein the wireless device is configured to access the wireless communications network using the one of the uplink legacy and extended coverage resources by further being configured to: access the communications network using the one of the uplink legacy and extended coverage resources when the identity of the wireless device corresponds to the one or more device identities. 80. The wireless device of claim 79, wherein the one or more device identities are given by one or more of a Packet Temporary Mobile Subscriber Identity (P-TMSI), a Temporary Logical Link Identifier (TLLI), or an International Mobile Subscriber Identity (IMSI). 81. A method performed by a network node for load balancing in a wireless communications network, wherein the network node is configured to communicate with a radio network node that is configured to communicate with a wireless device supporting both normal coverage and extended coverage, wherein the network node, the radio network node and the wireless device are operating in the wireless communications network, and wherein the method comprises:
determining an indication informing the wireless device that it, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network; and transmitting the indication to the radio network node. 82. The method of claim 81, further comprising:
determining a need for load balancing between a legacy resource and an extended coverage resource, wherein the legacy and the extended coverage resources are downlink legacy and extended coverage resources or uplink legacy and extended coverage resources; and wherein the determining of the indication comprises: determining the indication based on the need for load balancing. 83. The method of claim 82, wherein the determining of the need for load balancing between the legacy resource and the extended coverage resource, further comprises:
determining a load balancing between the legacy resource and the extended coverage resource. 84. The method of claim 83, wherein the determining of the load balancing between the legacy resource and the extended coverage resource further comprises:
determining the load balancing based on a priority associated with the wireless device. 85. The method of claim 83, further comprising:
allocating a device identity for the wireless device when determining the load balancing. 86. The method of claim 81, wherein the determining of the indication further comprises:
determining the indication based on one or more device identities. 87. The method of claim 85, wherein the device identity is given by a Packet Temporary Mobile Subscriber Identity (P-TMSI); a Temporary Logical Link Identifier (TLLI); or an International Mobile Subscriber Identity (IMSI). 88. The method of claim 81, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 89. The method of claim 81, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). 90. A network node for load balancing in a wireless communications network, wherein the network node is configured to communicate with a radio network node that is configured to communicate with a wireless device supporting both normal coverage and extended coverage, wherein the network node, the radio network node and the wireless device are operating in the wireless communications network, and wherein the network node is configured to:
determine an indication informing the wireless device that it, when in normal coverage, is to monitor one of a downlink legacy resource and an extended coverage resource for information and/or is to use one of an uplink legacy resource and an extended coverage resource when accessing the wireless communications network; and transmit the indication to the radio network node. 91. The network node of claim 90, further being configured to:
determine a need for load balancing between a legacy resource and an extended coverage resource, wherein the legacy and the extended coverage resources are downlink legacy and extended coverage resources or uplink legacy and extended coverage resources; and wherein the network node is configured to determine the indication by further being configured to: determine the indication based on the need for load balancing. 92. The network node of claim 91, wherein the network node is configured to determine the need for load balancing between the legacy resource and the extended coverage resource, by further being configured to:
determine the load balancing between the legacy resource and the extended coverage resource. 93. The network node of claim 92, wherein the network node is configured to determine the load balancing between the legacy resource and the extended coverage resource by further being configured to:
determine the load balancing based on a priority associated with the wireless device. 94. The network node of claim 92, further being configured to:
allocate a device identity for the wireless device when determining the load balancing. 95. The network node of claim 90, further being configured to:
determine the indication based on one or more device identities. 96. The network node of claim 94, wherein the device identity is given by a Packet Temporary Mobile Subscriber Identity (P-TMSI); a Temporary Logical Link Identifier (TLLI); or an International Mobile Subscriber Identity (IMSI). 97. The network node of claim 90, wherein the downlink legacy resource is a Synchronization Channel (SCH), a Broadcast Channel (BCCH), or a Common Control Channel (CCCH), and wherein the downlink extended coverage resource is an Extended Coverage SCH (EC-SCH), an Extended Coverage BCCH (EC-BCCH), or an Extended Coverage CCCH (EC-CCCH). 98. The network node of claim 90, wherein the uplink legacy resource is a Common Control Channel (CCCH), and wherein the uplink extended coverage resource is an Extended Coverage CCCH (EC-CCCH). | 2,400 |
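The claims above describe a selection rule: a device in normal coverage monitors either legacy channels (SCH/BCCH/CCCH) or extended-coverage channels (EC-SCH/EC-BCCH/EC-CCCH) depending on a network indication, optionally gated on whether the device's identity (P-TMSI, TLLI, or IMSI) matches the indicated identities. The following is a minimal illustrative sketch of that decision logic, not the claimed implementation; the names `Indication` and `select_resources` are hypothetical.

```python
# Hypothetical sketch of the claimed channel-selection rule for load
# balancing between legacy and extended-coverage (EC) resources.
from dataclasses import dataclass, field

LEGACY = {"downlink": ["SCH", "BCCH", "CCCH"], "uplink": ["CCCH"]}
EXTENDED = {"downlink": ["EC-SCH", "EC-BCCH", "EC-CCCH"], "uplink": ["EC-CCCH"]}

@dataclass
class Indication:
    # True: normal-coverage devices are directed to the EC resources.
    use_extended: bool
    # Optional identity filter (e.g. TLLI/P-TMSI/IMSI values); empty means
    # the indication applies to all normal-coverage devices.
    device_identities: set = field(default_factory=set)

def select_resources(indication, device_identity, in_normal_coverage):
    """Return the (downlink, uplink) channel lists the device should use."""
    if not in_normal_coverage:
        # Devices in extended coverage keep using the EC resources.
        return EXTENDED["downlink"], EXTENDED["uplink"]
    # The indication applies if no identity filter is set, or if the
    # device's identity corresponds to one of the indicated identities.
    applies = (not indication.device_identities
               or device_identity in indication.device_identities)
    if applies and indication.use_extended:
        return EXTENDED["downlink"], EXTENDED["uplink"]
    return LEGACY["downlink"], LEGACY["uplink"]
```

Under this reading, an identity-scoped indication lets the network steer only selected devices onto the EC channels, which matches the claimed per-identity load balancing between the two resource sets.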
7,557 | 7,557 | 12,895,321 | 2,422 | An apparatus includes a display, a communication interface, circuitry configured to receive information via the communication interface via a handshake process where the information includes identifying information for a television unit, and circuitry configured to render a graphical remote control to the display based at least in part on received identifying information for a television unit. Various other apparatuses, systems, methods, etc., are also disclosed. | 1. An apparatus comprising:
a display; a wireless communication interface; circuitry configured to receive information via the wireless communication interface via a handshake process wherein the information comprises identifying information for a television unit; and circuitry configured to render a graphical remote control to the display based at least in part on received identifying information for a television unit. 2. The apparatus of claim 1 wherein the identifying information comprises identifying information for a television unit that comprises a set-top box. 3. The apparatus of claim 1 wherein the display comprises a touch-sensitive display and further comprising circuitry configured to receive input via the touch-sensitive display, the input corresponding to one or more graphical buttons of the graphical remote control. 4. The apparatus of claim 1 further comprising circuitry configured to render instructional information to the display wherein the instructional information comprises instructions for operation of the graphical remote control. 5. The apparatus of claim 4 wherein the instructional information comprises instructions for operation of a physical remote control associated with the identifying information for a television unit. 6. The apparatus of claim 1 wherein the display comprises a touch-sensitive display, further comprising circuitry configured to render thumbnails of video content to the display, further comprising circuitry configured to receive input corresponding to one or more of the thumbnails, and further comprising circuitry configured to transmit information responsive to receipt of input via the touch-sensitive display, the input corresponding to one or more of the thumbnails. 7. The apparatus of claim 6 wherein at least some of the thumbnails comprise video thumbnails of video content currently available via one or more broadcast networks. 8. 
The apparatus of claim 6 wherein the information transmitted responsive to receipt of input via the touch-sensitive display comprises information to instruct a television unit to render video content associated with one or more of the thumbnails. 9. The apparatus of claim 8 wherein the information to instruct comprises information to instruct a television unit to render simultaneously video content associated with two or more of the thumbnails. 10. The apparatus of claim 1 wherein the circuitry configured to render renders an arrangement of at least some features of the graphical remote control wherein the arrangement corresponds to an arrangement of features of a physical remote control associated with the identifying information. 11. The apparatus of claim 1 further comprising circuitry configured to enable an interrupt mode, the interrupt mode configured to transmit at least one instruction to a television unit responsive to receipt of a phone call or an email by the apparatus. 12. A method comprising:
receiving identifying information for a television unit; associating the identifying information with a remote control; rendering a graphical representation of the remote control to a touch-sensitive display; receiving input via the touch-sensitive display; and transmitting information according to a television unit-implementable communication protocol wherein the information comprises information to instruct a television unit to receive video content from a broadcast network. 13. The method of claim 12 wherein the receiving receives information according to a Bluetooth® communication protocol. 14. The method of claim 12 wherein the associating comprises transmitting at least some of the identifying information via an IP network interface and, responsive to the transmitting, receiving information via the IP network interface. 15. The method of claim 12 wherein the receiving input via the touch-sensitive display comprises receiving input from one or more rendered graphical buttons which correspond to one or more physical buttons of the represented remote control. 16. The method of claim 12 further comprising rendering thumbnails of video content to the touch-sensitive display. 17. The method of claim 12 wherein the identifying information comprises identifying information for a television unit that comprises a set-top box. 18. One or more computer-readable media comprising processor-executable instructions to instruct a computing device to:
receive identifying information for a television unit; associate the identifying information with a remote control; render a graphical representation of the remote control to a touch-sensitive display; receive input via the touch-sensitive display; and transmit information according to a television unit-implementable communication protocol wherein the information comprises information to instruct a television unit to receive video content from a broadcast network. 19. The one or more computer-readable media of claim 18 comprising instructions to receive a keyword, to associate the keyword with a feature of a television unit and to render to the display information describing the feature. 20. The one or more computer-readable media of claim 18 comprising instructions to receive and render thumbnails of video content to the touch-sensitive display. | An apparatus includes a display, a communication interface, circuitry configured to receive information via the communication interface via a handshake process where the information includes identifying information for a television unit, and circuitry configured to render a graphical remote control to the display based at least in part on received identifying information for a television unit. Various other apparatuses, systems, methods, etc., are also disclosed.1. An apparatus comprising:
a display; a wireless communication interface; circuitry configured to receive information via the wireless communication interface via a handshake process wherein the information comprises identifying information for a television unit; and circuitry configured to render a graphical remote control to the display based at least in part on received identifying information for a television unit. 2. The apparatus of claim 1 wherein the identifying information comprises identifying information for a television unit that comprises a set-top box. 3. The apparatus of claim 1 wherein the display comprises a touch-sensitive display and further comprising circuitry configured to receive input via the touch-sensitive display, the input corresponding to one or more graphical buttons of the graphical remote control. 4. The apparatus of claim 1 further comprising circuitry configured to render instructional information to the display wherein the instructional information comprises instructions for operation of the graphical remote control. 5. The apparatus of claim 4 wherein the instructional information comprises instructions for operation of a physical remote control associated with the identifying information for a television unit. 6. The apparatus of claim 1 wherein the display comprises a touch-sensitive display, further comprising circuitry configured to render thumbnails of video content to the display, further comprising circuitry configured to receive input corresponding to one or more of the thumbnails, and further comprising circuitry configured to transmit information responsive to receipt of input via the touch-sensitive display, the input corresponding to one or more of the thumbnails. 7. The apparatus of claim 6 wherein at least some of the thumbnails comprise video thumbnails of video content currently available via one or more broadcast networks. 8. 
The apparatus of claim 6 wherein the information transmitted responsive to receipt of input via the touch-sensitive display comprises information to instruct a television unit to render video content associated with one or more of the thumbnails. 9. The apparatus of claim 8 wherein the information to instruct comprises information to instruct a television unit to render simultaneously video content associated with two or more of the thumbnails. 10. The apparatus of claim 1 wherein the circuitry configured to render renders an arrangement of at least some features of the graphical remote control wherein the arrangement corresponds to an arrangement of features of a physical remote control associated with the identifying information. 11. The apparatus of claim 1 further comprising circuitry configured to enable an interrupt mode, the interrupt mode configured to transmit at least one instruction to a television unit responsive to receipt of a phone call or an email by the apparatus. 12. A method comprising:
receiving identifying information for a television unit; associating the identifying information with a remote control; rendering a graphical representation of the remote control to a touch-sensitive display; receiving input via the touch-sensitive display; and transmitting information according to a television unit-implementable communication protocol wherein the information comprises information to instruct a television unit to receive video content from a broadcast network. 13. The method of claim 12 wherein the receiving receives information according to a Bluetooth® communication protocol. 14. The method of claim 12 wherein the associating comprises transmitting at least some of the identifying information via an IP network interface and, responsive to the transmitting, receiving information via the IP network interface. 15. The method of claim 12 wherein the receiving input via the touch-sensitive display comprises receiving input from one or more rendered graphical buttons which correspond to one or more physical buttons of the represented remote control. 16. The method of claim 12 further comprising rendering thumbnails of video content to the touch-sensitive display. 17. The method of claim 12 wherein the identifying information comprises identifying information for a television unit that comprises a set-top box. 18. One or more computer-readable media comprising processor-executable instructions to instruct a computing device to:
receive identifying information for a television unit; associate the identifying information with a remote control; render a graphical representation of the remote control to a touch-sensitive display; receive input via the touch-sensitive display; and transmit information according to a television unit-implementable communication protocol wherein the information comprises information to instruct a television unit to receive video content from a broadcast network. 19. The one or more computer-readable media of claim 18 comprising instructions to receive a keyword, to associate the keyword with a feature of a television unit and to render to the display information describing the feature. 20. The one or more computer-readable media of claim 18 comprising instructions to receive and render thumbnails of video content to the touch-sensitive display. | 2,400 |
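Claims 12 and 18 of the record above describe the same flow in method and media form: receive identifying information for a television unit, associate it with a remote control, render the remote graphically, and translate touch input into protocol messages instructing the unit. A minimal, hypothetical Python sketch of that flow (the layout table, device id, button names, and the dict-based "protocol" message are all illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch of the claimed flow: a handshake supplies a TV unit's
# identifying information, which is associated with a remote-control layout;
# touch input on rendered buttons becomes protocol messages. All names are
# illustrative; the patent does not specify an implementation.

# Illustrative layout database: device model -> buttons the graphical remote
# should render (mirroring the physical remote's buttons, per claim 15).
REMOTE_LAYOUTS = {
    "STB-1000": ["power", "channel_up", "channel_down", "guide"],
}

class GraphicalRemote:
    def __init__(self):
        self.device_id = None
        self.buttons = []

    def handshake(self, identifying_info: str) -> None:
        """Receive identifying information for a television unit and
        associate it with a remote-control layout."""
        self.device_id = identifying_info
        self.buttons = REMOTE_LAYOUTS.get(identifying_info, ["power"])

    def render(self) -> list:
        """Stand-in for rendering graphical buttons to a touch display."""
        return ["[%s]" % name for name in self.buttons]

    def press(self, button: str) -> dict:
        """Translate touch input on a rendered button into a message in a
        TV-unit-implementable protocol (here, just a plain dict)."""
        if button not in self.buttons:
            raise ValueError("unknown button: %s" % button)
        return {"target": self.device_id, "command": button}

remote = GraphicalRemote()
remote.handshake("STB-1000")
msg = remote.press("channel_up")
```

The association step here is a local table lookup; claim 14 instead contemplates resolving the identifying information over an IP network interface, which would replace the `REMOTE_LAYOUTS` lookup with a request/response exchange.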
7,558 | 7,558 | 14,362,113 | 2,488 | A method, device, and system are provided for placing a port ( 12, 22, 32 ) for a surgical tool ( 20, 30 ) relative to real-time anatomical data. The method comprises: placing an endoscope ( 10 ) in a standard port ( 12 ); determining real-time anatomical data from an image from the endoscope; using a port localization apparatus ( 210 ) to identify an optimal location for an instrument port relative to the image from the endoscope; and creating an instrument port at the identified location. | 1. A method for placing a port for a surgical tool relative to real-time anatomical data, comprising the steps of:
placing an endoscope in a standard port; determining real-time anatomical data from an image from the endoscope; using a port localization apparatus to identify an optimal location for an instrument port relative to the image from the endoscope; and creating an instrument port at the identified location. 2. The method of claim 1, wherein the port localization apparatus is affixed to the endoscope at a predetermined anchor point, and wherein the step of using a port localization apparatus to identify an optimal location for an instrument port comprises the steps of:
locating a potential port location; determining a projection of an instrument through the potential port location onto the plane of the endoscope image; overlaying a representation of the instrument onto the endoscope image corresponding to the potential port location; and receiving an indication of whether or not the potential port location is an optimal port location. 3. (canceled) 4. (canceled) 5. The method of claim 2, further comprising the step of:
manipulating a positioning and orientation apparatus to project the projection of the instrument onto the endoscope image, the positioning and orientation apparatus capturing the angles of projection and determining the location of the projection on the endoscope image corresponding to the captured angles. 6. (canceled) 7. (canceled) 8. (canceled) 9. A device for locating a port for a surgical tool relative to real-time anatomical data from an endoscope, comprising:
an endoscope; and a port localization apparatus affixed to the endoscope at a predetermined anchor point, characterized in that the port localization apparatus defines a location for a port at a known spatial relationship to the endoscope. 10. The device of claim 9, further comprising a processor, the processor:
determining the port location in an image space of the endoscope; determining a projection of an instrument through the port location onto an image from the endoscope; and overlaying a representation of the instrument onto the endoscope image corresponding to the potential port location. 11. The device of claim 10, further comprising a positioning and orientation apparatus operably connected to the port localization apparatus, the positioning and orientation apparatus being adapted to be manipulated to provide angles of projection at the port location for a projection of an instrument onto the endoscope image, the positioning and orientation apparatus capturing the angles of projection and determining the location of the projection on the endoscope image. 12. The device of claim 11, wherein the port localization apparatus is a shape sensing tether and the positioning and orientation apparatus is a stylus. 13. The device of claim 10, wherein the port localization apparatus is at least one rigid member and the positioning and orientation apparatus is at least one joint connected to the at least one rigid member and having an encoder measuring an angle of the joint. 14. A system for locating a port for a surgical tool relative to an endoscope, comprising:
a processor; a memory operably connected with the processor; an endoscope providing an endoscope image; a port localization apparatus affixed to the endoscope at a predetermined anchor point and locating a port at a known location relative to the endoscope; and a program of instruction encoded on the memory and executed by the processor; characterized in that the port localization apparatus defines a port at a known spatial relationship relative to the endoscope; and the program of instruction encoded on the memory and executed by the processor determines the location of the port. 15. The system of claim 14, wherein the program of instruction executed by the processor:
determines the port location in an image space of the endoscope; determines a projection of an instrument through the port location onto an image from the endoscope; and overlays a representation of the instrument onto the endoscope image corresponding to the potential port location. 16. The system of claim 15, further comprising a positioning and orientation apparatus operably connected to the port localization apparatus, the positioning and orientation apparatus being adapted to be manipulated to provide angles of projection at the port location for a projection of an instrument onto the endoscope image, the positioning and orientation apparatus capturing the angles of projection and determining the location of the projection on the endoscope image. 17. The system of claim 14, wherein the port localization apparatus is a shape sensing tether and the positioning and orientation apparatus is a stylus. 18. The system of claim 14, wherein the port localization apparatus is at least one rigid member and the positioning and orientation apparatus is at least one joint connected to the at least one rigid member and having an encoder measuring an angle of the joint. 19. A computer program product comprising a computer-readable storage device having encoded thereon a program of instruction executable by a computer processor to place a port for a surgical tool relative to real-time anatomical data, the program of instruction comprising:
program instructions for determining real-time anatomical data from an image from an endoscope; characterized in that the program of instructions further comprises program instructions for using a port localization apparatus to identify an optimal location for an instrument port relative to the image from the endoscope. 20. The computer program product of claim 19, further comprising:
program instructions for locating a potential port location; program instructions for determining a projection of an instrument through the potential port location onto the plane of the endoscope image; program instructions for overlaying a representation of the instrument onto the endoscope image corresponding to the potential port location; and program instructions for receiving an indication of whether or not the potential port location is an optimal port location; wherein the program instructions are encoded on the computer-readable storage device. | A method, device, and system are provided for placing a port ( 12, 22, 32 ) for a surgical tool ( 20, 30 ) relative to real-time anatomical data. The method comprises: placing an endoscope ( 10 ) in a standard port ( 12 ); determining real-time anatomical data from an image from the endoscope; using a port localization apparatus ( 210 ) to identify an optimal location for an instrument port relative to the image from the endoscope; and creating an instrument port at the identified location. 1. A method for placing a port for a surgical tool relative to real-time anatomical data, comprising the steps of:
placing an endoscope in a standard port; determining real-time anatomical data from an image from the endoscope; using a port localization apparatus to identify an optimal location for an instrument port relative to the image from the endoscope; and creating an instrument port at the identified location. 2. The method of claim 1, wherein the port localization apparatus is affixed to the endoscope at a predetermined anchor point, and wherein the step of using a port localization apparatus to identify an optimal location for an instrument port comprises the steps of:
locating a potential port location; determining a projection of an instrument through the potential port location onto the plane of the endoscope image; overlaying a representation of the instrument onto the endoscope image corresponding to the potential port location; and receiving an indication of whether or not the potential port location is an optimal port location. 3. (canceled) 4. (canceled) 5. The method of claim 2, further comprising the step of:
manipulating a positioning and orientation apparatus to project the projection of the instrument onto the endoscope image, the positioning and orientation apparatus capturing the angles of projection and determining the location of the projection on the endoscope image corresponding to the captured angles. 6. (canceled) 7. (canceled) 8. (canceled) 9. A device for locating a port for a surgical tool relative to real-time anatomical data from an endoscope, comprising:
an endoscope; and a port localization apparatus affixed to the endoscope at a predetermined anchor point, characterized in that the port localization apparatus defines a location for a port at a known spatial relationship to the endoscope. 10. The device of claim 9, further comprising a processor, the processor:
determining the port location in an image space of the endoscope; determining a projection of an instrument through the port location onto an image from the endoscope; and overlaying a representation of the instrument onto the endoscope image corresponding to the potential port location. 11. The device of claim 10, further comprising a positioning and orientation apparatus operably connected to the port localization apparatus, the positioning and orientation apparatus being adapted to be manipulated to provide angles of projection at the port location for a projection of an instrument onto the endoscope image, the positioning and orientation apparatus capturing the angles of projection and determining the location of the projection on the endoscope image. 12. The device of claim 11, wherein the port localization apparatus is a shape sensing tether and the positioning and orientation apparatus is a stylus. 13. The device of claim 10, wherein the port localization apparatus is at least one rigid member and the positioning and orientation apparatus is at least one joint connected to the at least one rigid member and having an encoder measuring an angle of the joint. 14. A system for locating a port for a surgical tool relative to an endoscope, comprising:
a processor; a memory operably connected with the processor; an endoscope providing an endoscope image; a port localization apparatus affixed to the endoscope at a predetermined anchor point and locating a port at a known location relative to the endoscope; and a program of instruction encoded on the memory and executed by the processor; characterized in that the port localization apparatus defines a port at a known spatial relationship relative to the endoscope; and the program of instruction encoded on the memory and executed by the processor determines the location of the port. 15. The system of claim 14, wherein the program of instruction executed by the processor:
determines the port location in an image space of the endoscope; determines a projection of an instrument through the port location onto an image from the endoscope; and overlays a representation of the instrument onto the endoscope image corresponding to the potential port location. 16. The system of claim 15, further comprising a positioning and orientation apparatus operably connected to the port localization apparatus, the positioning and orientation apparatus being adapted to be manipulated to provide angles of projection at the port location for a projection of an instrument onto the endoscope image, the positioning and orientation apparatus capturing the angles of projection and determining the location of the projection on the endoscope image. 17. The system of claim 14, wherein the port localization apparatus is a shape sensing tether and the positioning and orientation apparatus is a stylus. 18. The system of claim 14, wherein the port localization apparatus is at least one rigid member and the positioning and orientation apparatus is at least one joint connected to the at least one rigid member and having an encoder measuring an angle of the joint. 19. A computer program product comprising a computer-readable storage device having encoded thereon a program of instruction executable by a computer processor to place a port for a surgical tool relative to real-time anatomical data, the program of instruction comprising:
program instructions for determining real-time anatomical data from an image from an endoscope; characterized in that the program of instructions further comprises program instructions for using a port localization apparatus to identify an optimal location for an instrument port relative to the image from the endoscope. 20. The computer program product of claim 19, further comprising:
program instructions for locating a potential port location; program instructions for determining a projection of an instrument through the potential port location onto the plane of the endoscope image; program instructions for overlaying a representation of the instrument onto the endoscope image corresponding to the potential port location; and program instructions for receiving an indication of whether or not the potential port location is an optimal port location; wherein the program instructions are encoded on the computer-readable storage device. | 2,400 |
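Claim 2 of the record above (mirrored in program-product claim 20) turns on projecting an instrument through a candidate port onto the plane of the endoscope image so the clinician can accept or reject the overlay. A hypothetical geometric sketch of that projection step, using a simple direction-vector instrument model and a pinhole camera; the focal length, angle convention, units, and all function names are assumptions, since the patent specifies the steps rather than the math:

```python
# Hypothetical sketch: project an instrument inserted at a candidate port
# location onto the endoscope image plane, producing the 2D overlay segment
# described in claim 2. Geometry and names are illustrative only.
import math

def project_instrument(port_xyz, yaw_deg, pitch_deg, length=100.0):
    """Return 3D endpoints of an instrument entering at port_xyz with the
    given angles of projection (a simple direction-vector model)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    tip = (
        port_xyz[0] + length * math.cos(pitch) * math.cos(yaw),
        port_xyz[1] + length * math.cos(pitch) * math.sin(yaw),
        port_xyz[2] - length * math.sin(pitch),
    )
    return port_xyz, tip

def to_image_plane(point_xyz, focal=50.0):
    """Pinhole projection of a camera-frame point onto the endoscope image
    plane (z is depth along the scope's optical axis)."""
    x, y, z = point_xyz
    return (focal * x / z, focal * y / z)

def overlay_segment(port_xyz, yaw_deg, pitch_deg):
    """The 2D line segment representing the instrument on the endoscope
    image for this candidate port location and these captured angles."""
    base, tip = project_instrument(port_xyz, yaw_deg, pitch_deg)
    return to_image_plane(base), to_image_plane(tip)

seg = overlay_segment((10.0, 0.0, 200.0), yaw_deg=0.0, pitch_deg=0.0)
```

In the claimed workflow the angles would come from the positioning and orientation apparatus (a stylus on a shape-sensing tether, or encoder-instrumented joints), and the returned segment would be drawn over the live endoscope image for the user to accept or reject.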
7,559 | 7,559 | 14,217,753 | 2,474 | The present invention discloses a method and an electronic device for processing information. The method for processing information is applied in a first electronic device, wherein there is a first correspondence relation between the first electronic device and N second electronic devices, where N is an integer greater than or equal to 1. The method comprises: detecting to acquire a first operation for the first electronic device; judging whether the first operation meets a first preset condition; and generating, by the first electronic device, a first prompt effect for the first operation and controlling each of the N second electronic devices to generate the first prompt effect when the first operation meets the first preset condition. The above solution achieves a technical effect that the correspondence relation among multiple electronic devices can be determined more conveniently. A prompting method and an electronic device are also provided to accurately prompt the data transmission process. | 1. A method for processing information for use in a first electronic device, wherein there is a first correspondence relation between the first electronic device and N second electronic devices, where N is an integer greater than or equal to 1, the method comprising:
detecting to acquire a first operation for the first electronic device; judging whether the first operation meets a first preset condition; and generating, by the first electronic device, a first prompt effect for the first operation and controlling each of the N second electronic devices to generate the first prompt effect when the first operation meets the first preset condition. 2. The method according to claim 1, wherein the judging whether the first operation meets a first preset condition comprises:
judging whether the first operation is an operation of data transmission. 3. The method according to claim 2, wherein the controlling each of the N second electronic devices to generate the first prompt effect comprises at least one of:
transmitting first connection requests to the N second electronic devices such that the N second electronic devices generate the first prompt effect based on the first connection requests in case that there are data connections between the N second electronic devices and the first electronic device; or transmitting a first connection request to a server such that the N second electronic devices are controlled by the server to generate the first prompt effect in case that the first electronic device is connected to the N second electronic devices via the server; or broadcasting a first connection request in the network system in which the first electronic device resides such that the N second electronic devices generate the first prompt effect after they have received the first connection request. 4. The method according to claim 2, wherein after the judging whether the first operation meets a first preset condition, the method further comprises:
establishing data transmission channels between the first electronic device and L second electronic devices of the N second electronic devices when the first operation meets the first preset condition, where L is an integer less than or equal to N. 5. The method according to claim 4, wherein the establishing data transmission channels between the first electronic device and the L second electronic devices of the N second electronic devices comprises:
receiving K pieces of feedback information sent from K second electronic devices of the N second electronic devices, wherein the feedback information represents consent to establish the data connection with the first electronic device, and K is an integer greater than or equal to L and less than or equal to N; and establishing the data transmission channels between the first electronic device and the L second electronic devices based on the K pieces of feedback information. 6. The method according to claim 5, wherein the establishing the data transmission channels between the first electronic device and the L second electronic devices based on the K pieces of feedback information comprises:
displaying K pieces of identification information corresponding to the K pieces of feedback information on a display unit of the first electronic device; determining L pieces of identification information of the K pieces of identification information corresponding to the L second electronic devices based on a first selection operation from the user of the first electronic device; and establishing the data transmission channels between the first electronic device and the L second electronic devices based on the L pieces of identification information.
determining a first prompt effect of the P prompt effects corresponding to a first sub transmission stage at the time of T1 when the transmission between the first electronic device and the L second electronic devices is in the first sub transmission stage of the P sub transmission stages, wherein the fact that the first operation meets the first preset condition represents the transmission between the first electronic device and the L second electronic devices is in the first sub transmission stage; generating and outputting the first prompt effect; determining a second prompt effect of the P prompt effects corresponding to a second sub transmission stage at the time of T2 which is later than T1 when the transmission between the first electronic device and the L second electronic devices is in the second sub transmission stage of the P sub transmission stages, wherein the second prompt effect is a prompt effect different from the first prompt effect; and generating and outputting the second prompt effect. 8. The method according to claim 7, wherein the first sub transmission stage is a stage where there are data transmission channels established between the first electronic device and the L second electronic devices, and the first prompt effect is a prompt effect which prompts with light of a first preset intensity; or
the first sub transmission stage is a stage where there is a first correspondence relation between the first electronic device and the L second electronic devices but no data connection is established, and the first prompt effect is a prompt effect which prompts with light of a second preset intensity; or the second sub transmission stage is a stage where the first electronic device and the L second electronic devices perform data transmission therebetween, and the second prompt effect is a prompt effect which prompts by emitting light at a preset frequency; or the second sub transmission stage is a stage where there are the data transmission channels established between the first electronic device and the L second electronic devices but no data transmission is performed, and the second prompt effect is a prompt effect which prompts with light of a third preset intensity. 9. The method according to claim 4, wherein after the establishing the data transmission channels between the first electronic device and the L second electronic devices of the N second electronic devices, the method further comprises:
acquiring first data from the L second electronic devices. 10. The method according to claim 9, wherein the acquiring first data from the L second electronic devices comprises:
acquiring the first data from preset directories of the L second electronic devices; or acquiring the first data based on a second selection operation of the user. 11. The method according to claim 4, wherein after the establishing the data transmission channels between the first electronic device and the L second electronic devices of the N second electronic devices, the method further comprises:
judging whether there is a second operation for the first electronic device which meets a second preset condition; and disconnecting the data transmission channels when there is a second operation. 12. An electronic device, wherein there is a first correspondence relation between the electronic device and N second electronic devices, where N is an integer greater than or equal to 1, the electronic device comprising:
a detecting module configured to detect to acquire a first operation for the electronic device; a judging module configured to judge whether the first operation meets a first preset condition; and a generating module configured to generate, by the electronic device, a first prompt effect for the first operation and to control each of the N second electronic devices to generate the first prompt effect when the first operation meets the first preset condition.
judge whether the first operation is an operation of data transmission. 14. The electronic device according to claim 13, wherein, the generating module is further configured to:
transmit first connection requests to the N second electronic devices such that the N second electronic devices generate the first prompt effect based on the first connection requests in case that there are data connections between the electronic device and the N second electronic devices; or transmit a first connection request to a server such that the N second electronic devices are controlled by the server to generate the first prompt effect in case that the electronic device is connected to the N second electronic devices via the server; or broadcast a first connection request in the network system in which the electronic device resides such that the N second electronic devices generate the first prompt effect after they have received the first connection requests.
an establishing module configured to, after it is judged whether the first operation meets a first preset condition, establish data transmission channels between the electronic device and L second electronic devices of the N second electronic devices when the first operation meets the first preset condition, where L is an integer less than or equal to N. 16. The electronic device according to claim 15, wherein the establishing module comprises:
a receiving unit configured to receive K pieces of feedback information sent from K second electronic devices of the N second electronic devices, wherein the feedback information represents consent to establish the data connection with the electronic device, and K is an integer greater than or equal to L and less than or equal to N; and an establishing unit configured to establish the data transmission channels between the electronic device and the L second electronic devices based on the K pieces of feedback information. 17. The electronic device according to claim 16, wherein the establishing unit further comprises:
a displaying sub-unit configured to display K pieces of identification information corresponding to the K pieces of feedback information on a display unit of the electronic device; a determining sub-unit configured to determine L pieces of identification information of the K pieces of identification information corresponding to the L second electronic devices based on a first selection operation from the user of the electronic device; and an establishing sub-unit configured to establish the data transmission channels between the electronic device and the L second electronic devices based on the L pieces of identification information.
a first determining unit configured to determine a first prompt effect of the P prompt effects corresponding to a first sub transmission stage at the time of T1 when the transmission between the electronic device and the L second electronic devices is in the first sub transmission stage of the P sub transmission stages, where the fact that the first operation meets the first preset condition represents the transmission between the electronic device and the L second electronic devices is in the first sub transmission stage; a first generating unit configured to generate and output the first prompt effect; a second determining unit configured to determine a second prompt effect of the P prompt effects corresponding to a second sub transmission stage at the time of T2 which is later than T1 when the transmission between the electronic device and the L second electronic devices is in the second sub transmission stage of the P sub transmission stages, where the second prompt effect is a prompt effect different from the first prompt effect; and a second generating unit configured to generate and output the second prompt effect. 19. The electronic device according to claim 18, wherein the first sub transmission stage is a stage where there are data transmission channels established between the electronic device and the L second electronic devices, and the first prompt effect is a prompt effect which prompts with light of a first preset intensity; or
the first sub transmission stage is a stage where there is a first correspondence relation between the electronic device and the L second electronic devices but no data connection is established, and the first prompt effect is a prompt effect which prompts with light of a second preset intensity; or the second sub transmission stage is a stage where the electronic device and the L second electronic devices perform data transmission therebetween, and the second prompt effect is a prompt effect which prompts by emitting light at a preset frequency; or the second sub transmission stage is a stage where there are data transmission channels established between the electronic device and the L second electronic devices but no data transmission is performed, and the second prompt effect is a prompt effect which prompts with light of a third preset intensity. 20. The electronic device according to claim 15, wherein the electronic device further comprises:
an acquiring module configured to, after the data transmission channels between the electronic device and the L second electronic devices of the N second electronic devices have been established, acquire first data from the L second electronic devices. 21. The electronic device according to claim 20, wherein the acquiring module is further configured to:
acquire the first data from preset directories of the L second electronic devices; or acquire the first data based on a second selection operation of the user. 22. The electronic device according to claim 15, wherein the electronic device further comprises:
a judging module configured to, after the data transmission channels between the electronic device and the L second electronic devices of the N second electronic devices have been established, judge whether there is a second operation for the electronic device which meets a second preset condition; and a disconnecting module configured to disconnect the data transmission channels when there is a second operation. 23. A prompting method for use in a first electronic device, wherein there is data transmission between the first electronic device and L second electronic devices, where L is an integer greater than or equal to 1, the data transmission comprises at least P sub transmission stages, and the P sub transmission stages correspond to P prompt effects in a first prompt mode in a one-to-one manner, where P is an integer greater than or equal to 2, the method comprising:
determining a first prompt effect of the P prompt effects corresponding to a first sub transmission stage at the time of T1 when the transmission between the first electronic device and the L second electronic devices is in the first sub transmission stage of the P sub transmission stages; generating and outputting the first prompt effect; determining a second prompt effect of the P prompt effects corresponding to a second sub transmission stage at the time of T2 which is later than T1 when the transmission between the first electronic device and the L second electronic devices is in the second sub transmission stage of the P sub transmission stages, wherein the second prompt effect is a prompt effect different from the first prompt effect; and generating and outputting the second prompt effect. 24. The method according to claim 23, wherein the first sub transmission stage is a stage where there are data transmission channels established between the first electronic device and the L second electronic devices, and the first prompt effect is a prompt effect which prompts with light of a first preset intensity; or
the first sub transmission stage is a stage where there is a first correspondence relation between the first electronic device and the L second electronic devices but no data connection is established, and the first prompt effect is a prompt effect which prompts with light of a second preset intensity; or the second sub transmission stage is a stage where the first electronic device and the L second electronic devices perform data transmission therebetween, and the second prompt effect is a prompt effect which prompts by emitting light at a preset frequency; or the second sub transmission stage is a stage where there are data transmission channels established between the first electronic device and the L second electronic devices but no data transmission is performed, and the second prompt effect is a prompt effect which prompts with light of a third preset intensity. 25. An electronic device, wherein there is data transmission between the electronic device and L second electronic devices, where L is an integer greater than or equal to 1, the data transmission comprises at least P sub transmission stages, and the P sub transmission stages correspond to P prompt effects in a first prompt mode in a one-to-one manner, where P is an integer greater than or equal to 2, the electronic device comprising:
a first determining module configured to determine a first prompt effect of the P prompt effects corresponding to a first sub transmission stage at the time of T1 when the transmission between the electronic device and the L second electronic devices is in the first sub transmission stage of the P sub transmission stages; a first generating module configured to generate and output the first prompt effect; a second determining module configured to determine a second prompt effect of the P prompt effects corresponding to a second sub transmission stage at the time of T2 which is later than T1 when the transmission between the first electronic device and the L second electronic devices is in the second sub transmission stage of the P sub transmission stages, wherein the second prompt effect is a prompt effect different from the first prompt effect; and a second generating module configured to generate and output the second prompt effect. 26. The electronic device according to claim 25, wherein the first sub transmission stage is a stage where there are data transmission channels established between the electronic device and the L second electronic devices, and the first prompt effect is a prompt effect which prompts with light of a first preset intensity; or
the first sub transmission stage is a stage where there is a first correspondence relation between the electronic device and the L second electronic devices but no data connection is established, and the first prompt effect is a prompt effect which prompts with light of a second preset intensity; or the second sub transmission stage is a stage where the electronic device and the L second electronic devices perform data transmission therebetween, and the second prompt effect is a prompt effect which prompts by emitting light at a preset frequency; or the second sub transmission stage is a stage where there are data transmission channels established between the electronic device and the L second electronic devices but no data transmission is performed, and the second prompt effect is a prompt effect which prompts with light of a third preset intensity. | The present invention discloses a method and an electronic device for processing information. The method for processing information is applied in a first electronic device, wherein there is a first correspondence relation between the first electronic device and N second electronic devices, where N is an integer greater than or equal to 1. The method comprises: detecting to acquire a first operation for the first electronic device; judging whether the first operation meets a first preset condition; and generating, by the first electronic device, a first prompt effect for the first operation and controlling each of the N second electronic devices to generate the first prompt effect when the first operation meets the first preset condition. The above solution achieves a technical effect that the correspondence relation among multiple electronic devices can be determined more conveniently. A prompting method and an electronic device are also provided to accurately prompt the data transmission process.1. 
A method for processing information for use in a first electronic device, wherein there is a first correspondence relation between the first electronic device and N second electronic devices, where N is an integer greater than or equal to 1, the method comprising:
detecting to acquire a first operation for the first electronic device; judging whether the first operation meets a first preset condition; and generating, by the first electronic device, a first prompt effect for the first operation and controlling each of the N second electronic devices to generate the first prompt effect when the first operation meets the first preset condition. 2. The method according to claim 1, wherein the judging whether the first operation meets a first preset condition comprises:
judging whether the first operation is an operation of data transmission. 3. The method according to claim 2, wherein the controlling each of the N second electronic devices to generate the first prompt effect comprises at least one of:
transmitting first connection requests to the N second electronic devices such that the N second electronic devices generate the first prompt effect based on the first connection requests in case that there are data connections between the N second electronic devices and the first electronic device; or transmitting a first connection request to a server such that the N second electronic devices are controlled by the server to generate the first prompt effect in case that the first electronic device is connected to the N second electronic devices via the server; or broadcasting a first connection request in the network system in which the first electronic device resides such that the N second electronic devices generate the first prompt effect after they have received the first connection request. 4. The method according to claim 2, wherein after the judging whether the first operation meets a first preset condition, the method further comprises:
establishing data transmission channels between the first electronic device and L second electronic devices of the N second electronic devices when the first operation meets the first preset condition, where L is an integer less than or equal to N. 5. The method according to claim 4, wherein the establishing data transmission channels between the first electronic device and the L second electronic devices of the N second electronic devices comprises:
receiving K pieces of feedback information sent from K second electronic devices of the N second electronic devices, wherein the feedback information represents consent to establish the data connection with the first electronic device, and K is an integer greater than or equal to L and less than or equal to N; and establishing the data transmission channels between the first electronic device and the L second electronic devices based on the K pieces of feedback information. 6. The method according to claim 5, wherein the establishing the data transmission channels between the first electronic device and the L second electronic devices based on the K pieces of feedback information comprises:
displaying K pieces of identification information corresponding to the K pieces of feedback information on a display unit of the first electronic device; determining L pieces of identification information of the K identification information corresponding to the L second electronic devices based on a first selection operation from the user of the first electronic device; and establishing the data transmission channels between the first electronic device and the L second electronic devices based on the L pieces of identification information. 7. The method according to claim 4, wherein the data transmission comprises at least P sub transmission stages and the P sub transmission stages correspond to P prompt effects in a first prompt mode in a one-to-one manner, wherein P is an integer greater than or equal to 2, and wherein the establishing the data transmission channels between the first electronic device and the L second electronic devices of the N second electronic devices comprises:
determining a first prompt effect of the P prompt effects corresponding to a first sub transmission stage at the time of T1 when the transmission between the first electronic device and the L second electronic devices is in the first sub transmission stage of the P sub transmission stages, wherein the fact that the first operation meets the first preset condition represents the transmission between the first electronic device and the L second electronic devices is in the first sub transmission stage; generating and outputting the first prompt effect; determining a second prompt effect of the P prompt effects corresponding to a second sub transmission stage at the time of T2 which is later than T1 when the transmission between the first electronic device and the L second electronic devices is in the second sub transmission stage of the P sub transmission stages, wherein the second prompt effect is a prompt effect different from the first prompt effect; and generating and outputting the second prompt effect. 8. The method according to claim 7, wherein the first sub transmission stage is a stage where there are data transmission channels established between the first electronic device and the L second electronic devices, and the first prompt effect is a prompt effect which prompts with light of a first preset intensity; or
the first sub transmission stage is a stage where there is a first correspondence relation between the first electronic device and the L second electronic devices but no data connection is established, and the first prompt effect is a prompt effect which prompts with light of a second preset intensity; or the second sub transmission stage is a stage where the first electronic device and the L second electronic devices perform data transmission therebetween, and the second prompt effect is a prompt effect which prompts by emitting light at a preset frequency; or the second sub transmission stage is a stage where there are the data transmission channels established between the first electronic device and the L second electronic devices but no data transmission is performed, and the second prompt effect is a prompt effect which prompts with light of a third preset intensity. 9. The method according to claim 4, wherein after the establishing the data transmission channels between the first electronic device and the L second electronic devices of the N second electronic devices, the method further comprises:
acquiring first data from the L second electronic devices. 10. The method according to claim 9, wherein the acquiring first data from the L second electronic devices comprises:
acquiring the first data from preset directories of the L second electronic devices; or acquiring the first data based on a second selection operation of the user. 11. The method according to claim 4, wherein after the establishing the data transmission channels between the first electronic device and the L second electronic devices of the N second electronic devices, the method further comprises:
judging whether there is a second operation for the first electronic device which meets a second preset condition; and disconnecting the data transmission channels when there is a second operation. 12. An electronic device, wherein there is a first correspondence relation between the electronic device and N second electronic devices, where N is an integer greater than or equal to 1, the electronic device comprising:
a detecting module configured to detect to acquire a first operation for the electronic device; a judging module configured to judge whether the first operation meets a first preset condition; and a generating module configured to generate, by the electronic device, a first prompt effect for the first operation and control each of the N second electronic devices to generate the first prompt effect when the first operation meets the first preset condition. 13. The electronic device according to claim 12, wherein the judging module is further configured to:
judge whether the first operation is an operation of data transmission. 14. The electronic device according to claim 13, wherein, the generating module is further configured to:
transmit first connection requests to the N second electronic devices such that the N second electronic devices generate the first prompt effect based on the first connection requests in case that there are data connections between the electronic device and the N second electronic devices; or transmit a first connection request to a server such that the N second electronic devices are controlled by the server to generate the first prompt effect in case that the electronic device is connected to the N second electronic devices via the server; or broadcast a first connection request in the network system in which the electronic device resides such that the N second electronic devices generate the first prompt effect after they have received the first connection request. 15. The electronic device according to claim 12, wherein the electronic device further comprises:
an establishing module configured to, after it is judged whether the first operation meets a first preset condition, establish data transmission channels between the electronic device and L second electronic devices of the N second electronic devices when the first operation meets the first preset condition, where L is an integer less than or equal to N. 16. The electronic device according to claim 15, wherein the establishing module comprises:
a receiving unit configured to receive K pieces of feedback information sent from K second electronic devices of the N second electronic devices, wherein the feedback information represents consent to establish the data connection with the electronic device, and K is an integer greater than or equal to L and less than or equal to N; and an establishing unit configured to establish the data transmission channels between the electronic device and the L second electronic devices based on the K pieces of feedback information. 17. The electronic device according to claim 16, wherein the establishing unit further comprises:
a displaying sub-unit configured to display K pieces of identification information corresponding to the K pieces of feedback information on a display unit of the electronic device; a determining sub-unit configured to determine L pieces of identification information of the K identification information corresponding to the L second electronic devices based on a first selection operation from the user of the electronic device; and an establishing sub-unit configured to establish the data transmission channels between the electronic device and the L second electronic devices based on the L pieces of identification information. 18. The electronic device according to claim 15, wherein when the data transmission comprises at least P sub transmission stages and the P sub transmission stages correspond to P prompt effects in a first prompt mode in a one-to-one manner, where P is an integer greater than or equal to 2, the establishing module comprises:
a first determining unit configured to determine a first prompt effect of the P prompt effects corresponding to a first sub transmission stage at the time of T1 when the transmission between the electronic device and the L second electronic devices is in the first sub transmission stage of the P sub transmission stages, where the fact that the first operation meets the first preset condition represents the transmission between the electronic device and the L second electronic devices is in the first sub transmission stage; a first generating unit configured to generate and output the first prompt effect; a second determining unit configured to determine a second prompt effect of the P prompt effects corresponding to a second sub transmission stage at the time of T2 which is later than T1 when the transmission between the electronic device and the L second electronic devices is in the second sub transmission stage of the P sub transmission stages, where the second prompt effect is a prompt effect different from the first prompt effect; and a second generating unit configured to generate and output the second prompt effect. 19. The electronic device according to claim 18, wherein the first sub transmission stage is a stage where there are data transmission channels established between the electronic device and the L second electronic devices, and the first prompt effect is a prompt effect which prompts with light of a first preset intensity; or
the first sub transmission stage is a stage where there is a first correspondence relation between the electronic device and the L second electronic devices but no data connection is established, and the first prompt effect is a prompt effect which prompts with light of a second preset intensity; or the second sub transmission stage is a stage where the electronic device and the L second electronic devices perform data transmission therebetween, and the second prompt effect is a prompt effect which prompts by emitting light at a preset frequency; or the second sub transmission stage is a stage where there are data transmission channels established between the electronic device and the L second electronic devices but no data transmission is performed, and the second prompt effect is a prompt effect which prompts with light of a third preset intensity. 20. The electronic device according to claim 15, wherein the electronic device further comprises:
an acquiring module configured to, after the data transmission channels between the electronic device and the L second electronic devices of the N second electronic devices have been established, acquire first data from the L second electronic devices. 21. The electronic device according to claim 20, wherein the acquiring module is further configured to:
acquire the first data from preset directories of the L second electronic devices; or acquire the first data based on a second selection operation of the user. 22. The electronic device according to claim 15, wherein the electronic device further comprises:
a judging module configured to, after the data transmission channels between the electronic device and the L second electronic devices of the N second electronic devices have been established, judge whether there is a second operation for the electronic device which meets a second preset condition; and a disconnecting module configured to disconnect the data transmission channels when there is a second operation. 23. A prompting method for use in a first electronic device, wherein there is data transmission between the first electronic device and L second electronic devices, where L is an integer greater than or equal to 1, the data transmission comprises at least P sub transmission stages, and the P sub transmission stages correspond to P prompt effects in a first prompt mode in a one-to-one manner, where P is an integer greater than or equal to 2, the method comprising:
determining a first prompt effect of the P prompt effects corresponding to a first sub transmission stage at the time of T1 when the transmission between the first electronic device and the L second electronic devices is in the first sub transmission stage of the P sub transmission stages; generating and outputting the first prompt effect; determining a second prompt effect of the P prompt effects corresponding to a second sub transmission stage at the time of T2 which is later than T1 when the transmission between the first electronic device and the L second electronic devices is in the second sub transmission stage of the P sub transmission stages, wherein the second prompt effect is a prompt effect different from the first prompt effect; and generating and outputting the second prompt effect. 24. The method according to claim 23, wherein the first sub transmission stage is a stage where there are data transmission channels established between the first electronic device and the L second electronic devices, and the first prompt effect is a prompt effect which prompts with light of a first preset intensity; or
the first sub transmission stage is a stage where there is a first correspondence relation between the first electronic device and the L second electronic devices but no data connection is established, and the first prompt effect is a prompt effect which prompts with light of a second preset intensity; or the second sub transmission stage is a stage where the first electronic device and the L second electronic devices perform data transmission therebetween, and the second prompt effect is a prompt effect which prompts by emitting light at a preset frequency; or the second sub transmission stage is a stage where there are data transmission channels established between the first electronic device and the L second electronic devices but no data transmission is performed, and the second prompt effect is a prompt effect which prompts with light of a third preset intensity. 25. An electronic device, wherein there is data transmission between the electronic device and L second electronic devices, where L is an integer greater than or equal to 1, the data transmission comprises at least P sub transmission stages, and the P sub transmission stages correspond to P prompt effects in a first prompt mode in a one-to-one manner, where P is an integer greater than or equal to 2, the electronic device comprising:
a first determining module configured to determine a first prompt effect of the P prompt effects corresponding to a first sub transmission stage at the time of T1 when the transmission between the electronic device and the L second electronic devices is in the first sub transmission stage of the P sub transmission stages; a first generating module configured to generate and output the first prompt effect; a second determining module configured to determine a second prompt effect of the P prompt effects corresponding to a second sub transmission stage at the time of T2 which is later than T1 when the transmission between the first electronic device and the L second electronic devices is in the second sub transmission stage of the P sub transmission stages, wherein the second prompt effect is a prompt effect different from the first prompt effect; and a second generating module configured to generate and output the second prompt effect. 26. The electronic device according to claim 25, wherein the first sub transmission stage is a stage where there are data transmission channels established between the electronic device and the L second electronic devices, and the first prompt effect is a prompt effect which prompts with light of a first preset intensity; or
the first sub transmission stage is a stage where there is a first correspondence relation between the electronic device and the L second electronic devices but no data connection is established, and the first prompt effect is a prompt effect which prompts with light of a second preset intensity; or the second sub transmission stage is a stage where the electronic device and the L second electronic devices perform data transmission therebetween, and the second prompt effect is a prompt effect which prompts by emitting light at a preset frequency; or the second sub transmission stage is a stage where there are data transmission channels established between the electronic device and the L second electronic devices but no data transmission is performed, and the second prompt effect is a prompt effect which prompts with light of a third preset intensity.
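For illustration only (this is not part of the claims), the staged prompting recited in claims 23 through 26 above reduces to a one-to-one lookup from the current sub transmission stage to a prompt effect. The following sketch uses hypothetical stage names and placeholder labels for the claimed first, second, and third preset intensities and the preset frequency:

```python
from enum import Enum, auto

class Stage(Enum):
    """Hypothetical labels for the sub transmission stages named in the claims."""
    CORRESPONDENCE_ONLY = auto()  # correspondence relation exists, no data connection
    CHANNEL_ESTABLISHED = auto()  # channels established, no data transmission yet
    TRANSFERRING = auto()         # data transmission in progress

# One-to-one mapping of P stages to P prompt effects, as in claim 23.
# The intensity/frequency strings are placeholders, not values from the claims.
PROMPT_EFFECTS = {
    Stage.CORRESPONDENCE_ONLY: ("steady_light", "second_preset_intensity"),
    Stage.CHANNEL_ESTABLISHED: ("steady_light", "first_preset_intensity"),
    Stage.TRANSFERRING:        ("blinking_light", "preset_frequency"),
}

def prompt_for(stage: Stage) -> tuple[str, str]:
    """Determine the prompt effect corresponding to the current stage."""
    return PROMPT_EFFECTS[stage]
```

Because the mapping is one-to-one, the effect determined at a later time T2 for a different stage necessarily differs from the effect at T1, matching the "second prompt effect is a prompt effect different from the first prompt effect" limitation.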
Application No. 14,943,559, Art Unit 2,424. Systems and methods presented herein provide for distributing asset opportunities across COD providers. One system includes an asset load manager (ALM) that receives information from the COD providers about asset opportunities for COD content of the COD providers. The ALM selects an asset opportunity and delivers it to a trading platform that offers the asset opportunity to the remaining COD providers. The ALM also receives, from the trading platform, sale information of the asset opportunity to another of the COD providers and information of an asset used to fill the asset opportunity. An asset opportunity information system (AOIS) interfaces with an asset decision system (ADS) to direct the ADS to configure asset rankings and removals according to rules of the second COD provider. The ADS directs an asset insertion into the COD content of the first COD provider based on the asset ranking. 1. A system operable to distribute asset opportunities across a plurality of content on demand (COD) providers, the system comprising a processor operable to implement at least one of an asset load manager (ALM) and an asset opportunity information system (AOIS):
the ALM being operable to interface with the COD providers, to receive information from the COD providers about asset opportunities for COD content of the COD providers, to select a first of the asset opportunities from a first of the COD providers, and to deliver the first asset opportunity to a trading platform that offers the first asset opportunity to the remaining COD providers, wherein the first asset opportunity comprises demographic information intended for the COD content, and wherein the ALM is further operable to receive, from the trading platform, sale information of the first asset opportunity to a second of the COD providers and information of an asset used to fill the first asset opportunity; and the AOIS being operable to interface with an asset decision system (ADS), to direct the ADS to configure asset rankings and removals according to rules of the second COD provider, wherein the ADS directs an asset insertion into the COD content of the first COD provider based on the asset ranking. 2. The system of claim 1, wherein:
the ADS is operable with the first COD provider; and the AOIS is further operable to interface with an ADS of the second COD provider. 3. The system of claim 1, wherein:
the ALM is further operable to determine a number of impressions for the first asset opportunity, and to present the number of impressions with the first asset opportunity to the trading platform. 4. The system of claim 1, wherein:
the ALM is further operable to process sale price information of the first asset opportunity; and the AOIS is further operable to direct the first COD provider to update a value associated with the first asset opportunity based on the sale price information. 5. The system of claim 1, wherein:
the information of the asset used to fill the first asset opportunity includes a genre of the COD content, an intended demographic for the COD content, and a date and time when the asset was used to fill the first asset opportunity. 6. A method for distributing asset opportunities across a plurality of content on demand (COD) providers, the method comprising:
interfacing with the COD providers to receive information from the COD providers about asset opportunities for COD content of the COD providers; selecting a first of the asset opportunities from a first of the COD providers; delivering the first asset opportunity to a trading platform that offers the first asset opportunity to the remaining COD providers, wherein the first asset opportunity comprises demographic information intended for the COD content; receiving, from the trading platform, sale information of the first asset opportunity to a second of the COD providers and information of an asset used to fill the first asset opportunity; and interfacing with an asset decision system (ADS) to direct the ADS to configure asset rankings and removals according to rules of the second COD provider, wherein the ADS directs an asset insertion into the COD content of the first COD provider based on the asset ranking. 7. The method of claim 6, wherein:
the ADS is operable with the first COD provider; and the method further comprises interfacing with an ADS of the second COD provider. 8. The method of claim 6, further comprising:
determining a number of impressions for the first asset opportunity; and presenting the number of impressions with the first asset opportunity to the trading platform. 9. The method of claim 6, further comprising:
processing sale price information of the first asset opportunity; and directing the first COD provider to update a value associated with the first asset opportunity based on the sale price information. 10. The method of claim 6, wherein:
the information of the asset used to fill the first asset opportunity includes a genre of the COD content, an intended demographic for the COD content, and a date and time when the asset was used to fill the first asset opportunity. 11. A non-transitory computer readable medium comprising instructions that, when executed by a processor, direct the processor to distribute asset opportunities across a plurality of content on demand (COD) providers, the instructions further directing the processor to:
interface with the COD providers to receive information from the COD providers about asset opportunities for COD content of the COD providers; select a first of the asset opportunities from a first of the COD providers; deliver the first asset opportunity to a trading platform that offers the first asset opportunity to the remaining COD providers, wherein the first asset opportunity comprises demographic information intended for the COD content; receive, from the trading platform, sale information of the first asset opportunity to a second of the COD providers and information of an asset used to fill the first asset opportunity; and interface with an asset decision system (ADS) to direct the ADS to configure asset rankings and removals according to rules of the second COD provider, wherein the ADS directs an asset insertion into the COD content of the first COD provider based on the asset ranking. 12. The computer readable medium of claim 11, wherein:
the ADS is operable with the first COD provider; and the method further comprises interfacing with an ADS of the second COD provider. 13. The computer readable medium of claim 11, the instructions further directing the processor to:
determine a number of impressions for the first asset opportunity; and present the number of impressions with the first asset opportunity to the trading platform. 14. The computer readable medium of claim 11, the instructions further directing the processor to:
process sale price information of the first asset opportunity; and direct the first COD provider to update a value associated with the first asset opportunity based on the sale price information. 15. The computer readable medium of claim 11, wherein:
the information of the asset used to fill the first asset opportunity includes a genre of the COD content, an intended demographic for the COD content, and a date and time when the asset was used to fill the first asset opportunity. | Systems and methods presented herein provide for distributing asset opportunities across COD providers. One system includes an asset load manager (ALM) that receives information from the COD providers about asset opportunities for COD content of the COD providers. The ALM selects an asset opportunity and delivers it to a trading platform that offers the asset opportunity to the remaining COD providers. The ALM also receives, from the trading platform, sale information of the asset opportunity to another of the COD providers and information of an asset used to fill the asset opportunity. An asset opportunity information system (AOIS) interfaces with an asset decision system (ADS), to direct the ADS to configure asset rankings and removals according to rules of the second COD provider. The ADS directs an asset insertion into the COD content of the first COD provider based on the asset ranking.1. A system operable to distribute asset opportunities across a plurality of content on demand (COD) providers, the system comprising a processor operable to implement at least one of an asset load manager (ALM) and an asset opportunity information system (AOIS):
the ALM being operable to interface with the COD providers, to receive information from the COD providers about asset opportunities for COD content of the COD providers, to select a first of the asset opportunities from a first of the COD providers, and to deliver the first asset opportunity to a trading platform that offers the first asset opportunity to the remaining COD providers, wherein the first asset opportunity comprises demographic information intended for the COD content, and wherein the ALM is further operable to receive, from the trading platform, sale information of the first asset opportunity to a second of the COD providers and information of an asset used to fill the first asset opportunity; and the AOIS being operable to interface with an asset decision system (ADS), to direct the ADS to configure asset rankings and removals according to rules of the second COD provider, wherein the ADS directs an asset insertion into the COD content of the first COD provider based on the asset ranking. 2. The system of claim 1, wherein:
the ADS is operable with the first COD provider; and the AOIS is further operable to interface with an ADS of the second COD provider. 3. The system of claim 1, wherein:
the ALM is further operable to determine a number of impressions for the first asset opportunity, and to present the number of impressions with the first asset opportunity to the trading platform. 4. The system of claim 1, wherein:
the ALM is further operable to process sale price information of the first asset opportunity; and the AOIS is further operable to direct the first COD provider to update a value associated with the first asset opportunity based on the sale price information. 5. The system of claim 1, wherein:
the information of the asset used to fill the first asset opportunity includes a genre of the COD content, an intended demographic for the COD content, and a date and time when the asset was used to fill the first asset opportunity. 6. A method for distributing asset opportunities across a plurality of content on demand (COD) providers, the method comprising:
interfacing with the COD providers to receive information from the COD providers about asset opportunities for COD content of the COD providers; selecting a first of the asset opportunities from a first of the COD providers; delivering the first asset opportunity to a trading platform that offers the first asset opportunity to the remaining COD providers, wherein the first asset opportunity comprises demographic information intended for the COD content; receiving, from the trading platform, sale information of the first asset opportunity to a second of the COD providers and information of an asset used to fill the first asset opportunity; and interfacing with an asset decision system (ADS) to direct the ADS to configure asset rankings and removals according to rules of the second COD provider, wherein the ADS directs an asset insertion into the COD content of the first COD provider based on the asset ranking. 7. The method of claim 6, wherein:
the ADS is operable with the first COD provider; and the method further comprises interfacing with an ADS of the second COD provider. 8. The method of claim 6, further comprising:
determining a number of impressions for the first asset opportunity; and presenting the number of impressions with the first asset opportunity to the trading platform. 9. The method of claim 6, further comprising:
processing sale price information of the first asset opportunity; and directing the first COD provider to update a value associated with the first asset opportunity based on the sale price information. 10. The method of claim 6, wherein:
the information of the asset used to fill the first asset opportunity includes a genre of the COD content, an intended demographic for the COD content, and a date and time when the asset was used to fill the first asset opportunity. 11. A non-transitory computer readable medium comprising instructions that, when executed by a processor, direct the processor to distribute asset opportunities across a plurality of content on demand (COD) providers, the instructions further directing the processor to:
interface with the COD providers to receive information from the COD providers about asset opportunities for COD content of the COD providers; select a first of the asset opportunities from a first of the COD providers; deliver the first asset opportunity to a trading platform that offers the first asset opportunity to the remaining COD providers, wherein the first asset opportunity comprises demographic information intended for the COD content; receive, from the trading platform, sale information of the first asset opportunity to a second of the COD providers and information of an asset used to fill the first asset opportunity; and interface with an asset decision system (ADS) to direct the ADS to configure asset rankings and removals according to rules of the second COD provider, wherein the ADS directs an asset insertion into the COD content of the first COD provider based on the asset ranking. 12. The computer readable medium of claim 11, wherein:
the ADS is operable with the first COD provider; and the method further comprises interfacing with an ADS of the second COD provider. 13. The computer readable medium of claim 11, the instructions further directing the processor to:
determine a number of impressions for the first asset opportunity; and present the number of impressions with the first asset opportunity to the trading platform. 14. The computer readable medium of claim 11, the instructions further directing the processor to:
process sale price information of the first asset opportunity; and direct the first COD provider to update a value associated with the first asset opportunity based on the sale price information. 15. The computer readable medium of claim 11, wherein:
the information of the asset used to fill the first asset opportunity includes a genre of the COD content, an intended demographic for the COD content, and a date and time when the asset was used to fill the first asset opportunity. | 2,400 |
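The workflow recited in claim 6 above (interface with providers, select a first asset opportunity, deliver it to a trading platform, receive the sale information, then direct the ADS per the buyer's rules) can be sketched in code. This is a hypothetical illustration only: the class names, dict shapes, and the stub trading platform are assumptions, not anything defined by the claims.

```python
# Hypothetical sketch of the claim-6 method. The stub platform simply
# "sells" the offered opportunity to the first of the remaining COD
# providers; real sale logic and ADS behavior are out of scope.

class TradingPlatform:
    """Stub platform: offers an opportunity to the remaining providers
    and reports the sale plus the asset used to fill it."""
    def offer(self, opportunity, other_providers):
        buyer = other_providers[0]  # pretend the first bidder wins
        return {"buyer": buyer,
                "asset": {"genre": "drama",  # illustrative asset info
                          "demographic": opportunity["demographic"]}}

def distribute(providers, platform):
    # Interface with the COD providers and select a first opportunity
    # from a first provider.
    first_provider = providers[0]
    opportunity = first_provider["opportunities"][0]
    # Deliver it to the trading platform, which offers it to the
    # remaining providers, and receive the sale information back.
    sale = platform.offer(opportunity, providers[1:])
    # Direct the ADS to configure asset rankings and removals
    # according to the buying (second) provider's rules; here we just
    # record whose rules apply.
    ads_config = {"rules_of": sale["buyer"]["name"]}
    return sale, ads_config

providers = [
    {"name": "provider-1", "opportunities": [{"demographic": "18-34"}]},
    {"name": "provider-2", "opportunities": []},
]
sale, ads_config = distribute(providers, TradingPlatform())
```

After running, `sale` carries the buyer and the asset information (genre, intended demographic) that claim 5/10/15 enumerate, and `ads_config` reflects that rankings follow the second provider's rules.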
7,561 | 7,561 | 14,341,481 | 2,483 | Various embodiments are generally directed to an apparatus, method and other techniques for storing, in memory, at least one master forward transform matrix comprising signed constants having a defined number of precision bits and a sign bit and determining, by processing circuitry, which forward transform matrix to use to perform a transformation based on at least a transform unit size. Further, various techniques may include performing, by the processing circuitry, the transformation on residuals of pixel values of a frame using one of the at least one master forward transform matrix or a forward transform matrix derived from one of the master forward transform matrix at least partially based on the determination. | 1. An apparatus, comprising:
processing circuitry; a memory coupled with the processing circuitry, the memory to store at least one master forward transform matrix comprising signed constants having a defined number of precision bits and a sign bit, and the processing circuitry to perform the transformation on residuals of pixel values of a frame using one of the at least one master forward transform matrix or a forward transform matrix derived from one of the at least one master forward transform matrix, and determine which forward transform matrix to use to perform the transformation based on at least a transform unit size. 2. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix and a 4×4 asymmetric discrete sine transform (ADST) forward transform matrix. 3. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive a 16×16 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first 16 columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 4. The apparatus of claim 3, the processing circuitry to derive an 8×8 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first eight columns of the 16×16 DCT forward transform matrix starting with the first row and the first column. 5. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive an 8×8 DCT forward transform matrix to perform the transformation by extracting the signed constants from every fourth row and the first eight columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 6. The apparatus of claim 5, the processing circuitry to derive a 4×4 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first four columns of the 8×8 DCT forward transform matrix starting with the first row and the first column. 7. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive a 4×4 DCT forward transform matrix to perform the transformation by extracting the signed constants from every eighth row and the first four columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 8. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive a 16×16 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every other row in reverse order and the first 16 columns of the 32×32 DCT forward transform matrix starting with a last row and a first column, wherein every other column is negated starting with a second column. 9. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 DCT forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive an 8×8 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every fourth row in reverse order and the first eight columns of the 32×32 DCT forward transform matrix starting with a second to last row and a first column, wherein every other column is negated starting with a second column. 10. The apparatus of claim 1, the memory comprising a read-only memory. 11. The apparatus of claim 1, the processing circuitry to perform the transformation based on a VP9 video compression standard. 12. A computer-implemented method, comprising:
storing, in memory, at least one master forward transform matrix comprising signed constants having a defined number of precision bits and a sign bit; determining, by processing circuitry, which forward transform matrix to use to perform a transformation based on at least a transform unit size; and performing, by the processing circuitry, the transformation on residuals of pixel values of a frame using one of the at least one master forward transform matrix or a forward transform matrix derived from one of the master forward transform matrix at least partially based on the determination. 13. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix and a 4×4 asymmetric discrete sine transform (ADST) forward transform matrix. 14. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants; and
deriving a 16×16 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first 16 columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 15. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants; and
deriving an 8×8 DCT forward transform matrix to perform the transformation by extracting the signed constants from every fourth row and the first eight columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 16. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 DCT forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
deriving a 4×4 DCT forward transform matrix to perform the transformation by extracting the signed constants from every eighth row and the first four columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 17. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
deriving a 16×16 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every other row in reverse order and the first 16 columns of the 32×32 DCT forward transform matrix starting with a last row and a first column, wherein every other column is negated starting with a second column. 18. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
deriving an 8×8 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every fourth row in reverse order and the first eight columns of the 32×32 DCT forward transform matrix starting with a second to last row and a first column, wherein every other column is negated starting with a second column. 19. An article comprising a computer-readable storage medium comprising a plurality of instructions that when executed enable a system to:
store, in memory, at least one master forward transform matrix comprising signed constants having a defined number of precision bits and a sign bit; determine, by processing circuitry, which forward transform matrix to use to perform a transformation based on at least a transform unit size; and perform, by processing circuitry, the transformation on residuals of pixel values of a frame using one of the at least one master forward transform matrices or a forward transform matrix derived from one of the master forward transform matrix at least partially based on the determination. 20. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transformation (DCT) forward transform matrix and a 4×4 asymmetric discrete sine transformation (ADST) forward transform matrix. 21. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive a 16×16 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first 16 columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 22. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive an 8×8 DCT forward transform matrix to perform the transformation by extracting the signed constants from every fourth row and the first eight columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 23. 
The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive a 4×4 DCT forward transform matrix to perform the transformation by extracting the signed constants from every eighth row and the first four columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 24. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive a 16×16 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every other row in reverse order and the first 16 columns of the 32×32 DCT forward transform matrix starting with a last row and a first column, wherein every other column is negated starting with a second column. 25. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive an 8×8 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every fourth row in reverse order and the first eight columns of the 32×32 DCT forward transform matrix starting with a second to last row and a first column, wherein every other column is negated starting with a second column. 
| Various embodiments are generally directed to an apparatus, method and other techniques for storing, in memory, at least one master forward transform matrix comprising signed constants having a defined number of precision bits and a sign bit and determining, by processing circuitry, which forward transform matrix to use to perform a transformation based on at least a transform unit size. Further, various techniques may include performing, by the processing circuitry, the transformation on residuals of pixel values of a frame using one of the at least one master forward transform matrix or a forward transform matrix derived from one of the master forward transform matrix at least partially based on the determination.1. An apparatus, comprising:
processing circuitry; a memory coupled with the processing circuitry, the memory to store at least one master forward transform matrix comprising signed constants having a defined number of precision bits and a sign bit, and the processing circuitry to perform the transformation on residuals of pixel values of a frame using one of the at least one master forward transform matrix or a forward transform matrix derived from one of the at least one master forward transform matrix, and determine which forward transform matrix to use to perform the transformation based on at least a transform unit size. 2. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix and a 4×4 asymmetric discrete sine transform (ADST) forward transform matrix. 3. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive a 16×16 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first 16 columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 4. The apparatus of claim 3, the processing circuitry to derive an 8×8 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first eight columns of the 16×16 DCT forward transform matrix starting with the first row and the first column. 5. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive an 8×8 DCT forward transform matrix to perform the transformation by extracting the signed constants from every fourth row and the first eight columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 6. The apparatus of claim 5, the processing circuitry to derive a 4×4 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first four columns of the 8×8 DCT forward transform matrix starting with the first row and the first column. 7. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive a 4×4 DCT forward transform matrix to perform the transformation by extracting the signed constants from every eighth row and the first four columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 8. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive a 16×16 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every other row in reverse order and the first 16 columns of the 32×32 DCT forward transform matrix starting with a last row and a first column, wherein every other column is negated starting with a second column. 9. The apparatus of claim 1, the at least one master forward transform matrix comprising a 32×32 DCT forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
the processing circuitry to derive an 8×8 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every fourth row in reverse order and the first eight columns of the 32×32 DCT forward transform matrix starting with a second to last row and a first column, wherein every other column is negated starting with a second column. 10. The apparatus of claim 1, the memory comprising a read-only memory. 11. The apparatus of claim 1, the processing circuitry to perform the transformation based on a VP9 video compression standard. 12. A computer-implemented method, comprising:
storing, in memory, at least one master forward transform matrix comprising signed constants having a defined number of precision bits and a sign bit; determining, by processing circuitry, which forward transform matrix to use to perform a transformation based on at least a transform unit size; and performing, by the processing circuitry, the transformation on residuals of pixel values of a frame using one of the at least one master forward transform matrix or a forward transform matrix derived from one of the master forward transform matrix at least partially based on the determination. 13. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix and a 4×4 asymmetric discrete sine transform (ADST) forward transform matrix. 14. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants; and
deriving a 16×16 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first 16 columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 15. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants; and
deriving an 8×8 DCT forward transform matrix to perform the transformation by extracting the signed constants from every fourth row and the first eight columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 16. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 DCT forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
deriving a 4×4 DCT forward transform matrix to perform the transformation by extracting the signed constants from every eighth row and the first four columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 17. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
deriving a 16×16 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every other row in reverse order and the first 16 columns of the 32×32 DCT forward transform matrix starting with a last row and a first column, wherein every other column is negated starting with a second column. 18. The computer-implemented method of claim 12, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and
deriving an 8×8 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every fourth row in reverse order and the first eight columns of the 32×32 DCT forward transform matrix starting with a second to last row and a first column, wherein every other column is negated starting with a second column. 19. An article comprising a computer-readable storage medium comprising a plurality of instructions that when executed enable a system to:
store, in memory, at least one master forward transform matrix comprising signed constants having a defined number of precision bits and a sign bit; determine, by processing circuitry, which forward transform matrix to use to perform a transformation based on at least a transform unit size; and perform, by processing circuitry, the transformation on residuals of pixel values of a frame using one of the at least one master forward transform matrices or a forward transform matrix derived from one of the master forward transform matrix at least partially based on the determination. 20. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transformation (DCT) forward transform matrix and a 4×4 asymmetric discrete sine transformation (ADST) forward transform matrix. 21. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive a 16×16 DCT forward transform matrix to perform the transformation by extracting the signed constants from every other row and the first 16 columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 22. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive an 8×8 DCT forward transform matrix to perform the transformation by extracting the signed constants from every fourth row and the first eight columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 23. 
The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive a 4×4 DCT forward transform matrix to perform the transformation by extracting the signed constants from every eighth row and the first four columns of the 32×32 DCT forward transform matrix starting with a first row and a first column. 24. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive a 16×16 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every other row in reverse order and the first 16 columns of the 32×32 DCT forward transform matrix starting with a last row and a first column, wherein every other column is negated starting with a second column. 25. The storage medium of claim 19, the at least one master forward transform matrix comprising a 32×32 discrete cosine transform (DCT) forward transform matrix further comprising 32 columns and 32 rows of signed constants, and the storage medium comprising instructions that when executed enable the system to derive an 8×8 asymmetric discrete sine transform (ADST) forward transform matrix to perform the transformation by extracting the signed constants from every fourth row in reverse order and the first eight columns of the 32×32 DCT forward transform matrix starting with a second to last row and a first column, wherein every other column is negated starting with a second column. | 2,400 |
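Claims 21-25 in the record above describe deriving smaller DCT and ADST forward transform matrices by sub-sampling rows and columns of a single 32×32 DCT master matrix. As an illustration only, the two 16×16 derivations might be sketched in plain Python as below (the function names and the list-of-lists matrix representation are assumptions for the sketch, not from the patent):

```python
def derive_16x16_dct(dct32):
    # Claim 21: every other row and the first 16 columns of the 32x32
    # matrix, starting with the first row and first column.
    return [row[:16] for row in dct32[0::2]]

def derive_16x16_adst(dct32):
    # Claim 24: every other row in reverse order (starting with the
    # last row) and the first 16 columns, with every other column
    # negated starting with the second column.
    rows = [row[:16] for row in dct32[::-1][0::2]]
    return [[-v if j % 2 else v for j, v in enumerate(row)] for row in rows]
```

The same slicing pattern with a step of 4 or 8 would cover the 8×8 and 4×4 derivations in claims 22, 23 and 25.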
7,562 | 7,562 | 14,582,310 | 2,413 | Example embodiments relate to a communication system including an antenna and a plurality of multi-band remote radio heads operationally coupled to the antenna. The remote radio heads are configured to support transmission and reception at two or more frequency bands. Example embodiments relate to a communication system including an antenna and a plurality of multi-band remote radio heads operationally coupled to the antenna, each multi-band remote radio head including a plurality of multi-band duplexers. | 1. A communication system, comprising:
an antenna; and a plurality of multi-band remote radio heads operationally coupled to the antenna; the remote radio heads being configured to support transmission and reception at two or more frequency bands; and the remote radio heads being in a housing. 2. The system of claim 1, wherein one or more of the remote radio heads comprises at least two multi-band duplexers. 3. The system of claim 2, wherein at least two of the remote radio heads comprise:
two power amplifiers for each of the two or more frequency bands; and two low-noise amplifiers for each of the two or more frequency bands. 4. The system of claim 1, wherein the remote radio heads comprise 2Tx/2Rx remote radio heads. 5. The system of claim 4, wherein the antenna comprises a plurality of dual-polarized antennas, each dual-polarized antenna being coupled to a corresponding one of the 2Tx/2Rx remote radio heads. 6. The system of claim 5, wherein the antenna is a four-port antenna for a 4×4 MIMO system configuration. 7. The system of claim 5, wherein the antenna comprises two dual-polarized vertical antenna arrays. 8. The system of claim 1, wherein the antenna is a six-port antenna for a 6×6 MIMO system configuration including three dual-polarized antennas and three multi-band remote radio heads. 9. A communication system, comprising:
an antenna; and a plurality of multi-band remote radio heads operationally coupled to the antenna, each multi-band remote radio head including a plurality of multi-band duplexers; and the multi-band remote radio heads being co-located and configured to support transmission and reception at two or more frequency bands. 10. The communication system of claim 9, wherein each of the plurality of multi-band duplexers comprises a combination of single-band duplexers. 11. The communication system of claim 9, wherein the multi-band remote radio heads are not connected to multi-band diplexers via radio-frequency connectors. 12. The communication system of claim 9, wherein the multi-band remote radio heads are co-located with the antenna. | Example embodiments relate to a communication system including an antenna and a plurality of multi-band remote radio heads operationally coupled to the antenna. The remote radio heads are configured to support transmission and reception at two or more frequency bands. Example embodiments relate to a communication system including an antenna and a plurality of multi-band remote radio heads operationally coupled to the antenna, each multi-band remote radio head including a plurality of multi-band duplexers.1. A communication system, comprising:
an antenna; and a plurality of multi-band remote radio heads operationally coupled to the antenna; the remote radio heads being configured to support transmission and reception at two or more frequency bands; and the remote radio heads being in a housing. 2. The system of claim 1, wherein one or more of the remote radio heads comprises at least two multi-band duplexers. 3. The system of claim 2, wherein at least two of the remote radio heads comprise:
two power amplifiers for each of the two or more frequency bands; and two low-noise amplifiers for each of the two or more frequency bands. 4. The system of claim 1, wherein the remote radio heads comprise 2Tx/2Rx remote radio heads. 5. The system of claim 4, wherein the antenna comprises a plurality of dual-polarized antennas, each dual-polarized antenna being coupled to a corresponding one of the 2Tx/2Rx remote radio heads. 6. The system of claim 5, wherein the antenna is a four-port antenna for a 4×4 MIMO system configuration. 7. The system of claim 5, wherein the antenna comprises two dual-polarized vertical antenna arrays. 8. The system of claim 1, wherein the antenna is a six-port antenna for a 6×6 MIMO system configuration including three dual-polarized antennas and three multi-band remote radio heads. 9. A communication system, comprising:
an antenna; and a plurality of multi-band remote radio heads operationally coupled to the antenna, each multi-band remote radio head including a plurality of multi-band duplexers; and the multi-band remote radio heads being co-located and configured to support transmission and reception at two or more frequency bands. 10. The communication system of claim 9, wherein each of the plurality of multi-band duplexers comprises a combination of single-band duplexers. 11. The communication system of claim 9, wherein the multi-band remote radio heads are not connected to multi-band diplexers via radio-frequency connectors. 12. The communication system of claim 9, wherein the multi-band remote radio heads are co-located with the antenna. | 2,400 |
7,563 | 7,563 | 14,261,908 | 2,433 | A tool and method examine error report information from a computer to determine not only whether a virus or other malware may be present on the computer but also may determine what vulnerability a particular exploit was attempting to use to subvert security mechanism to install the virus. A system monitor may collect both error reports and information about the error report, such as geographic location, hardware configuration, and software/operating system version information to build a profile of the spread of an attack and to be able to issue notifications related to increased data collection for errors, including crashes related to suspected services under attack. | 1. A computer-implemented method comprising:
obtaining an error report generated by a computing system that includes error data related to one or more errors within the computing system; analyzing, with a computer processor, the error report to identify information indicative of an attempt to subvert a security mechanism of the computing system; analyzing the error report for information indicative of a point of attack within the computing system of the attempt to subvert the security mechanism; and storing data associated with the attempt to subvert the security mechanism. 2. The method of claim 1, and further comprising:
analyzing, at a system monitor, a collection of error report data to determine a pattern of attack. 3. The method of claim 2, and further comprising instructing, using the system monitor, the computing system to adjust an amount of data obtained when experiencing an error related to the pattern of attack. 4. The method of claim 3, and further comprising adjusting, using the system monitor, a computing system policy that governs parameters concerning one or more of error reporting, response actions, and reporting configuration within the computing system. 5. The method of claim 1, and further comprising:
determining, based on the error data, one or more of a type of service under attack, a geographic region under attack, or a system configuration under attack. 6. The method of claim 1, and further comprising updating intrusion detection settings based on the error data. 7. The method of claim 1, wherein analyzing the error report for information indicative of a point of attack comprises:
identifying a hijacked control structure; and identifying a location of a vulnerability as indicated by the point of attack. 8. The method of claim 1, and further comprising:
modifying an exploit detection and deterrence process based on analysis of the error report. 9. A system for analyzing error report data, the system comprising:
a network connection for receiving error reports from a plurality of networked computers; a data store that stores error report data, from the error reports, related to errors that occurred on one or more of the networked computers; a system monitor that analyzes the error report data to identify an attempted exploit in a service of the one or more networked computers, and determines one or more of a location of attack, a type of service under attack, or a system configuration under attack; and a computer processor that is a functional part of the system and is activated by the system monitor to facilitate analyzing the error report data. 10. The system of claim 9, wherein the plurality of networked computers comprise an enterprise network. 11. The system of claim 10, wherein the system monitor and the plurality of networked computers communicate through a local area network. 12. The system of claim 9, wherein the system monitor identifies a particular service that was targeted in an attempt to subvert a security mechanism and, in response, sends a request to one or more of the networked computers for error data associated with the particular service. 13. The system of claim 9, wherein the system monitor obtains state data regarding the service from the one or more networked computers. 14. The system of claim 13, wherein the state data comprises one or more of a security update, a firewall setting, or an intrusion detection setting on the one or more networked computers. 15. The system of claim 9, wherein the system monitor sends an alert to an operator based on the attempted exploit in the service. 16. The system of claim 9, wherein the system monitor updates intrusion detection settings based on the error data. 17. The system of claim 16, wherein the system monitor identifies a pattern of attack from the error data and instructs the plurality of networked computers to adjust an amount of data obtained when experiencing an error related to the pattern of attack. 
18. The system of claim 17, wherein the system monitor adjusts a system policy that governs parameters concerning one or more of error reporting, response actions, and reporting configuration. 19. A computer-implemented method of determining whether an error report contains evidence of an exploit, the method comprising:
receiving an error report including error data related to one or more errors within a computing system; performing, with a computer processor, exploit analysis on the error report, comprising at least one of:
identifying, from the error report, information indicative of a known exploit at an executable memory location;
identifying, from the error report, information indicative of NOPSleds;
identifying, from the error report, information indicative of a decoder loop;
identifying, from the error report, information indicative of a malicious text, a malicious string, or a malicious binary sequence;
identifying, from the error report, information indicative of a disabled defense program; or
identifying, from the error report, information indicative of a hijacked control structure; and
identifying, from the error report, a location of a vulnerability that indicates a point of attack. 20. The method of claim 19, wherein identifying a location comprises identifying an attempted exploit in a particular service of the computing system, and sending a request for error data associated with the particular service. | A tool and method examine error report information from a computer to determine not only whether a virus or other malware may be present on the computer but also may determine what vulnerability a particular exploit was attempting to use to subvert security mechanism to install the virus. A system monitor may collect both error reports and information about the error report, such as geographic location, hardware configuration, and software/operating system version information to build a profile of the spread of an attack and to be able to issue notifications related to increased data collection for errors, including crashes related to suspected services under attack.1. A computer-implemented method comprising:
obtaining an error report generated by a computing system that includes error data related to one or more errors within the computing system; analyzing, with a computer processor, the error report to identify information indicative of an attempt to subvert a security mechanism of the computing system; analyzing the error report for information indicative of a point of attack within the computing system of the attempt to subvert the security mechanism; and storing data associated with the attempt to subvert the security mechanism. 2. The method of claim 1, and further comprising:
analyzing, at a system monitor, a collection of error report data to determine a pattern of attack. 3. The method of claim 2, and further comprising instructing, using the system monitor, the computing system to adjust an amount of data obtained when experiencing an error related to the pattern of attack. 4. The method of claim 3, and further comprising adjusting, using the system monitor, a computing system policy that governs parameters concerning one or more of error reporting, response actions, and reporting configuration within the computing system. 5. The method of claim 1, and further comprising:
determining, based on the error data, one or more of a type of service under attack, a geographic region under attack, or a system configuration under attack. 6. The method of claim 1, and further comprising updating intrusion detection settings based on the error data. 7. The method of claim 1, wherein analyzing the error report for information indicative of a point of attack comprises:
identifying a hijacked control structure; and identifying a location of a vulnerability as indicated by the point of attack. 8. The method of claim 1, and further comprising:
modifying an exploit detection and deterrence process based on analysis of the error report. 9. A system for analyzing error report data, the system comprising:
a network connection for receiving error reports from a plurality of networked computers; a data store that stores error report data, from the error reports, related to errors that occurred on one or more of the networked computers; a system monitor that analyzes the error report data to identify an attempted exploit in a service of the one or more networked computers, and determines one or more of a location of attack, a type of service under attack, or a system configuration under attack; and a computer processor that is a functional part of the system and is activated by the system monitor to facilitate analyzing the error report data. 10. The system of claim 9, wherein the plurality of networked computers comprise an enterprise network. 11. The system of claim 10, wherein the system monitor and the plurality of networked computers communicate through a local area network. 12. The system of claim 9, wherein the system monitor identifies a particular service that was targeted in an attempt to subvert a security mechanism and, in response, sends a request to one or more of the networked computers for error data associated with the particular service. 13. The system of claim 9, wherein the system monitor obtains state data regarding the service from the one or more networked computers. 14. The system of claim 13, wherein the state data comprises one or more of a security update, a firewall setting, or an intrusion detection setting on the one or more networked computers. 15. The system of claim 9, wherein the system monitor sends an alert to an operator based on the attempted exploit in the service. 16. The system of claim 9, wherein the system monitor updates intrusion detection settings based on the error data. 17. The system of claim 16, wherein the system monitor identifies a pattern of attack from the error data and instructs the plurality of networked computers to adjust an amount of data obtained when experiencing an error related to the pattern of attack. 
18. The system of claim 17, wherein the system monitor adjusts a system policy that governs parameters concerning one or more of error reporting, response actions, and reporting configuration. 19. A computer-implemented method of determining whether an error report contains evidence of an exploit, the method comprising:
receiving an error report including error data related to one or more errors within a computing system; performing, with a computer processor, exploit analysis on the error report, comprising at least one of:
identifying, from the error report, information indicative of a known exploit at an executable memory location;
identifying, from the error report, information indicative of NOPSleds;
identifying, from the error report, information indicative of a decoder loop;
identifying, from the error report, information indicative of a malicious text, a malicious string, or a malicious binary sequence;
identifying, from the error report, information indicative of a disabled defense program; or
identifying, from the error report, information indicative of a hijacked control structure; and
identifying, from the error report, a location of a vulnerability that indicates a point of attack. 20. The method of claim 19, wherein identifying a location comprises identifying an attempted exploit in a particular service of the computing system, and sending a request for error data associated with the particular service. | 2,400 |
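Claim 19 in the record above enumerates concrete exploit indicators to look for in an error report, including NOP sleds. As one hedged illustration of that kind of check, a run of x86 NOP bytes (0x90) in a captured memory buffer can be found with a simple linear scan (the function name, run-length threshold, and buffer format are assumptions for this sketch, not from the patent):

```python
def find_nop_sled(memory, min_len=32, nop=0x90):
    """Scan a bytes-like buffer (e.g., a memory dump attached to an
    error report) for a run of at least min_len NOP bytes, one simple
    indicator of an attempted exploit. Returns the start offset of the
    first qualifying run, or -1 if none is found."""
    run_start, run_len = 0, 0
    for i, b in enumerate(memory):
        if b == nop:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len >= min_len:
                return run_start
        else:
            run_len = 0
    return -1
```

Real detectors would combine several such heuristics (decoder loops, hijacked control structures, known malicious byte sequences) rather than rely on any single one.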
7,564 | 7,564 | 14,895,293 | 2,413 | Systems and methods of discontinuous operation for wireless devices are provided. In one exemplary embodiment, a method may include preconfiguring ( 1501 ), by a user equipment (UE) ( 1012 ), the UE for discontinuous receive (DRX) operation in a connected state. The DRX operation may include modes of DRX operation of the UE with each mode corresponding to a level of connectivity of the UE. Further, while ( 1503 ) the UE is in the connected state, the method may include determining ( 1505 ), by the UE, the level of connectivity of the UE and sending ( 1507 ), by the UE, to a network node, a request for the DRX operation. Also, the request may include an indication of the level of connectivity of the UE. | 1-26. (canceled) 27. A method, by a user equipment (UE), for performing discontinuous receive (DRX) operation, the method comprising:
preconfiguring, by the UE, the UE for the DRX operation in a connected state, wherein the DRX operation includes DRX operation modes with each mode corresponding to a level of connectivity of the UE; and while the UE is in the connected state:
determining, by the UE, the level of connectivity of the UE; and
sending, by the UE and to a network node, a request for the DRX operation, the request including an indication of the level of connectivity of the UE. 28. The method of claim 27, wherein the request for the DRX operation includes a request to start the DRX operation. 29. The method of claim 27, wherein the request for the DRX operation includes a request to continue the DRX operation. 30. The method of claim 27, wherein the request for the DRX operation includes an indication of a change in the level of connectivity of the UE. 31. The method of claim 27, further comprising:
monitoring, by the UE, a connection between the UE and the network node; and in response to determining a change in the connection, sending, by the UE and to the network node, a request to stop the DRX operation. 32. The method of claim 31, wherein the change in the connection is associated with at least one of a handoff or a handover. 33. The method of claim 31, wherein monitoring the connection includes monitoring events associated with the connection. 34. The method of claim 31, wherein monitoring the connection includes monitoring a quality of the connection. 35. The method of claim 31, wherein monitoring the connection includes monitoring radio measurements of the connection. 36. The method of claim 27, wherein sending the request is responsive to initiating, by the UE, a random access procedure. 37. The method of claim 27, wherein sending the request for the DRX operation is responsive to sending, by the UE, to the network node, a request to be scheduled. 38. The method of claim 27, wherein the level of connectivity includes a first level of connectivity associated with a first mode of the DRX operation and a second level of connectivity associated with a second mode of the DRX operation. 39. A user equipment (UE) for performing discontinuous receive (DRX) operation, the UE comprising:
memory configured to store data and computer-executable instructions; and a processing circuit operatively coupled to the memory; wherein the processing circuit and the memory are configured to:
preconfigure the UE for the DRX operation in a connected state, wherein the DRX operation includes modes of the DRX operation with each mode corresponding to a level of connectivity of the UE; and
while the UE is in the connected state:
determine the level of connectivity of the UE; and
send, to a network node, a request for the DRX operation, wherein the request includes an indication of the level of connectivity of the UE. 40. The UE of claim 39, wherein the processing circuit and the memory are further configured to:
monitor, by the UE, a connection between the UE and the network node; and in response to determining a change in the connection, send, by the UE, to the network node, a request to stop the DRX operation. 41. A computer program product stored in a non-transitory computer readable medium for controlling a User Equipment (UE) for performing discontinuous receive (DRX) operation, the computer program product comprising software instructions which, when run on a processing circuit of the UE, causes the UE to:
preconfigure itself for the DRX operation in a connected state, wherein the DRX operation includes modes of the DRX operation with each mode corresponding to a level of connectivity of the UE; and while the UE is in the connected state:
determine the level of connectivity of the UE; and
send, to a network node, a request for the DRX operation, wherein the request includes an indication of the level of connectivity of the UE. 42. The computer program product of claim 41, wherein the software instructions, when run on the processing circuit, further cause the UE to:
monitor a connection between the UE and the network node; and in response to determining a change in the connection, send, to the network node, a request to stop the DRX operation. 43. A user equipment (UE) for performing discontinuous receive (DRX) operation, the UE comprising:
means for preconfiguring the UE for the DRX operation in a connected state, wherein the DRX operation includes DRX operation modes with each mode corresponding to a level of connectivity of the UE; and means for, while the UE is in the connected state, determining the level of connectivity of the UE; and means for, while the UE is in the connected state, sending, to a network node, a request for the DRX operation, the request including an indication of the level of connectivity of the UE. 44. The UE of claim 43, further comprising:
means for monitoring a connection between the UE and the network node; and means for, in response to determining a change in the connection, sending, to the network node, a request to stop the DRX operation. 45. A method, by a network node, for performing discontinuous receive (DRX) operation, the method comprising:
the network node receiving, from a user terminal (UE) operating in a connected state, a request for the DRX operation, wherein the request includes an indication of a level of connectivity of the UE; the network node selecting one of a plurality of DRX operation modes that is associated with the level of connectivity; and the network node sending, to the UE, a response to activate the selected DRX operation mode of the UE. 46. The method of claim 45, wherein sending the response is responsive to verifying that a network policy associated with the UE allows for the selected DRX operation mode. 47. A network node for performing discontinuous receive (DRX) operation, comprising:
memory configured to store data and computer-executable instructions; and a processing circuit operatively coupled to the memory; wherein the processing circuit and the memory are configured to:
receive, from a user terminal (UE) operating in a connected state, a request for the DRX operation, wherein the request includes an indication of a level of connectivity of the UE;
select one of a plurality of DRX operation modes that is associated with the level of connectivity; and
send, to the UE, a response to activate the selected DRX operation mode of the UE. 48. The network node of claim 47, wherein the processing circuit and the memory are further configured to send the response responsive to verifying that a network policy associated with the UE allows for the selected DRX operation mode. 49. A computer program product stored in a non-transitory computer readable medium for controlling a network node for discontinuous receive (DRX) operation, the computer program product comprising software instructions which, when run on a processing circuit of the network node, causes the network node to:
receive, from a user terminal (UE) operating in a connected state, a request to start the DRX operation, wherein the request includes an indication of a level of connectivity of the UE; select one of a plurality of DRX operation modes that is associated with the level of connectivity; and send, to the UE, a response to activate the selected DRX operation mode of the UE. 50. The computer program product of claim 49, wherein the software instructions, when run on the processing circuit, further cause the network node to send the response responsive to verifying that a network policy associated with the UE allows for the selected DRX operation mode. 51. A network node for performing discontinuous receive (DRX) operation, the network node comprising:
means for receiving, from a user terminal (UE) operating in a connected state, a request for the DRX operation, wherein the request includes an indication of a level of connectivity of the UE; means for selecting one of a plurality of DRX operation modes that is associated with the level of connectivity; and means for sending, to the UE, a response to activate the selected DRX operation mode of the UE. 52. The network node of claim 51, further comprising means for sending the response responsive to verifying that a network policy associated with the UE allows for the selected DRX operation mode. | Systems and methods of discontinuous operation for wireless devices are provided. In one exemplary embodiment, a method may include preconfiguring ( 1501 ), by a user equipment (UE) ( 1012 ), the UE for discontinuous receive (DRX) operation in a connected state. The DRX operation may include modes of DRX operation of the UE with each mode corresponding to a level of connectivity of the UE. Further, while ( 1503 ) the UE is in the connected state, the method may include determining ( 1505 ), by the UE, the level of connectivity of the UE and sending ( 1507 ), by the UE, to a network node, a request for the DRX operation. Also, the request may include an indication of the level of connectivity of the UE.1-26. (canceled) 27. A method, by a user equipment (UE), for performing discontinuous receive (DRX) operation, the method comprising:
preconfiguring, by the UE, the UE for the DRX operation in a connected state, wherein the DRX operation includes DRX operation modes with each mode corresponding to a level of connectivity of the UE; and while the UE is in the connected state:
determining, by the UE, the level of connectivity of the UE; and
sending, by the UE and to a network node, a request for the DRX operation, the request including an indication of the level of connectivity of the UE. 28. The method of claim 27, wherein the request for the DRX operation includes a request to start the DRX operation. 29. The method of claim 27, wherein the request for the DRX operation includes a request to continue the DRX operation. 30. The method of claim 27, wherein the request for the DRX operation includes an indication of a change in the level of connectivity of the UE. 31. The method of claim 27, further comprising:
monitoring, by the UE, a connection between the UE and the network node; and in response to determining a change in the connection, sending, by the UE and to the network node, a request to stop the DRX operation. 32. The method of claim 31, wherein the change in the connection is associated with at least one of a handoff or a handover. 33. The method of claim 31, wherein monitoring the connection includes monitoring events associated with the connection. 34. The method of claim 31, wherein monitoring the connection includes monitoring a quality of the connection. 35. The method of claim 31, wherein monitoring the connection includes monitoring radio measurements of the connection. 36. The method of claim 27, wherein sending the request is responsive to initiating, by the UE, a random access procedure. 37. The method of claim 27, wherein sending the request for the DRX operation is responsive to sending, by the UE, to the network node, a request to be scheduled. 38. The method of claim 27, wherein the level of connectivity includes a first level of connectivity associated with a first mode of the DRX operation and a second level of connectivity associated with a second mode of the DRX operation. 39. A user equipment (UE) for performing discontinuous receive (DRX) operation, the UE comprising:
memory configured to store data and computer-executable instructions; and a processing circuit operatively coupled to the memory; wherein the processing circuit and the memory are configured to:
preconfigure the UE for the DRX operation in a connected state, wherein the DRX operation includes modes of the DRX operation with each mode corresponding to a level of connectivity of the UE; and
while the UE is in the connected state:
determine the level of connectivity of the UE; and
send, to a network node, a request for the DRX operation, wherein the request includes an indication of the level of connectivity of the UE. 40. The UE of claim 39, wherein the processing circuit and the memory are further configured to:
monitor, by the UE, a connection between the UE and the network node; and in response to determining a change in the connection, send, by the UE, to the network node, a request to stop the DRX operation. 41. A computer program product stored in a non-transitory computer readable medium for controlling a User Equipment (UE) for performing discontinuous receive (DRX) operation, the computer program product comprising software instructions which, when run on a processing circuit of the UE, cause the UE to:
preconfigure itself for the DRX operation in a connected state, wherein the DRX operation includes modes of the DRX operation with each mode corresponding to a level of connectivity of the UE; and while the UE is in the connected state:
determine the level of connectivity of the UE; and
send, to a network node, a request for the DRX operation, wherein the request includes an indication of the level of connectivity of the UE. 42. The computer program product of claim 41, wherein the software instructions, when run on the processing circuit, further cause the UE to:
monitor a connection between the UE and the network node; and in response to determining a change in the connection, send, to the network node, a request to stop the DRX operation. 43. A user equipment (UE) for performing discontinuous receive (DRX) operation, the UE comprising:
means for preconfiguring the UE for the DRX operation in a connected state, wherein the DRX operation includes DRX operation modes with each mode corresponding to a level of connectivity of the UE; and means for, while the UE is in the connected state, determining the level of connectivity of the UE; and means for, while the UE is in the connected state, sending, to a network node, a request for the DRX operation, the request including an indication of the level of connectivity of the UE. 44. The UE of claim 43, further comprising:
means for monitoring a connection between the UE and the network node; and means for, in response to determining a change in the connection, sending, to the network node, a request to stop the DRX operation. 45. A method, by a network node, for performing discontinuous receive (DRX) operation, the method comprising:
the network node receiving, from a user equipment (UE) operating in a connected state, a request for the DRX operation, wherein the request includes an indication of a level of connectivity of the UE; the network node selecting one of a plurality of DRX operation modes that is associated with the level of connectivity; and the network node sending, to the UE, a response to activate the selected DRX operation mode of the UE. 46. The method of claim 45, wherein sending the response is responsive to verifying that a network policy associated with the UE allows for the selected DRX operation mode. 47. A network node for performing discontinuous receive (DRX) operation, comprising:
memory configured to store data and computer-executable instructions; and a processing circuit operatively coupled to the memory; wherein the processing circuit and the memory are configured to:
receive, from a user equipment (UE) operating in a connected state, a request for the DRX operation, wherein the request includes an indication of a level of connectivity of the UE;
select one of a plurality of DRX operation modes that is associated with the level of connectivity; and
send, to the UE, a response to activate the selected DRX operation mode of the UE. 48. The network node of claim 47, wherein the processing circuit and the memory are further configured to send the response responsive to verifying that a network policy associated with the UE allows for the selected DRX operation mode. 49. A computer program product stored in a non-transitory computer readable medium for controlling a network node for discontinuous receive (DRX) operation, the computer program product comprising software instructions which, when run on a processing circuit of the network node, cause the network node to:
receive, from a user equipment (UE) operating in a connected state, a request to start the DRX operation, wherein the request includes an indication of a level of connectivity of the UE; select one of a plurality of DRX operation modes that is associated with the level of connectivity; and send, to the UE, a response to activate the selected DRX operation mode of the UE. 50. The computer program product of claim 49, wherein the software instructions, when run on the processing circuit, further cause the network node to send the response responsive to verifying that a network policy associated with the UE allows for the selected DRX operation mode. 51. A network node for performing discontinuous receive (DRX) operation, the network node comprising:
means for receiving, from a user equipment (UE) operating in a connected state, a request for the DRX operation, wherein the request includes an indication of a level of connectivity of the UE; means for selecting one of a plurality of DRX operation modes that is associated with the level of connectivity; and means for sending, to the UE, a response to activate the selected DRX operation mode of the UE. 52. The network node of claim 51, further comprising means for sending the response responsive to verifying that a network policy associated with the UE allows for the selected DRX operation mode. | 2,400 |
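The claims of this record describe a request/response exchange: the UE reports its level of connectivity, and the network node selects an associated DRX operation mode and responds, subject to network policy (claims 27, 45-46). A minimal illustrative sketch follows; every name, message field, and mode parameter here is hypothetical, since the claims specify no message formats or mode values.

```python
# Hypothetical mapping of connectivity levels to DRX operation modes
# (e.g. a longer sleep cycle for a lower connectivity level). The actual
# modes and cycle lengths are not specified by the claims.
DRX_MODES = {
    "high": {"mode": "short_drx", "cycle_ms": 40},
    "low": {"mode": "long_drx", "cycle_ms": 640},
}

def ue_build_request(level_of_connectivity):
    """UE side: build a DRX request carrying an indication of the UE's
    level of connectivity (claim 27)."""
    return {"type": "drx_request", "connectivity": level_of_connectivity}

def node_handle_request(request, policy_allows):
    """Network-node side: select the DRX operation mode associated with
    the indicated level and respond to activate it only if network
    policy permits (claims 45-46)."""
    mode = DRX_MODES.get(request["connectivity"])
    if mode is None or not policy_allows(mode):
        return {"type": "drx_reject"}
    return {"type": "drx_activate", **mode}

# Example exchange: a UE reporting a "low" connectivity level to a node
# whose policy allows every mode.
req = ue_build_request("low")
resp = node_handle_request(req, policy_allows=lambda m: True)
```

In this sketch the policy check is a plain callable so the same handler can model claim 46's verification step; a rejecting policy simply yields the reject message instead of an activation.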
7,565 | 7,565 | 15,085,562 | 2,488 | Various codecs and methods of using the same are disclosed. In one aspect, a method of processing video data is provided that includes encoding or decoding the video data with a codec in aggressive deployment and correcting one or more errors in the encoding or decoding wherein the error correction includes re-encoding or re-decoding the video data in a non-aggressive deployment or generating a skip picture. | 1. A method of processing video data, comprising:
encoding or decoding the video data with a codec in aggressive deployment; and correcting one or more errors in the encoding or decoding wherein the error correction includes re-encoding or re-decoding the video data in a non-aggressive deployment or generating a skip picture. 2. The method of claim 1, comprising detecting the one or more errors. 3. The method of claim 2, wherein the error detection comprises also encoding or decoding the video data in non-aggressive deployment and comparing the non-aggressive deployment encoded or decoded video data with the aggressive deployment encoded or decoded video data. 4. The method of claim 2, wherein the error detection comprises checking the video data for syntax errors or semantics errors. 5. The method of claim 1, wherein the encoding or decoding of the video data is timed by a watchdog timer that is operable to time encoding or decoding operations and trigger an error signal in the event a duration of a given encoding or decoding operation exceeds some threshold that is indicative of a hang. 6. The method of claim 1, wherein the error correction comprises applying blind error correction to the video data. 7. The method of claim 6, wherein the encoded or decoded video data comprises a series of video frames and the blind error correction comprises selectively refreshing at least portions of some of the video frames with video data encoded or decoded in non-aggressive deployment. 8. A computing device, comprising:
a codec operable to encode or decode video data in aggressive deployment; and logic and/or instructions to correct one or more errors in the encoding or decoding wherein the error correction includes re-encoding or re-decoding the video data in a non-aggressive deployment or generating a skip picture. 9. The computing device of claim 8, comprising logic and/or instructions to detect the one or more errors. 10. The computing device of claim 9, wherein the error detection logic and/or instructions is operable to encode or decode the video data in non-aggressive deployment and compare the non-aggressive deployment encoded or decoded video data with the aggressive deployment encoded or decoded video data. 11. The computing device of claim 9, wherein the error detection logic and/or instructions is operable to check the video data for syntax errors or semantics errors. 12. The computing device of claim 8, wherein the computing device includes a watchdog timer to time encoding or decoding operations and trigger an error signal in the event a duration of a given encoding or decoding operation exceeds some threshold that is indicative of a hang. 13. The computing device of claim 8, wherein the error correction logic and/or instructions is operable to apply blind error correction to the video data. 14. The computing device of claim 13, wherein the encoded or decoded video data comprises a series of video frames and the blind error correction comprises selectively refreshing at least portions of some of the video frames with video data encoded or decoded in non-aggressive deployment. 15. A non-transitory computer readable medium having computer readable instructions for performing a method of processing video data, comprising:
encoding or decoding the video data with a codec in aggressive deployment; and correcting one or more errors in the encoding or decoding wherein the error correction includes re-encoding or re-decoding the video data in a non-aggressive deployment or generating a skip picture. 16. The non-transitory computer readable medium of claim 15, comprising instructions for detecting the one or more errors. 17. The non-transitory computer readable medium of claim 16, wherein the error detection comprises also encoding or decoding the video data in non-aggressive deployment and comparing the non-aggressive deployment encoded or decoded video data with the aggressive deployment encoded or decoded video data. 18. The non-transitory computer readable medium of claim 16, wherein the error detection comprises checking the video data for syntax errors or semantics errors. 19. The non-transitory computer readable medium of claim 15, wherein the error correction comprises using a watchdog timer that is operable to time encoding or decoding operations and trigger an error signal in the event a duration of a given encoding or decoding operation exceeds some threshold that is indicative of a hang. 20. The non-transitory computer readable medium of claim 15, wherein the error correction comprises applying blind error correction to the video data. 21. The non-transitory computer readable medium of claim 20, wherein the encoded or decoded video data comprises a series of video frames and the blind error correction comprises selectively refreshing at least portions of some of the video frames with video data encoded or decoded in non-aggressive deployment. | Various codecs and methods of using the same are disclosed.
In one aspect, a method of processing video data is provided that includes encoding or decoding the video data with a codec in aggressive deployment and correcting one or more errors in the encoding or decoding wherein the error correction includes re-encoding or re-decoding the video data in a non-aggressive deployment or generating a skip picture. 1. A method of processing video data, comprising:
encoding or decoding the video data with a codec in aggressive deployment; and correcting one or more errors in the encoding or decoding wherein the error correction includes re-encoding or re-decoding the video data in a non-aggressive deployment or generating a skip picture. 2. The method of claim 1, comprising detecting the one or more errors. 3. The method of claim 2, wherein the error detection comprises also encoding or decoding the video data in non-aggressive deployment and comparing the non-aggressive deployment encoded or decoded video data with the aggressive deployment encoded or decoded video data. 4. The method of claim 2, wherein the error detection comprises checking the video data for syntax errors or semantics errors. 5. The method of claim 1, wherein the encoding or decoding of the video data is timed by a watchdog timer that is operable to time encoding or decoding operations and trigger an error signal in the event a duration of a given encoding or decoding operation exceeds some threshold that is indicative of a hang. 6. The method of claim 1, wherein the error correction comprises applying blind error correction to the video data. 7. The method of claim 6, wherein the encoded or decoded video data comprises a series of video frames and the blind error correction comprises selectively refreshing at least portions of some of the video frames with video data encoded or decoded in non-aggressive deployment. 8. A computing device, comprising:
a codec operable to encode or decode video data in aggressive deployment; and logic and/or instructions to correct one or more errors in the encoding or decoding wherein the error correction includes re-encoding or re-decoding the video data in a non-aggressive deployment or generating a skip picture. 9. The computing device of claim 8, comprising logic and/or instructions to detect the one or more errors. 10. The computing device of claim 9, wherein the error detection logic and/or instructions is operable to encode or decode the video data in non-aggressive deployment and compare the non-aggressive deployment encoded or decoded video data with the aggressive deployment encoded or decoded video data. 11. The computing device of claim 9, wherein the error detection logic and/or instructions is operable to check the video data for syntax errors or semantics errors. 12. The computing device of claim 8, wherein the computing device includes a watchdog timer to time encoding or decoding operations and trigger an error signal in the event a duration of a given encoding or decoding operation exceeds some threshold that is indicative of a hang. 13. The computing device of claim 8, wherein the error correction logic and/or instructions is operable to apply blind error correction to the video data. 14. The computing device of claim 13, wherein the encoded or decoded video data comprises a series of video frames and the blind error correction comprises selectively refreshing at least portions of some of the video frames with video data encoded or decoded in non-aggressive deployment. 15. A non-transitory computer readable medium having computer readable instructions for performing a method of processing video data, comprising:
encoding or decoding the video data with a codec in aggressive deployment; and correcting one or more errors in the encoding or decoding wherein the error correction includes re-encoding or re-decoding the video data in a non-aggressive deployment or generating a skip picture. 16. The non-transitory computer readable medium of claim 15, comprising instructions for detecting the one or more errors. 17. The non-transitory computer readable medium of claim 16, wherein the error detection comprises also encoding or decoding the video data in non-aggressive deployment and comparing the non-aggressive deployment encoded or decoded video data with the aggressive deployment encoded or decoded video data. 18. The non-transitory computer readable medium of claim 16, wherein the error detection comprises checking the video data for syntax errors or semantics errors. 19. The non-transitory computer readable medium of claim 15, wherein the error correction comprises using a watchdog timer that is operable to time encoding or decoding operations and trigger an error signal in the event a duration of a given encoding or decoding operation exceeds some threshold that is indicative of a hang. 20. The non-transitory computer readable medium of claim 15, wherein the error correction comprises applying blind error correction to the video data. 21. The non-transitory computer readable medium of claim 20, wherein the encoded or decoded video data comprises a series of video frames and the blind error correction comprises selectively refreshing at least portions of some of the video frames with video data encoded or decoded in non-aggressive deployment. | 2,400 |
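The codec record above claims a flow of: encode in an "aggressive" deployment, detect errors by also encoding in a non-aggressive deployment and comparing the two outputs (claim 3), and correct by re-encoding non-aggressively or generating a skip picture (claim 1). A toy sketch of that control flow follows; the encoder is a stand-in function with hypothetical behavior (real codecs are far more involved), and the `fragile` marker merely simulates a frame that the aggressive path corrupts.

```python
def encode(frame, aggressive):
    """Hypothetical encoder: the aggressive path trades robustness for
    speed and may corrupt some frames (simulated via a 'fragile' marker)."""
    if aggressive and frame.get("fragile"):
        return {"bits": b"corrupt"}
    return {"bits": b"ok:" + frame["id"].encode()}

def encode_with_correction(frame, reference_available=True):
    """Encode aggressively, detect errors by comparison against a
    non-aggressive encode, and fall back per the claimed correction."""
    fast = encode(frame, aggressive=True)
    safe = encode(frame, aggressive=False)
    if fast["bits"] == safe["bits"]:
        return fast                      # no error detected
    if reference_available:
        return safe                      # re-encode in non-aggressive deployment
    return {"bits": b"skip"}             # generate a skip picture instead

# A fragile frame: the aggressive encode diverges, so the
# non-aggressive result is used.
out = encode_with_correction({"id": "f1", "fragile": True})
```

Note that this comparison-based detection doubles the encoding work, which is why the claims also recite cheaper detectors (syntax/semantics checks, a watchdog timer for hangs) as alternatives.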
7,566 | 7,566 | 13,733,634 | 2,456 | A system and method in a building or vehicle for an actuator operation in response to a sensor according to a control logic, the system comprising a router or a gateway communicating with a device associated with the sensor and a device associated with the actuator over in-building or in-vehicle networks, and an external Internet-connected control server associated with the control logic implementing a PID closed linear control loop and communicating with the router over external network for controlling the in-building or in-vehicle phenomenon. The sensor may be a microphone or a camera, and the system may include voice or image processing as part of the control logic. A redundancy is used by using multiple sensors or actuators, or by using multiple data paths over the building or vehicle internal or external communication. The networks may be wired or wireless, and may be BAN, PAN, LAN, WAN, or home networks. | 1. A system for commanding an actuator operation in response to a sensor response associated with a phenomenon according to a control logic, for use with one or more in-building or in-vehicle networks for communication in a building or in a vehicle, and an external network at least in part external to the building or to the vehicle, the system comprising:
a router in the building or in the vehicle, coupled between the one or more in-building or in-vehicle networks and the external network, and operative to pass digital data between the in-building and external networks; a first device in the building or in the vehicle comprising, or connectable to, the sensor that responds to the phenomenon, said first device is operative to transmit a sensor data corresponding to the phenomenon to said router over the one or more in-building or in-vehicle networks; a second device in the building or in the vehicle comprising, or connectable to, an actuator that affects the phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-building or in-vehicle networks; and a control server external to the building or to the vehicle storing said control logic and coupled to said router over the Internet via the external network, wherein said control server is operative to receive the sensor data from said router, to produce actuator commands in response to the received sensor data according to said control logic, and to transmit the actuator commands to said second device via said router. 2. The system according to claim 1, wherein said router is a gateway or is further operative for IP routing, NAT, DHCP, firewalling, parental control, rate converting, fault isolating, protocol converting or translating, or proxy serving. 3.
The system according to claim 1 further comprising a third device external to the building or to the vehicle comprising an additional sensor that responds to a distinct or same phenomenon, the third device is operative to transmit an additional sensor data corresponding to the distinct phenomenon to said control server over the external network or over a network distinct from the external network, wherein said control server is operative to receive the additional sensor data, and to produce actuator commands in response to the received additional sensor data according to said control logic. 4. The system according to claim 1 further comprising a third device external to the building or to the vehicle comprising an additional actuator that responds to received additional actuator commands, the third device is operative to receive the additional actuator commands from said control server over the external network or over a network distinct from the external network, wherein said control server is operative to transmit said additional actuator commands to said third device. 5. The system according to claim 1, wherein said control logic is affecting a control loop for controlling the phenomenon, and wherein the control loop is a closed linear control loop where the sensor data serves as a feedback to command the actuator based on the loop deviation from a setpoint or a reference value. 6.
The system according to claim 5, wherein the closed control loop is a proportional-based, an integral-based, a derivative-based, or a Proportional, Integral, and Derivative (PID) based control loop, wherein the control loop uses feed-forward, Bistable, Bang-Bang, Hysteretic, or fuzzy logic based control, or wherein: the control loop involves randomness based on random numbers; and the system further comprises a random number generator for generating random numbers, and wherein said random number generator is hardware-based using thermal noise, shot noise, nuclear decaying radiation, photoelectric effect, or quantum phenomena, or wherein said random number generator is software-based and executes an algorithm for generating pseudo-random numbers. 7. The system according to claim 5, wherein the setpoint is fixed, set by a user, or is time dependent. 8. The system according to claim 5 further comprising an additional sensor responsive to a phenomenon distinct from the phenomenon, and wherein the setpoint is dependent upon the output of said additional sensor. 9. The system according to claim 1 wherein at least one of the in-building or in-vehicle networks is using in-wall wiring that is connected to an outlet as a network medium, and wherein said first device, said second device, or said router is operative to communicate over the in-wall wiring. 10. The system according to claim 9 wherein an enclosure of the sensor, the actuator, said first device, said second device, or said router, consists of, comprises, or is integrated with, the outlet or a plug-in module pluggable to the outlet. 11. The system according to claim 9 wherein the outlet is a telephone, LAN, AC power, or CATV outlet, and the in-wall wiring is respectively a telephone wire pair, a LAN cable, an AC power cable, or a CATV coaxial cable. 12.
The system according to claim 9 wherein the in-wall wiring is carrying a power signal, and wherein the sensor, the actuator, said first device, said second device, or said router is at least in part powered from the power signal. 13. The system according to claim 1, wherein the sensor is a piezoelectric sensor that includes single crystal material or a piezoelectric ceramic and uses a transverse, longitudinal, or shear effect mode of the piezoelectric effect. 14. The system according to claim 1, further comprising multiple sensors arranged as a directional sensor array operative to estimate the number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of the phenomenon impinging the sensor array, and wherein said control logic includes processing of the sensor array outputs. 15. The system according to claim 1, wherein a single component consists of, or is part of, the sensor and the actuator. 16. The system according to claim 1, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, and wherein the thermoelectric sensor consists of, or comprises, a Positive Temperature Coefficient (PTC) thermistor, a Negative Temperature Coefficient (NTC) thermistor, a thermocouple, a quartz crystal, or a Resistance Temperature Detector (RTD). 17. The system according to claim 1, wherein the sensor consists of, or comprises, a nanosensor, a crystal, or a semiconductor, or wherein: the sensor is ultrasonic based, the sensor is an eddy-current sensor, the sensor is a proximity sensor, the sensor is a bulk or surface acoustic sensor, or the sensor is an atmospheric or an environmental sensor. 18. The system according to claim 1, wherein the sensor is a radiation sensor that responds to radioactivity, nuclear radiation, alpha particles, beta particles, or gamma rays, and is based on gas ionization. 19.
The system according to claim 1, wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light, the invisible light is infrared, ultraviolet, X-rays, or gamma rays, and wherein the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell. 20. The system according to claim 19, wherein the photoelectric sensor is based on Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element. 21. The system according to claim 1, wherein the sensor is a photosensitive image sensor array comprising multiple photoelectric sensors, for capturing an image and producing electronic image information representing the image, and the system further comprising one or more optical lenses for focusing the received light and to guide the image, and wherein the image sensor is disposed approximately at an image focal point plane of the one or more optical lenses for properly capturing the image. 22. The system according to claim 21, further comprising an image processor coupled to the image sensor for providing a digital data video signal according to a digital video format, the digital video signal carrying digital data video based on the captured images, and wherein the digital video format is based on one out of: TIFF (Tagged Image File Format), RAW format, AVI, DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards. 23.
The system according to claim 22 further comprising an intraframe or interframe compression based video compressor coupled to the image sensor for lossy or non-lossy compressing the digital data video, wherein the compression is based on a standard compression algorithm which is one or more out of JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264 and ITU-T CCIR 601. 24. The system according to claim 1, wherein the sensor is an electrochemical sensor that responds to an object chemical structure, properties, composition, or reactions. 25. The system according to claim 24, wherein the electrochemical sensor is a pH meter or a gas sensor responding to a presence of radon, hydrogen, oxygen, or Carbon-Monoxide (CO), or wherein the electrochemical sensor is based on optical detection or on ionization and is a smoke, a flame, or a fire detector, or is responsive to combustible, flammable, or toxic gas. 26. The system according to claim 1, wherein the sensor is a physiological sensor that responds to parameters associated with a live body, and is external to the sensed body, implanted inside the sensed body, attached to the sensed body, or wearable on the sensed body. 27. The system according to claim 26, wherein the physiological sensor is responding to body electrical signals and is an Electroencephalography (EEG) or an Electrocardiography (ECG) sensor. 28. The system according to claim 26, wherein the physiological sensor is responding to oxygen saturation, gas saturation, or a blood pressure in the sensed body. 29. The system according to claim 1, wherein the sensor is an electroacoustic sensor that responds to an audible or inaudible sound. 30.
The system according to claim 29, wherein the electroacoustic sensor is an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing the incident sound based motion of a diaphragm or a ribbon, and the microphone consists of, or comprises, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone. 31. The system according to claim 1 wherein said router, said first device, said second device, the sensor, or the actuator are addressable in a digital data network using distinct locally administered addresses or universally administered digital addresses stored in a volatile or non-volatile memory of the respective device and uniquely identifying the respective device in the digital data network, and the digital data network is one or more of the in-building or in-vehicle networks, the external network, a WAN, a LAN, a PAN, a BAN, a home network, or the Internet. 32. The system according to claim 31 wherein the digital address is a MAC layer address that is MAC-48, EUI-48, or EUI-64 address type. 33. The system according to claim 31 wherein the digital address is a layer 3 address and is a static or dynamic IP address that is IPv4 or IPv6 type address. 34. The system according to claim 31 wherein the digital address is autonomously assigned or is assigned by another device via a communication interface using DHCP. 35. The system according to claim 34 wherein the digital address of said first or second device is assigned by said router or control server via the in-building or in-vehicle networks or the external network. 36. The system according to claim 34 wherein the digital address of said router is assigned by said control server via the external network. 37.
The system according to claim 31 wherein said router, said first device, or said second device are addressable in one or more digital data networks using multiple digital addresses, and wherein a distinct digital address is assigned to each network interface of the respective device. 38. The system according to claim 1, wherein said router, said first device, or said second device are connectable to be powered from a DC or AC power source, and further comprising a power supply housed with the respective device enclosure, and coupled to be powered from the power source and to power at least part of said respective device. 39. The system according to claim 38, wherein the power source is a primary or rechargeable battery, or wherein the AC power source is mains AC power, and wherein said respective device further comprising an AC power connector connectable to an AC power outlet. 40. The system according to claim 38, wherein the power source is an electrical power generator for generating an electric power from the phenomenon or from a distinct another phenomenon. 41. The system according to claim 40, wherein a single component serves as the sensor and as the electrical power generator. 42. The system according to claim 40, wherein the electrical power generator is an electromechanical generator for harvesting kinetic energy, or wherein the electrical power generator is a solar cell or a Peltier effect based thermoelectric device. 43. The system according to claim 38, wherein the power source is internal or external to said respective enclosure of said router, said first device, or said second device. 44. 
An apparatus for coupling between an internal network extending substantially within an enclosed environment and an external network, coupled to the Internet for communication with a control server, extending substantially outside the enclosed environment, and for use with a sensor disposed in the enclosed environment that senses a condition in the enclosed environment and provides sensor data corresponding to the condition, and an actuator disposed to affect the condition in the enclosed environment in response to received actuator commands, said apparatus comprising:
a first port for coupling to the internal network; a first modem coupled to said first port for communication over the internal network; a second port for coupling to the external network; a second modem coupled to said second port for communication over the external network; a router coupled between said first and second modems so as to pass information between the internal and external networks, and configured to deliver the sensor data from the internal network to the control server over the external network and to deliver the actuator commands from the control server to the actuator over the internal network; and a housing enclosing said first and second ports, said first and second modems, and said router. 45. The apparatus according to claim 44, wherein said apparatus is a gateway or is further operative for IP routing, NAT, DHCP, firewalling, parental control, rate converting, fault isolating, protocol converting or translating, or proxy serving. 46. The apparatus according to claim 44 further comprising in said housing an additional sensor that senses a second condition that is distinct from, or same as, the condition, and provides additional sensor data corresponding to the second condition, and said apparatus further operative to transmit the additional sensor data to the control server over the external network or over a network distinct from the external network. 47. The apparatus according to claim 44 further comprising in said housing an additional actuator that affects a second condition that is distinct from, or same as, the condition, in response to received additional actuator commands, and said apparatus further operative to receive the additional actuator commands from the control server over the external network or over a network distinct from the external network. 48.
The apparatus according to claim 44 further operative for producing actuator commands in response to the sensor data and for delivering the actuator commands to the actuator over the internal network, and wherein a control logic affects a control loop for controlling the condition, and wherein the control loop is a closed linear control loop where the sensor data serves as a feedback to command the actuator based on a loop deviation from a setpoint or a reference value. 49. The apparatus according to claim 48, wherein the closed control loop is a proportional-based, an integral-based, a derivative-based, or a Proportional, Integral, and Derivative (PID) based control loop, wherein the control loop uses feed-forward, Bistable, Bang-Bang, Hysteretic, or fuzzy logic based control, or wherein: the control loop involves randomness based on random numbers; and the apparatus further comprises a random number generator for generating random numbers, and wherein said random number generator is hardware-based using thermal noise, shot noise, nuclear decay radiation, photoelectric effect, or quantum phenomena, or wherein said random number generator is software-based and executes an algorithm for generating pseudo-random numbers. 50. The apparatus according to claim 48, wherein the setpoint is fixed, set by a user, or is time dependent. 51. The apparatus according to claim 48 further couplable to, or comprising in said housing, an additional sensor responsive to a second condition distinct from the condition, and wherein the setpoint is dependent upon an output of the additional sensor. 52. The apparatus according to claim 44 wherein the internal or the external network is using in-wall wiring that is connected to an outlet as a network medium, and wherein said apparatus is operative to communicate over the in-wall wiring. 53. 
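The closed control loop of claims 48 to 50 can be illustrated with a short numerical sketch: the sensor data serves as feedback, and the actuator command is derived from the loop deviation (error) from a setpoint using a PID law. Everything below, including the gains, the setpoint, and the toy plant model, is an illustrative assumption and not part of the claims.

```python
# Minimal PID control-loop sketch (claims 48-50): sensor data is the
# feedback, the command is based on the loop deviation from a setpoint.
# Gains, setpoint, and the toy plant model are assumed for the demo.

class PIDController:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def command(self, sensor_data, dt):
        """Return an actuator command from the deviation of the feedback."""
        error = self.setpoint - sensor_data          # loop deviation
        self.integral += error * dt                  # integral term
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: the sensed condition moves in proportion to the command.
pid = PIDController(kp=0.8, ki=0.2, kd=0.05, setpoint=21.0)  # e.g. a temperature
condition = 15.0
for _ in range(200):
    actuator_cmd = pid.command(condition, dt=0.1)
    condition += 0.1 * actuator_cmd                  # actuator affects the condition
```

With these assumed gains the loop settles near the setpoint; a time-dependent setpoint (claim 50) would simply update `pid.setpoint` each step.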
The apparatus according to claim 52 wherein said housing consists of, comprises, or is integrated with, the outlet or a plug-in module pluggable to the outlet. 54. The apparatus according to claim 52 wherein the outlet is a telephone, LAN, AC power, or CATV outlet, and the in-wall wiring is respectively a telephone wire pair, a LAN cable, an AC power cable, or a CATV coaxial cable, and wherein said first or second modem is operative to respectively communicate over the telephone wire pair, the LAN cable, the AC power cable, or the CATV coaxial cable. 55. The apparatus according to claim 52 wherein the in-wall wiring is carrying a power signal, and wherein said apparatus is at least in part powered from the power signal. 56. The apparatus according to claim 44, wherein the sensor is a photosensitive image sensor array comprising multiple photoelectric sensors, for capturing an image and producing electronic image information representing the image, and said apparatus further comprising an image processor coupled to the image sensor for providing a digital video data signal according to a digital video format, the digital video signal carrying digital video data based on the captured images, and wherein the digital video format is based on one out of: TIFF (Tagged Image File Format), RAW format, AVI, DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards. 57. The apparatus according to claim 56 further comprising an intraframe or interframe compression based video compressor coupled to said image sensor for lossy or non-lossy compressing the digital video data, wherein the compression is based on a standard compression algorithm which is one or more out of JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264 and ITU-T CCIR 601. 58. 
The apparatus according to claim 44, wherein said apparatus is operative to calculate or provide a space-dependent characteristic of the sensed condition, that is a pattern, a linear density, a surface density, a volume density, a flux density, a current, a direction, a rate of change in a direction, or a flow, of the condition. 59. The apparatus according to claim 44, wherein the internal or external network is using a cable for carrying a communication signal, and wherein said first or second port consists of a connector for connecting to the cable, and wherein the cable is connectable to simultaneously carry a DC or AC power signal and the communication signal. 60. The apparatus according to claim 59, wherein said apparatus is further operative to supply at least part of the power signal or to be at least in part powered from the power signal. 61. The apparatus according to claim 59, wherein the power signal is carried over dedicated wires in the cable, and wherein the wires are distinct from the wires in the cable carrying the communication signal. 62. The apparatus according to claim 59, wherein the power signal and the communication signal are concurrently carried over same wires in the cable, and wherein said apparatus further comprising a power/data splitter arrangement having first, second and third ports, wherein only the communication signal is passed between the first and second ports, and only the power signal is passed between the first and third ports, and wherein the first port is coupled to the connector. 63. 
The apparatus according to claim 62, wherein the power and communication signals are carried using Frequency Division Multiplexing (FDM), where the power signal is carried over a power signal frequency or a power frequency band, and the communication signal is carried over a frequency band above and distinct from the power signal frequency or the power frequency band, and wherein the power/data splitter comprising an HPF between the first and second ports and a LPF between the first and third ports, or wherein said power/data splitter comprising a transformer and a capacitor connected to the transformer windings. 64. The apparatus according to claim 62, wherein said power and digital data signals are carried using a phantom scheme, and said power/data splitter comprising at least two transformers having a center-tap connection. 65. The apparatus according to claim 59, wherein said power and digital data signals are carried substantially according to IEEE 802.3af-2003 or IEEE 802.3at-2009 standards. 66. The apparatus according to claim 44 wherein said second port and said second modem consist of a first network interface, for use with an additional external network and for communicating with the control server over multiple data paths, said apparatus further comprising a second network interface consisting of a third port for coupling to the additional external network, and a third modem coupled to said third port for communication over the additional external network. 67. The apparatus according to claim 66 wherein said first and second network interfaces are of a same type. 68. The apparatus according to claim 66 wherein the external network is based on a conductive medium, and wherein said second port is a connector. 69. The apparatus according to claim 68 wherein said connector is selected from a group consisting of a coaxial connector, a twisted-pair connector, an AC power connector, and a telephone connector. 70. 
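The FDM arrangement of claim 63 (a low-band power signal and a higher-band communication signal sharing one wire, separated by an LPF/HPF pair) can be mimicked numerically. The sample rate, tone frequencies, and the one-pole filter below are assumptions chosen for the demo, not values from the claims.

```python
import math

# Numerical sketch of claim 63's FDM splitter: a low-frequency "power"
# tone and a higher-frequency "communication" tone share one signal;
# a low-pass filter recovers the power component and the complementary
# high-pass recovers the communication component. All frequencies and
# the filter cutoff are illustrative assumptions.

fs = 10_000.0                       # samples per second (assumed)
f_power, f_data = 50.0, 2_000.0     # e.g. AC mains band vs. carrier band
n = 2_000

combined = [math.sin(2 * math.pi * f_power * k / fs)
            + 0.5 * math.sin(2 * math.pi * f_data * k / fs)
            for k in range(n)]

# One-pole low-pass: y[k] = y[k-1] + alpha * (x[k] - y[k-1])
fc = 300.0                          # crossover between the two bands (assumed)
alpha = (2 * math.pi * fc / fs) / (2 * math.pi * fc / fs + 1)

low, y = [], 0.0
for x in combined:
    y += alpha * (x - y)
    low.append(y)                   # "power" path (first-to-third port)

high = [x - l for x, l in zip(combined, low)]   # "data" path (first-to-second)
```

After the filter settles, `low` carries mostly the 50 Hz tone and `high` mostly the 2 kHz tone; a hardware splitter does the same job with passive HPF/LPF networks or a transformer and capacitor, per the claim.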
The apparatus according to claim 66 wherein the external network is on a non-conductive medium, and wherein said second port is a non-conductive coupler. 71. The apparatus according to claim 70 wherein the non-conductive coupler is selected from a group consisting of an antenna, a light emitter, a light detector, a microphone, a speaker, and a fiber-optics connector. 72. The apparatus according to claim 66 wherein the external network is based on conductive medium and said second port is a connector, and wherein the additional external network is based on a non-conductive medium, and wherein said third port is a non-conductive coupler. 73. The apparatus according to claim 66 wherein said second and third modems are of different scales selected from a group consisting of NFC, PAN, LAN, MAN or WAN modems, wherein said second and third modems use different modulation schemes selected from a group consisting of AM, FM, and PM, wherein said second and third modems use different duplexing schemes selected from a group consisting of half-duplex, full-duplex, and unidirectional, wherein said second modem is packet-based and said third modem is circuit-switched, or wherein said second port and said third port are the same port used by said first and second network interfaces. 74. The apparatus according to claim 73 wherein said first and second network interfaces are operative to communicate over a same network using FDM, where said first network interface is using a first frequency band and said second network interface is using a second frequency band, that is overlapping or non-overlapping with the first frequency band. 75. 
The apparatus according to claim 66 further operative to send a packet to the control server via said first and second network interfaces to be carried over two distinct data paths, the packet comprising a source address, a destination address, an information type, and an information content, and the packet is sent via said first or second network interfaces selected by a fixed, adaptive, or dynamic selection mechanism. 76. The apparatus according to claim 75 wherein a same packet is sent via said first and second network interfaces. 77. The apparatus according to claim 75 wherein a distinct number is assigned to said first and second network interfaces, and wherein said selection mechanism is using, or based on, the assigned numbers. 78. The apparatus according to claim 77 wherein the assigned numbers represent priority levels associated with said network interfaces, and the network interface having the highest priority level is selected. 79. The apparatus according to claim 77 wherein the network interface is randomly selected from said first and second network interfaces, or wherein the selection mechanism is based on alternate selection. 80. The apparatus according to claim 77 wherein the assigned numbers are based on the associated networks types or attributes or the performance history, or wherein the assigned numbers are based on the current or past associated networks data rates, transfer delays, networks mediums or networks mediums types, qualities, duplexing schemes, line codes, modulation schemes, switching mechanisms, throughputs, or usages. 81. The apparatus according to claim 77 wherein the network interface is selected based on the packet source address, based on the packet destination address, based on the packet information type, or based on the packet information content. 82. The apparatus according to claim 44 further operative to analyze the sensor data versus the actuator commands. 83. 
The apparatus according to claim 82, wherein the sensor transfer function is characterized as S(s), the actuator transfer function is characterized as C(s), the actuator command is characterized as A(s), and the sensor data is characterized as F(s), and wherein the analysis includes a calculation of F(s)/[S(s)*A(s)*C(s)]. 84. The apparatus according to claim 82, wherein the analysis is used to estimate or determine a condition characteristic or parameter. 85. The apparatus according to claim 82, wherein the analysis is used as sensor data by a control logic, and the apparatus is further periodically operative for initiating and transmitting actuator commands and for analyzing the sensor data versus the transmitted actuator commands. 86. The apparatus according to claim 44 further integrated in part or entirely in an appliance. 87. The apparatus according to claim 44 wherein the internal network is a Body Area Network (BAN), a Personal Area Network (PAN), or Local Area Network (LAN), and wherein said first port is respectively a BAN, PAN, or LAN port and said first modem is respectively a BAN, PAN, or LAN modem. 88. The apparatus according to claim 87 wherein: the LAN is a wired LAN using a wired LAN medium and said LAN port is a LAN connector, and wherein: the LAN is Ethernet based; and the wired LAN is according to, or based on, IEEE 802.3-2008 standard. 89. The apparatus according to claim 44 wherein the external network is a packet-based or a circuit-switched-based Wide Area Network (WAN), and wherein said second port is a WAN port and said second modem is a WAN transceiver. 90. The apparatus according to claim 44 wherein the enclosed environment is a vehicle and said housing is attachable to the vehicle body, and wherein said apparatus communicates with another vehicle or a roadside unit external to the vehicle over the external network, and the condition is in the vehicle, external to the vehicle, or associated with surroundings around the vehicle. 91. 
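The ratio in claim 83 can be sketched numerically: with the sensor modeled as S(s), the actuator as C(s), the command as A(s), and the sensor data as F(s), the quantity F(s)/[S(s)*A(s)*C(s)] isolates whatever sits between command and measurement. The first-order models and the hypothetical plant P(s) below are illustrative assumptions, not taken from the claims.

```python
# Sketch of claim 83's analysis: evaluate F(s)/[S(s)*A(s)*C(s)] at a
# point s = jw on the imaginary axis. All transfer functions here are
# assumed first-order models for illustration only.

def S(s):   # assumed sensor transfer function (first-order lag)
    return 1.0 / (1.0 + 0.05 * s)

def C(s):   # assumed actuator transfer function
    return 2.0 / (1.0 + 0.2 * s)

def A(s):   # assumed Laplace transform of the actuator command (unit step)
    return 1.0 / s

def plant_estimate(F, s):
    """Claim 83's ratio: strips the known sensor/actuator/command dynamics."""
    return F(s) / (S(s) * A(s) * C(s))

# If the measured data is F(s) = S(s)*A(s)*C(s)*P(s) for some unknown
# condition dynamics P(s), the ratio recovers P(s) exactly.
def P(s):   # hypothetical condition ("plant") dynamics
    return 3.0 / (1.0 + s)

def F(s):
    return S(s) * A(s) * C(s) * P(s)

s = 1j * 2.0                        # evaluate at angular frequency 2 rad/s
est = plant_estimate(F, s)
```

This is how such an analysis can "estimate or determine a condition characteristic or parameter" (claim 84): the recovered P(s) characterizes the condition's own dynamics.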
The apparatus according to claim 90, wherein the vehicle is one out of a bicycle, a car, a motorcycle, a train, a ship, an aircraft, a boat, a spacecraft, a submarine, a dirigible, an electric scooter, a subway, a trolleybus, a tram, a sailboat, a yacht, and an airplane. 92. The apparatus according to claim 90, wherein the vehicle is an automobile, and wherein said apparatus is coupled to monitor or control an Engine Control Unit (ECU), a Transmission Control Unit (TCU), an Anti-Lock Braking System (ABS), or Body Control Modules (BCM) of the automobile. 93. The apparatus according to claim 90, wherein the internal network is a vehicle bus that is according to, or based on, Controller Area Network (CAN) or Local Interconnect Network (LIN). 94. The apparatus according to claim 90, wherein the vehicle further comprising an On-Board Diagnostics (OBD) system, and said apparatus is coupled to or integrated with the OBD system. 95. The apparatus according to claim 94, further operative to communicate to the control server information regarding fuel and air metering, ignition system, misfire, auxiliary emission control, vehicle speed and idle control, transmission, on-board computer, fuel level, relative throttle position, ambient air temperature, accelerator pedal position, air flow rate, fuel type, oxygen level, fuel rail pressure, engine oil temperature, fuel injection timing, engine torque, engine coolant temperature, intake air temperature, fuel pressure, injection pressure, turbocharger pressure, boost pressure, exhaust pressure, exhaust gas temperature, engine run time, NOx sensor, manifold surface temperature, or a Vehicle Identification Number (VIN). 96. 
A control system for commanding an actuator operation in response to processing of an image according to a control logic, for use with one or more in-building or in-vehicle networks for communication in a building or in a vehicle, and an external network at least in part external to the building or to the vehicle, the system comprising:
a router in the building or in the vehicle, connected to the one or more in-building or in-vehicle networks and to the external network, and operative to pass digital data between said in-building or in-vehicle and external networks; a first device in the building or in the vehicle comprising an image sensor for capturing still or video image, the first device is operative to transmit digital data corresponding to said captured still or video image to said router over said one or more in-building or in-vehicle networks; a second device in the building or in the vehicle comprising an actuator that affects a phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-building or in-vehicle networks; a control server external to the building or to the vehicle storing said control logic, and communicatively coupled to said router over the Internet via said external network; and an image processor having an output for processing said captured still or video image, wherein said control server is operative to produce actuator commands in response to the output of said image processor according to said control logic, and to transmit the actuator commands to said second device via said router, and wherein said image processor is entirely or in part in said first device, said router, or said control server. 97. A control system for commanding an actuator operation according to a control logic, in response to processing of a voice, for use with one or more in-building or in-vehicle networks for communication in a building or in a vehicle, and an external network at least in part external to the building or to the vehicle, the system comprising:
a router in the building or in the vehicle, connected to the one or more in-building or in-vehicle networks and to the external network, and operative to pass digital data between said in-building or in-vehicle and external networks; a first device in the building or in the vehicle comprising a microphone for sensing voice, the first device is operative to transmit digital data corresponding to said sensed voice to said router over said one or more in-building or in-vehicle networks; a second device in the building or in the vehicle comprising an actuator that affects a phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-building or in-vehicle networks; a control server external to the building or to the vehicle storing said control logic, and communicatively coupled to said router over the Internet via said external network; and a voice processor having an output for processing said voice, wherein said control server is operative to produce actuator commands in response to the output of said voice processor according to said control logic and to transmit the actuator commands to said second device via said router, and wherein said voice processor is entirely or in part in said first device, said router, or said control server. 98. The system according to claim 97 wherein at least one of the in-building or in-vehicle networks is a Body Area Network (BAN), at least one of said router, said first device, and said second device further comprising a BAN interface, and said BAN interface includes a BAN port and a BAN transceiver. 99. 
The system according to claim 98 wherein the BAN is a Wireless BAN (WBAN), said BAN port is an antenna, said BAN transceiver is a WBAN modem, and the BAN is according to, or based on, IEEE 802.15.6 standard. 100. A system for commanding an actuator operation in response to a sensor response associated with a phenomenon according to a control logic, for use with one or more in-building or in-vehicle networks for communication in a building or in a vehicle, and an external network at least in part external to the building or to the vehicle, the system comprising:
a router in a single enclosure in the building or in the vehicle, coupled between the one or more in-building or in-vehicle networks and the external network, and operative to pass digital data between the in-building or in-vehicle and external networks; a first device in a single enclosure in the building or in the vehicle comprising, or connectable to, the sensor that responds to the phenomenon, said first device is operative to transmit sensor data corresponding to the phenomenon to said router over the one or more in-building or in-vehicle networks; a second device in a single enclosure in the building or in the vehicle comprising, or connectable to, an actuator that affects the phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-building or in-vehicle networks; and a control server external to the building or to the vehicle storing said control logic and coupled to said router over the Internet via the external network, wherein said control server is operative to receive the sensor data from said router, to produce actuator commands in response to the received sensor data according to said control logic, and to transmit the actuator commands to said second device via said router. 101. The system according to claim 100, wherein the actuator is a light source that emits visible or non-visible light for illumination or indication, the non-visible light is infrared, ultraviolet, X-rays, or gamma rays, and wherein the light source is an electric light source for converting electrical energy into light. 102. The system according to claim 101, wherein the electric light source consists of, or comprises, a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode. 103. 
The system according to claim 100, wherein the actuator is a motion actuator that causes linear or rotary motion, and said system further comprising a conversion mechanism for respectively converting to rotary or linear motion based on a screw, a wheel and axle, or a cam. 104. The system according to claim 100, wherein one or more of the in-building or in-vehicle networks is a wired network using a cable for carrying a communication signal, and wherein said router, said first device, or said second device further comprising a connector for connecting to the cable, and wherein the cable is connectable to simultaneously carry a DC or AC power signal and the communication signal. 105. The system according to claim 104, wherein said router, said first device, or said second device is further operative to supply at least part of the power signal or to be at least in part powered from the power signal. 106. The system according to claim 104, wherein the power signal is carried over dedicated wires in the cable, and wherein said wires are distinct from the wires in the cable carrying the communication signal. 107. The system according to claim 104, wherein the power signal and the communication signal are concurrently carried over same wires in the cable, and wherein said router, said first device, or said second device further comprising a power/data splitter arrangement having first, second and third ports, wherein only the communication signal is passed between the first and second ports, and only the power signal is passed between the first and third ports, and wherein the first port is coupled to the connector. 108. 
The system according to claim 107, wherein the power and communication signals are carried using Frequency Division Multiplexing (FDM), where the power signal is carried over a power signal frequency or a power frequency band, and the communication signal is carried over a frequency band above and distinct from the power signal frequency or the power frequency band, and wherein the power/data splitter comprising an HPF between the first and second ports and a LPF between the first and third ports, or wherein said power/data splitter comprising a transformer and a capacitor connected to the transformer windings. 109. The system according to claim 107, wherein said power and digital data signals are carried using a phantom scheme, and said power/data splitter comprising at least two transformers having a center-tap connection. 110. The system according to claim 104, wherein said power and digital data signals are carried substantially according to IEEE 802.3af-2003 or IEEE 802.3at-2009 standards. 111. The system according to claim 100, wherein the actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves. 112. The system according to claim 100 wherein two devices out of a group consisting of said router, said first device, said second device, and said control server are operative for communicating using multiple data paths that are in part or in full, distinct or independent, from each other. 113. The system according to claim 112 wherein the multiple data paths are of a same type, or are using multiple networks that are similar, identical, or different from each other. 114. 
The system according to claim 112 wherein the multiple data paths are using multiple networks, and wherein: at least two out of the multiple networks use similar, identical, or different network mediums; at least two out of the multiple networks use similar, identical, or different protocols; or at least two out of the multiple networks are coupled to using similar, identical, or different physical layers. 115. The system according to claim 112 wherein the multiple data paths are using multiple networks, and wherein: all of the multiple networks use similar, identical, or different network mediums; all of the multiple networks use similar, identical, or different protocols; or all of the multiple networks are coupled to using similar, identical, or different physical layers. 116. The system according to claim 112 wherein the multiple data paths are using multiple networks, and wherein: at least one network is a wired network and at least one network is a wireless network; at least one network is based on conductive medium and at least one network is based on non-conductive medium, the conductive medium is coaxial cable, twisted-pair, powerlines, or telephone lines, and the non-conductive medium is using RF, light or sound guided or over-the-air propagation; at least one network is packet-based and at least one network is circuit-switched; at least one network is a private network and at least one network is public; at least two networks use different line codes or provide different data-rates; at least two networks use different duplexing schemes selected from a group consisting of half-duplex, full-duplex, and unidirectional; at least two networks use different modulation schemes selected from a group consisting of AM, FM, and PM; or at least two networks are of different types selected from a group consisting of NFC, PAN, LAN, MAN and WAN. 117. 
The system according to claim 100 wherein a device out of a group consisting of said router, said first device, said second device, and said control server, is operative for communicating with another device from the group over multiple data paths. 118. The system according to claim 117 wherein said device comprises multiple network interfaces each associated with a respective data path over a respective data path network that is coupled to said respective network interface, and wherein each of said network interfaces comprises a transceiver or a modem for transmitting digital data to, and receiving digital data from, the respective data path network, and a network port for coupling to the respective data path network. 119. The system according to claim 118 wherein at least two out of said network interfaces are of a same type, or wherein all of said network interfaces are of a same type. 120. The system according to claim 118 wherein at least two out of said network interfaces use similar, identical, or different transceivers or modems, or wherein all of said network interfaces use similar, identical, or different transceivers or modems. 121. The system according to claim 118 wherein at least two out of said network interfaces use similar, identical, or different network ports, or wherein all of said network interfaces use similar, identical, or different network ports. 122. The system according to claim 118 wherein at least two out of the data path networks are based on a conductive medium, and wherein said respective network ports are connectors. 123. The system according to claim 122 wherein one or more of said connectors is selected from a group consisting of a coaxial connector, a twisted-pair connector, an AC power connector, a telephone connector. 124. The system according to claim 118 wherein at least two out of the data path networks are based on a non-conductive medium, and wherein said respective network ports are non-conductive couplers. 125. 
The system according to claim 124 wherein said non-conductive couplers are selected from a group consisting of an antenna, a light emitter, a light detector, a microphone, a speaker, and a fiber-optics connector. 126. The system according to claim 118 wherein one of the data path networks is based on conductive medium, and wherein said respective network port is a connector, and wherein one out of the data path networks is based on a non-conductive medium, and wherein said respective network port is a non-conductive coupler. 127. The system according to claim 118 wherein two out of said modems are of different scales selected from a group consisting of NFC, PAN, LAN, MAN or WAN modems, wherein two out of said modems use different modulation schemes selected from a group consisting of AM, FM, and PM, wherein two out of said modems use different duplexing schemes selected from a group consisting of half-duplex, full-duplex, and unidirectional, wherein at least one out of said modems is packet-based and at least one out of said modems is circuit-switched, or wherein one of said network ports is used by two distinct network interfaces, designated as first and second network interfaces. 128. The system according to claim 127 wherein said first and second network interfaces are operative to communicate over a same network using FDM, where said first network interface is using a first frequency band and said second network interface is using a second frequency band. 129. The system according to claim 128 wherein the first and second frequency bands are distinct from each other or wherein the first and second frequency bands are in part or in whole overlapping over each other, and wherein said first and second network interfaces further respectively comprising first and second filters for substantially passing only signals in the first and second frequency bands respectively. 130. 
The system according to claim 118 wherein said device is operative to send a packet to said another device via one or more of said network interfaces to be carried over the one or more data paths, the packet comprising a source address, a destination address, an information type, and an information content, and the packet is sent via one or more of said network interfaces selected by a fixed, adaptive, or dynamic selection mechanism. 131. The system according to claim 130 wherein a same packet is sent via two or more of said network interfaces, or wherein a same packet is sent via all said network interfaces. 132. The system according to claim 130 wherein a distinct number is assigned to each of said network interfaces, and wherein said selection mechanism is using, or based on, the assigned numbers. 133. The system according to claim 132 wherein the assigned numbers represent priority levels associated with said network interfaces, and the network interface having the highest priority level is selected. 134. The system according to claim 132 wherein one of said network interfaces is randomly selected, or wherein the selection mechanism is based on cyclic selection. 135. The system according to claim 132 wherein the assigned numbers are based on the associated networks types or attributes or the performance history, or wherein the assigned numbers are based on the current or past associated networks data rates, transfer delays, networks mediums or networks mediums types, qualities, duplexing schemes, line codes, modulation schemes, switching mechanisms, throughputs, or usages. 136. The system according to claim 130 wherein the one or more network interfaces are selected based on the packet source address, based on the packet destination address, based on the packet information type, or based on the packet information content. 137. 
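The fixed, adaptive, or dynamic selection mechanisms of claims 130 to 136 can be sketched as follows. The interface records, the assigned numbers, and the rule that sends video packets over the highest-priority interface are all illustrative assumptions introduced for the example.

```python
import itertools
import random

# Sketch of claims 130-136: each network interface has an assigned
# number, and a packet is sent via the interface chosen by priority,
# random, cyclic, or packet-content-based selection. The interface
# names and numbers below are assumptions for the demo.

interfaces = [
    {"name": "wired_wan", "assigned": 1},   # lower number = higher priority (assumed)
    {"name": "cellular",  "assigned": 2},
    {"name": "powerline", "assigned": 3},
]

def select_by_priority(ifaces):
    """Claim 133: pick the interface with the highest priority level."""
    return min(ifaces, key=lambda i: i["assigned"])

def select_random(ifaces):
    """Claim 134: random selection."""
    return random.choice(ifaces)

_cycle = itertools.cycle(interfaces)
def select_cyclic():
    """Claim 134: cyclic (round-robin) selection."""
    return next(_cycle)

def select_by_packet(ifaces, packet):
    """Claim 136: selection based on the packet's information type (assumed rule)."""
    if packet["info_type"] == "video":
        return select_by_priority(ifaces)   # latency-sensitive: best interface
    return select_random(ifaces)

packet = {"src": "10.0.0.5", "dst": "control-server", "info_type": "video",
          "content": b"..."}
chosen = select_by_packet(interfaces, packet)
```

Claim 131's variant (the same packet duplicated over several paths) would simply loop over `interfaces` and hand the packet to each one.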
The system according to claim 100, wherein said second device further comprising a first electrically actuated switch, the electrically actuated switch coupled for connecting an electric signal to the actuator, and wherein said electrically actuated switch is actuated responsive to the actuator commands. 138. The system according to claim 137, wherein the electric signal is a power signal from a power source, and wherein said first electrically actuated switch is coupled between the power source and the actuator. 139. The system according to claim 137, wherein said first electrically actuated switch is ‘normally open’ type, ‘normally closed’ type, or a changeover switch, wherein said first electrically actuated switch is ‘make-before-break’ or ‘break-before-make’ type, wherein said first electrically actuated switch has two or more poles or two or more throws, and the contacts of said first electrically actuated switch are arranged as a Single-Pole-Double-Throw (SPDT), Double-Pole-Double-Throw (DPDT), Double-Pole-Single-Throw (DPST), or Single-Pole-Changeover (SPCO), wherein said first electrically actuated switch is a latching or non-latching type relay, and wherein said relay is a solenoid-based electromagnetic relay that is a reed relay, wherein said relay is solid-state or semiconductor based, or wherein said relay is a Solid State Relay (SSR), wherein said first switch is based on an electrical circuit that comprises an open collector transistor, an open drain transistor, a thyristor, a TRIAC, or an opto-isolator, or wherein said second device further comprising a second electrically actuated switch that is connected in parallel or in series with said first electrically actuated switch. 140. The system according to claim 100, wherein said control server is operative to analyze the sensor data versus the transmitted actuator commands. 141. 
The system according to claim 140, wherein the sensor transfer function is characterized as S(s), the actuator transfer function is characterized as C(s), the actuator command is characterized as A(s), and the sensor data is characterized as F(s), and wherein the analysis includes the calculation of F(s)/[S(s)*A(s)*C(s)]. 142. The system according to claim 140, wherein the analysis is used to estimate or determine a phenomenon characteristic or parameter. 143. The system according to claim 140, wherein the analysis is used as a sensor data by the control logic, and the system is further periodically operative for initiating actuator commands and for analyzing the sensor data versus the transmitted actuator commands. 144. The system according to claim 100 further implementing redundancy, where the system further includes an additional sensor that responds to the phenomenon by outputting additional sensor data, an additional actuator that affects the phenomenon, or a redundant data path, and wherein the redundancy is based on Dual Modular Redundancy (DMR), Triple Modular Redundancy (TMR), Quadruple Modular Redundancy (QMR), 1:N Redundancy, ‘Cold Standby’, or ‘Hot Standby’. 145. The system according to claim 144 wherein said additional sensor is identical, similar, or different from the sensor, said control server is operative to receive said additional sensor data, and said control logic produces actuator commands in response to the received said additional sensor data. 146. The system according to claim 144 wherein said additional actuator is identical, similar, or different from the actuator. 147. The system according to claim 144 wherein the redundant data path is identical to, similar to, or different from, a data path connecting devices in the system. 148. 
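The analysis of claim 141, the ratio F(s)/[S(s)*A(s)*C(s)], can be sketched numerically; the first-order transfer functions below are assumptions chosen for illustration only and are not part of the claims:

```python
# Illustrative first-order transfer functions (assumed for this sketch only).
def S(s):
    """Sensor transfer function: assumed first-order lag."""
    return 1.0 / (1.0 + 0.1 * s)

def C(s):
    """Actuator transfer function: assumed first-order lag with gain 2."""
    return 2.0 / (1.0 + 0.5 * s)

def A(s):
    """Actuator command: assumed unity for the sketch."""
    return 1.0

def analysis_ratio(F, s):
    """Ratio of observed sensor data F(s) to the modeled path S(s)*A(s)*C(s).

    A ratio near 1 means the observed response matches the modeled sensor-
    actuator path; a deviation can be used, as in claim 142, to estimate a
    characteristic or parameter of the controlled phenomenon.
    """
    return F(s) / (S(s) * A(s) * C(s))

# When the observed response equals the modeled path exactly, the ratio is 1.
F_nominal = lambda s: S(s) * A(s) * C(s)
```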
The system according to claim 144 further including an additional sensor that responds to the phenomenon, and wherein said control server is operative to receive the additional sensor data, and wherein said control logic at one time produces actuator commands in response only to the received additional sensor digital data. 149. The system according to claim 148 further including a third device in the building or in the vehicle comprising said additional sensor that responds to the phenomenon, said third device is operative to transmit the additional sensor data to said router over one of the in-building or in-vehicle networks. 150. The system according to claim 144 further including an additional actuator that affects the phenomenon, and wherein said control server is operative to transmit the additional actuator commands to said additional actuator. 151. The system according to claim 144 further including an additional actuator that affects the phenomenon, and wherein said control server at one time is operative to transmit the additional actuator commands only to said additional actuator. 152. The system according to claim 151 further including a third device in the building or in the vehicle comprising said additional actuator that affects the phenomenon, said third device is operative to receive and execute the additional actuator commands received from said router. 153. The system according to claim 100 further comprising a third device that comprises an additional sensor that responds to a second phenomenon, the third device is operative to transmit said additional sensor data corresponding to the second phenomenon to said router over one of the in-building or in-vehicle networks. 154. 
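The Triple Modular Redundancy (TMR) scheme recited in claim 144 can be sketched as a majority voter over three redundant sensor readings; this is a generic illustration of TMR voting under assumed integer readings, not the claimed implementation:

```python
from collections import Counter

def tmr_vote(readings):
    """Return the majority value among redundant sensor readings.

    With three redundant sensors (TMR), a single faulty reading is masked
    by the two agreeing healthy ones; with no majority, the fault cannot
    be masked and an error is raised.
    """
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("no majority: redundant sensors disagree")
    return value
```

For example, one faulty sensor reporting 99 alongside two healthy sensors reporting 21 is out-voted, and the control logic receives 21.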
The system according to claim 153 wherein the second phenomenon is same as, or distinct from, the phenomenon, and the type of said sensor of said third device is distinct from, or of a same type as, the sensor, and wherein said third device communicates with said router over the same, or over a distinct, in-building or in-vehicle network used by said first device. 155. The system according to claim 100 further comprising a third device that comprises an additional actuator that affects a second phenomenon, the third device is operative to receive the additional actuator commands from said router over one of the in-building or in-vehicle networks. 156. The system according to claim 155 wherein the second phenomenon is same as, or distinct from, the phenomenon. 157. The system according to claim 155 wherein the type of said additional actuator is same as, or distinct from, the type of the actuator. 158. The system according to claim 155 wherein said third device communicates with said router over an in-building or in-vehicle network that is distinct from, or of a same type as, the in-building or in-vehicle network used by said second device. 159. The system according to claim 100, wherein said first device, said second device, or said router is integrated in part or entirely in an appliance. 160. The system according to claim 159, wherein a primary functionality of said appliance is associated with food storage, handling, or preparation. 161. The system according to claim 160, wherein a primary function of said appliance is heating food, and wherein said appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker. 162. The system according to claim 160, wherein said appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker. 163. 
The system according to claim 159, wherein a primary function of said appliance is associated with environmental control, and said appliance consists of, or is part of, an HVAC system. 164. The system according to claim 163, wherein a primary function of said appliance is associated with temperature control, and wherein said appliance is an air conditioner or a heater. 165. The system according to claim 159, wherein a primary function of said appliance is associated with cleaning, wherein said primary function is associated with clothes cleaning, and the appliance is a washing machine or a clothes dryer, or wherein said appliance is a vacuum cleaner. 166. The system according to claim 159, wherein a primary function of said appliance is associated with water control or water heating. 167. The system according to claim 159, wherein said appliance is an answering machine, a telephone set, a home cinema system, a HiFi system, a CD or DVD player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier. 168. The system according to claim 159, wherein said appliance is a battery-operated portable electronic device, and said appliance is a notebook, a laptop computer, a media player, a cellular phone, a Personal Digital Assistant (PDA), an image processing device, a digital camera, a video recorder, or a handheld computing device. 169. The system according to claim 159, wherein said integration involves sharing a component. 170. The system according to claim 169, wherein said integration involves housing in same enclosure, sharing same processor, or mounting onto same surface. 171. The system according to claim 169, wherein said integration involves sharing a same connector. 172. 
The system according to claim 171, wherein said connector is a power connector for connecting to a power source, and wherein said integration involves sharing the same connector for being powered from same power source, or wherein said integration involves sharing same power supply. 173. The system according to claim 100, wherein said first device or said second device is integrated with, or enclosed with, said router. 174. The system according to claim 100 wherein the sensor is an image sensor for capturing a still or video image, and the system further comprising an image processor having an output for processing the captured image. 175. The system according to claim 174 wherein said image processor is entirely or in part in said first device, said router, said control server, or any combination thereof, and wherein said control logic responds to the output of said image processor. 176. The system according to claim 174 wherein said image sensor is a digital video sensor for capturing digital video content, and wherein said image processor is operative for enhancing said video content using image stabilization, unsharp masking, or super-resolution. 177. The system according to claim 174 wherein said image sensor is a digital video sensor for capturing digital video content, and wherein said image processor is operative for Video Content Analysis (VCA). 178. The system according to claim 177 wherein said VCA includes Video Motion Detection (VMD), video tracking, egomotion estimation, identification, behavior analysis, situation awareness, dynamic masking, motion detection, object detection, face recognition, automatic number plate recognition, tamper detection, or pattern recognition. 179. The system according to claim 174 wherein said image processor is operative for detecting a location of an element in the captured image. 180. The system according to claim 179 wherein the element is a human body part. 181. 
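Video Motion Detection (VMD), one of the VCA functions listed in claim 178, can be sketched as simple frame differencing between consecutive grayscale frames; the thresholds and frame representation (lists of pixel rows) are arbitrary illustrative choices, not the claimed method:

```python
def detect_motion(prev_frame, curr_frame, pixel_threshold=10, min_changed=4):
    """Flag motion between two grayscale frames (lists of pixel rows).

    Counts pixels whose intensity changed by more than `pixel_threshold`;
    motion is reported when at least `min_changed` pixels changed. Both
    thresholds are illustrative tuning parameters.
    """
    changed = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            if abs(c - p) > pixel_threshold:
                changed += 1
    return changed >= min_changed
```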
The system according to claim 180 wherein the element is a human face or a human hand. 182. The system according to claim 179 wherein said image processor is operative for detecting a motion of the element in the captured image, or wherein said image processor is operative for detecting multiple elements in the captured image, and said image processor is operative for detecting and counting a number of the elements in the captured image. 183. The system according to claim 100 wherein at least one of the in-building or in-vehicle networks is a Personal Area Network (PAN), at least one of said router, said first device, and said second device further comprising a PAN interface, and said PAN interface includes a PAN port and a PAN transceiver. 184. The system according to claim 183 wherein the PAN is a Wireless PAN (WPAN), said PAN port is an antenna, and said PAN transceiver is a WPAN modem, and wherein the WPAN is according to, or based on, Bluetooth™ or IEEE 802.15.1-2005 standards, or wherein the WPAN is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards. 185. The system according to claim 100 wherein: at least one of the in-building or in-vehicle networks is a Local Area Network (LAN); at least one of said router, said first device, and said second device further comprising a LAN interface; and said LAN interface includes a LAN port and a LAN transceiver. 186. The system according to claim 185 wherein: the LAN is a wired LAN using a wired LAN medium; said LAN port is a LAN connector; and said LAN transceiver is a LAN modem, and wherein: the LAN is Ethernet based; and the wired LAN is according to, or based on, IEEE 802.3-2008 standard. 187. 
The system according to claim 186 wherein: the wired LAN medium is based on twisted-pair copper cables; said LAN interface is 10Base-T, 100Base-T, 100Base-TX, 100Base-T2, 100Base-T4, 1000Base-T, 1000Base-TX, 10GBase-CX4, or 10GBase-T; and said LAN connector is RJ-45 type, or wherein: said wired LAN medium is based on an optical fiber; said LAN interface is 10Base-FX, 100Base-SX, 100Base-BX, 100Base-LX10, 1000Base-CX, 1000Base-SX, 1000Base-LX, 1000Base-LX10, 1000Base-ZX, 1000Base-BX10, 10GBase-SR, 10GBase-LR, 10GBase-LRM, 10GBase-ER, 10GBase-ZR, or 10GBase-LX4; and said LAN connector is a fiber-optic connector. 188. The system according to claim 185 wherein: the LAN is a Wireless LAN (WLAN); said LAN port is a WLAN antenna; and said LAN transceiver is a WLAN modem, and wherein the WLAN is according to, or based on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac. 189. The system according to claim 100 wherein: at least one of said in-building or in-vehicle networks is a packet-based or a circuit-switched-based Home Network (HN); at least one of said router, said first device, and said second device further comprising a HN interface; and said HN interface includes a HN port and a HN transceiver, and wherein: the HN is a wired HN using a wired HN medium; said HN port is an HN connector; and said HN transceiver is an HN modem, and wherein the wired HN medium comprises a wiring primarily installed for carrying a service signal, where the wiring is in-wall wiring connected to by a wiring connector at a service outlet. 190. The system according to claim 189 wherein: said wiring is a telephone wire pair; the service signal is an analog telephone signal (POTS); and said wiring connector is a telephone connector, and wherein said HN is according to, or based on, HomePNA standard, ITU-T Recommendation G.9954, ITU-T Recommendation G.9960, ITU-T Recommendation G.9970, or ITU-T Recommendation G.9961. 191. 
The system according to claim 189 wherein: said wiring is a coaxial cable; the service signal is a Cable Television (CATV) signal; and said wiring connector is a coaxial connector, wherein said HN is according to, or based on, HomePNA standard or Multimedia over Coax Alliance (MoCA) standard that is according to, or based on, ITU-T Recommendation G.9954, ITU-T Recommendation G.9960, ITU-T Recommendation G.9970, or ITU-T Recommendation G.9961. 192. The system according to claim 189 wherein: the wiring is AC power wires; the service signal is an AC power signal; and said wiring connector is an AC power connector, and wherein the HN is according to, or based on, HomePlug™ standard, HD-PLC standard, Universal Powerline Association (UPA) standard, IEEE 1901-2010, ITU-T Recommendation G.9960, ITU-T Recommendation G.9961, ITU-T Recommendation G.9970, or ITU-T Recommendation G.9972. 193. The system according to claim 100 wherein the external network is a packet-based or a circuit-switched-based Wide Area Network (WAN), and wherein said router comprising a WAN interface, and wherein said WAN interface includes a WAN port and a WAN transceiver. 194. The system according to claim 193 wherein the WAN is a wired WAN using a wired WAN medium, said WAN port is a WAN connector, and said WAN transceiver is a WAN modem, and wherein the wired WAN medium comprises a wiring primarily installed for carrying a service signal to the building or to the vehicle. 195. The system according to claim 194 wherein the wired WAN medium comprises one or more telephone wire pairs primarily designed for carrying an analog telephone signal, and wherein said external network is using Digital Subscriber Line/Loop (DSL). 196. 
The system according to claim 195 wherein the external network is based on Asymmetric Digital Subscriber Line (ADSL), ADSL2 or on ADSL2+, according to, or based on, ANSI T1.413, ITU-T Recommendation G.992.1, ITU-T Recommendation G.992.2, ITU-T Recommendation G.992.3, ITU-T Recommendation G.992.4, or ITU-T Recommendation G.992.5, or wherein the external network is based on Very-high-bit-rate Digital Subscriber Line (VDSL), according to, or based on, ITU-T Recommendation G.993.1 or ITU-T Recommendation G.993.2. 197. The system according to claim 194 wherein the wired WAN medium comprises AC power wires primarily designed for carrying an AC power signal to the building or to the vehicle, and the network is using Broadband over Power Lines (BPL) according to, or based on, IEEE 1675-2008 or IEEE 1901-2010, wherein the wired WAN medium comprises coaxial cable primarily designed for carrying a CATV signal to the building or to the vehicle, and the network is using Data-Over-Cable Service Interface Specification (DOCSIS), according to, or based on, ITU-T Recommendation J.112, ITU-T Recommendation J.122, or ITU-T Recommendation J.222, or wherein the wired WAN medium comprises an optical fiber, said WAN connector is a fiber-optic connector, and the WAN is based on Fiber-To-The-Home (FTTH), Fiber-To-The-Building (FTTB), Fiber-To-The-Premises (FTTP), Fiber-To-The-Curb (FTTC), or Fiber-To-The-Node (FTTN). 198. The system according to claim 193 wherein the WAN is a wireless broadband network over a licensed or unlicensed radio frequency band, said WAN port is an antenna, and said WAN transceiver is a wireless modem, and wherein the unlicensed radio frequency band is an Industrial, Scientific and Medical (ISM) radio band. 199. The system according to claim 198 wherein the wireless network is a satellite network, said antenna is a satellite antenna, and said wireless modem is a satellite modem. 200. 
The system according to claim 198 wherein the wireless network is a WiMAX network, wherein said antenna is a WiMAX antenna and said wireless modem is a WiMAX modem, and the WiMAX network is according to, or based on, IEEE 802.16-2009. 201. The system according to claim 198 wherein the wireless network is a cellular telephone network, said antenna is a cellular antenna, and said wireless modem is a cellular modem, and wherein the cellular telephone network is a Third Generation (3G) network that uses UMTS W-CDMA, UMTS HSPA, UMTS TDD, CDMA2000 1xRTT, CDMA2000 EV-DO, or GSM EDGE-Evolution, or wherein the cellular telephone network is a Fourth Generation (4G) network that uses HSPA+, Mobile WiMAX, LTE, LTE-Advanced, MBWA, or is based on IEEE 802.20-2008. 202. A vehicle control system for commanding an actuator operation in response to a sensor response associated with a phenomenon according to a control logic, for use with one or more in-vehicle networks for communication in a vehicle, and one or more external networks for communicating with an Internet-connected control server via another vehicle or a roadside unit external to the vehicle, the system comprising:
a router in the vehicle, connected to the one or more in-vehicle networks and to the one or more of the external networks, and operative to pass digital data between said in-vehicle networks and one or more of the external networks; a first device in the vehicle comprising, or connectable to, a sensor that responds to the phenomenon, the first device is operative to transmit a sensor digital data corresponding to the phenomenon to said router over said one or more in-vehicle networks; a second device in the vehicle comprising, or connectable to, an actuator that affects the phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-vehicle networks; and a control server external to the vehicle storing the control logic, and communicatively coupled to said router over the Internet via said one or more of the external networks, wherein said control server is operative to receive the sensor digital data from said router, to produce actuator commands in response to the received sensor digital data according to the control logic, and to transmit the actuator commands to said second device via said router. 203. The system according to claim 202, wherein at least one of said external networks is a vehicle-to-vehicle network for communicating with said control server via another vehicle. 204. The system according to claim 202, wherein at least one of the external networks is communicating with a stationary device, and wherein the stationary device is a roadside unit. 205. The system according to claim 202, wherein said router, said first device, and said second device are mechanically attached to the vehicle. 206. The system according to claim 202, wherein the vehicle is adapted for travelling on land, or water, or is airborne. 207. 
The system according to claim 202, wherein the vehicle is one out of a bicycle, a car, a motorcycle, a train, a ship, an aircraft, a boat, a spacecraft, a submarine, a dirigible, an electric scooter, a subway, a trolleybus, a tram, a sailboat, a yacht, and an airplane. 208. The system according to claim 202, wherein the sensor is operative to sense the phenomenon in the vehicle, external to the vehicle, or associated with surroundings around the vehicle. 209. The system according to claim 202, wherein the actuator is operative to affect the phenomenon in the vehicle, external to the vehicle, or associated with surroundings around the vehicle. 210. The system according to claim 202, wherein the vehicle is an automobile, and wherein said system is coupled to monitor or control an Engine Control Unit (ECU), a Transmission Control Unit (TCU), an Anti-Lock Braking System (ABS), or a Body Control Module (BCM) of the automobile. 211. The system according to claim 202 further integrated with or being part of a vehicular communication system used for improved safety, traffic flow control, traffic reporting, or traffic management. 212. The system according to claim 202 further used for parking help, cruise control, lane keeping, road sign recognition, surveillance, speed limit warning, restricted entries and pull-over commands, travel information, cooperative adaptive cruise control, cooperative forward collision warning, intersection collision avoidance, approaching emergency vehicle warning, vehicle safety inspection, transit or emergency vehicle signal priority, electronic parking payments, commercial vehicle clearance and safety inspections, in-vehicle signing, rollover warning, probe data collection, highway-rail intersection warning, or electronic toll collection. 213. The system according to claim 202, wherein one or more of the in-vehicle networks is a vehicle bus. 214. 
The system according to claim 213, wherein the vehicle bus is according to, or based on, Controller Area Network (CAN) or Local Interconnect Network (LIN). 215. The system according to claim 202, wherein one or more of the in-vehicle networks is using a communication medium that is based on DC power lines of the vehicle. 216. The system according to claim 202, wherein the vehicle further comprising an On-Board Diagnostics (OBD) system. 217. The system according to claim 216, wherein said system is coupled to or integrated with the OBD system. 218. The system according to claim 217, wherein the OBD system is according to, or based on, OBD-II or EOBD (European On-Board Diagnostics) standards. 219. The system according to claim 217, wherein the OBD system further comprises a diagnostics connector, and wherein said router, said first device, or said second device are coupled to the diagnostics connector. 220. The system according to claim 219, wherein said router, said first device, or said second device are at least in part powered via the diagnostics connector. 221. The system according to claim 202, wherein said router is operative to communicate to said control server information regarding fuel and air metering, ignition system, misfire, auxiliary emission control, vehicle speed and idle control, transmission, on-board computer, fuel level, relative throttle position, ambient air temperature, accelerator pedal position, air flow rate, fuel type, oxygen level, fuel rail pressure, engine oil temperature, fuel injection timing, engine torque, engine coolant temperature, intake air temperature, exhaust gas temperature, fuel pressure, injection pressure, turbocharger pressure, boost pressure, exhaust pressure, engine run time, NOx sensor, manifold surface temperature, or a Vehicle Identification Number (VIN). 222. 
The system according to claim 214, wherein one out of the in-vehicle networks is according to, or based on, SAE J1962, SAE J1850, SAE J1979, ISO 15765, or ISO 9141 standard. 223. A control system comprising:
a sensor disposed in an enclosed environment that senses a condition in the enclosed environment and provides sensor data corresponding to the condition; an internal network extending substantially within the enclosed environment; an external network, coupled to the Internet, extending substantially outside the enclosed environment; a control server, disposed outside the enclosed environment, coupled to the Internet, said server receiving the sensor data and executing control logic therein so as to generate actuator commands responsive to the received sensor data; a router coupled to said internal and external networks so as to pass information between said internal and external networks, and configured to deliver the sensor data from said internal to said external networks and to deliver the actuator commands from said external to said internal networks; and an actuator disposed within the enclosed environment, receiving the actuator commands from said router, said actuator operative to affect the condition in the enclosed environment. 224. The system according to claim 223 wherein the external network or the internal network is a wireless network using a wireless communication over a licensed or an unlicensed radio frequency band. 225. The system according to claim 224 wherein the unlicensed radio frequency band is an Industrial, Scientific and Medical (ISM) radio band. 226. The system according to claim 224 wherein the unlicensed radio frequency band is about 60 GHz, and wherein the internal network is based on beamforming and supports a data rate of above 7 Gb/s and is used for in-room communication. 227. The system according to claim 226 wherein the internal network is according to, or based on, WiGig™, IEEE 802.11ad, WirelessHD™, or IEEE 802.15.3c-2009. 228. The system according to claim 224 wherein the internal network is operative to carry uncompressed video data according to, or based on, WHDI™. 229. 
The system according to claim 224 wherein the wireless network is using a white space spectrum or an analog television channel consisting of a 6 MHz, 7 MHz or 8 MHz frequency band, and allocated in a 54-806 MHz band, and the system is further operative for channel bonding, where the wireless network is using two or more analog television channels. 230. The system according to claim 224 wherein the external network or at least one of the in-building or in-vehicle networks is using a wireless communication that is based on a Wireless Regional Area Network (WRAN) standard communicatively coupling a Base Station (BS) and one or more CPEs using OFDMA modulation. 231. The system according to claim 230 wherein said router serves as a BS or as a CPE. 232. The system according to claim 230 wherein the wireless communication is based on geographically-based cognitive radio, and is according to, or based on, IEEE 802.22 or IEEE 802.11af standards. 233. The system according to claim 230 wherein said wireless network is based on, or is according to, Near Field Communication (NFC) that is based on a standard that is according to, or based on, ISO/IEC 18092, ECMA-340, ISO/IEC 21481, or ECMA-352, and wherein the wireless communication couples an initiator and a target. 234. The system according to claim 233 wherein said wireless network is using a 13.56 MHz frequency band, a data rate of 106 Kb/s, 212 Kb/s, or 424 Kb/s, and a modulation that is Amplitude-Shift-Keying (ASK), and the wireless network is using a passive or an active communication mode. 235. The system according to claim 234 wherein said router serves as an initiator or as a target or transponder. 
A system and method in a building or vehicle for an actuator operation in response to a sensor according to a control logic, the system comprising a router or a gateway communicating with a device associated with the sensor and a device associated with the actuator over in-building or in-vehicle networks, and an external Internet-connected control server associated with the control logic implementing a PID closed linear control loop and communicating with the router over an external network for controlling the in-building or in-vehicle phenomenon. The sensor may be a microphone or a camera, and the system may include voice or image processing as part of the control logic. Redundancy may be provided by using multiple sensors or actuators, or by using multiple data paths over the building or vehicle internal or external communication. The networks may be wired or wireless, and may be BAN, PAN, LAN, WAN, or home networks. 1. A system for commanding an actuator operation in response to a sensor response associated with a phenomenon according to a control logic, for use with one or more in-building or in-vehicle networks for communication in a building or in a vehicle, and an external network at least in part external to the building or to the vehicle, the system comprising:
a router in the building or in the vehicle, coupled between the one or more in-building or in-vehicle networks and the external network, and operative to pass digital data between the in-building and external networks; a first device in the building or in the vehicle comprising, or connectable to, the sensor that responds to the phenomenon, said first device is operative to transmit a sensor data corresponding to the phenomenon to said router over the one or more in-building or in-vehicle networks; a second device in the building or in the vehicle comprising, or connectable to, an actuator that affects the phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-building or in-vehicle networks; and a control server external to the building or to the vehicle storing said control logic and coupled to said router over the Internet via the external network, wherein said control server is operative to receive the sensor data from said router, to produce actuator commands in response to the received sensor digital data according to said control logic, and to transmit the actuator commands to said second device via said router. 2. The system according to claim 1, wherein said router is a gateway or is further operative for IP routing, NAT, DHCP, firewalling, parental control, rate converting, fault isolating, protocol converting or translating, or proxy serving. 3. 
The system according to claim 1 further comprising a third device external to the building or to the vehicle comprising an additional sensor that responds to a distinct or same phenomenon, the third device is operative to transmit an additional sensor data corresponding to the distinct phenomenon to said control server over the external network or over a network distinct from the external network, wherein said control server is operative to receive the additional sensor data, and to produce actuator commands in response to the received additional sensor data according to said control logic. 4. The system according to claim 1 further comprising a third device external to the building or to the vehicle comprising an additional actuator that responds to received additional actuator commands, the third device is operative to receive the additional actuator commands from said control server over the external network or over a network distinct from the external network, wherein said control server is operative to transmit said additional actuator commands to said third device. 5. The system according to claim 1, wherein said control logic affects a control loop for controlling the phenomenon, and wherein the control loop is a closed linear control loop where the sensor data serves as a feedback to command the actuator based on the loop deviation from a setpoint or a reference value. 6. 
The system according to claim 5, wherein the closed control loop is a proportional-based, an integral-based, a derivative-based, or a Proportional, Integral, and Derivative (PID) based control loop, wherein the control loop uses feed-forward, Bistable, Bang-Bang, Hysteretic, or fuzzy logic based control, or wherein: the control loop involves randomness based on random numbers; and the system further comprises a random number generator for generating random numbers, and wherein said random number generator is hardware-based using thermal noise, shot noise, nuclear decaying radiation, photoelectric effect, or quantum phenomena, or wherein said random number generator is software-based and executes an algorithm for generating pseudo-random numbers. 7. The system according to claim 5, wherein the setpoint is fixed, set by a user, or is time dependent. 8. The system according to claim 5 further comprising an additional sensor responsive to a phenomenon distinct from the phenomenon, and wherein the setpoint is dependent upon the output of said additional sensor. 9. The system according to claim 1 wherein at least one of the in-building or in-vehicle networks is using in-wall wiring that is connected to an outlet as a network medium, and wherein said first device, said second device, or said router is operative to communicate over the in-wall wiring. 10. The system according to claim 9 wherein an enclosure of the sensor, the actuator, said first device, said second device, or said router, consists of, comprises, or is integrated with, the outlet or a plug-in module pluggable to the outlet. 11. The system according to claim 9 wherein the outlet is a telephone, LAN, AC power, or CATV outlet, and the in-wall wiring is respectively a telephone wire pair, a LAN cable, an AC power cable, or a CATV coaxial cable. 12. 
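The PID-based closed linear control loop of claims 5-6, where the sensor data serves as the feedback and the actuator is commanded from the loop's deviation (error) from the setpoint, can be sketched as follows; the gains and setpoint are illustrative values, not claimed parameters:

```python
class PIDController:
    """Minimal discrete PID controller sketch for the closed loop of claims 5-6."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def command(self, sensor_value, dt):
        """Compute one actuator command from one sensor sample.

        error      : deviation of the loop from the setpoint (claim 5)
        integral   : accumulated error (I term)
        derivative : rate of change of the error (D term)
        """
        error = self.setpoint - sensor_value
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the claimed architecture, the sensor sample would arrive at the control server via the router, and the returned command would be sent back through the router to the actuator device.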
The system according to claim 9 wherein the in-wall wiring is carrying a power signal, and wherein the sensor, the actuator, said first device, said second device, or said router is at least in part powered from the power signal. 13. The system according to claim 1, wherein the sensor is a piezoelectric sensor that includes single crystal material or piezoelectric ceramics and uses a transverse, longitudinal, or shear effect mode of the piezoelectric effect. 14. The system according to claim 1, further comprising multiple sensors arranged as a directional sensor array operative to estimate the number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of the phenomenon impinging the sensor array, and wherein said control logic includes processing of the sensor array outputs. 15. The system according to claim 1, wherein a single component consists of, or is part of, the sensor and the actuator. 16. The system according to claim 1, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, and wherein the thermoelectric sensor consists of, or comprises, a Positive Temperature Coefficient (PTC) thermistor, a Negative Temperature Coefficient (NTC) thermistor, a thermocouple, a quartz crystal, or a Resistance Temperature Detector (RTD). 17. The system according to claim 1, wherein the sensor consists of, or comprises, a nanosensor, a crystal, or a semiconductor, or wherein: the sensor is an ultrasonic-based sensor, the sensor is an eddy-current sensor, the sensor is a proximity sensor, the sensor is a bulk or surface acoustic sensor, or the sensor is an atmospheric or an environmental sensor. 18. The system according to claim 1, wherein the sensor is a radiation sensor that responds to radioactivity, nuclear radiation, alpha particles, beta particles, or gamma rays, and is based on gas ionization. 19. 
The system according to claim 1, wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light, the invisible light is infrared, ultraviolet, X-rays, or gamma rays, and wherein the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell. 20. The system according to claim 19, wherein the photoelectric sensor is based on a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element. 21. The system according to claim 1, wherein the sensor is a photosensitive image sensor array comprising multiple photoelectric sensors, for capturing an image and producing electronic image information representing the image, and the system further comprising one or more optical lenses for focusing the received light and guiding the image, and wherein the image sensor is disposed approximately at an image focal point plane of the one or more optical lenses for properly capturing the image. 22. The system according to claim 21, further comprising an image processor coupled to the image sensor for providing a digital video data signal according to a digital video format, the digital video signal carrying digital video data based on the captured images, and wherein the digital video format is based on one out of: TIFF (Tagged Image File Format), RAW format, AVI, DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards. 23. 
The system according to claim 22 further comprising an intraframe or interframe compression based video compressor coupled to the image sensor for lossy or non-lossy compressing the digital video data, wherein the compression is based on a standard compression algorithm which is one or more out of JPEG (Joint Photographic Experts Group), MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264, and ITU-T CCIR 601. 24. The system according to claim 1, wherein the sensor is an electrochemical sensor that responds to an object chemical structure, properties, composition, or reactions. 25. The system according to claim 24, wherein the electrochemical sensor is a pH meter or a gas sensor responding to a presence of radon, hydrogen, oxygen, or Carbon-Monoxide (CO), or wherein the electrochemical sensor is based on optical detection or on ionization and is a smoke, a flame, or a fire detector, or is responsive to combustible, flammable, or toxic gas. 26. The system according to claim 1, wherein the sensor is a physiological sensor that responds to parameters associated with a live body, and is external to the sensed body, implanted inside the sensed body, attached to the sensed body, or wearable on the sensed body. 27. The system according to claim 26, wherein the physiological sensor is responding to body electrical signals and is an Electroencephalography (EEG) or an Electrocardiography (ECG) sensor. 28. The system according to claim 26, wherein the physiological sensor is responding to oxygen saturation, gas saturation, or a blood pressure in the sensed body. 29. The system according to claim 1, wherein the sensor is an electroacoustic sensor that responds to an audible or inaudible sound. 30. 
The system according to claim 29, wherein the electroacoustic sensor is an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing the incident-sound-induced motion of a diaphragm or a ribbon, and the microphone consists of, or comprises, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone. 31. The system according to claim 1 wherein said router, said first device, said second device, the sensor, or the actuator are addressable in a digital data network using distinct locally administered addresses or universally administered digital addresses stored in a volatile or non-volatile memory of the respective device and uniquely identifying the respective device in the digital data network, and the digital data network is one or more of the in-building or in-vehicle networks, the external network, a WAN, a LAN, a PAN, a BAN, a home network, or the Internet. 32. The system according to claim 31 wherein the digital address is a MAC layer address that is a MAC-48, EUI-48, or EUI-64 address type. 33. The system according to claim 31 wherein the digital address is a layer 3 address and is a static or dynamic IP address that is an IPv4 or IPv6 type address. 34. The system according to claim 31 wherein the digital address is autonomously assigned or is assigned by another device via a communication interface using DHCP. 35. The system according to claim 34 wherein the digital address of said first or second device is assigned by said router or control server via the in-building or in-vehicle networks or the external network. 36. The system according to claim 34 wherein the digital address of said router is assigned by said control server via the external network. 37. 
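The distinction in claims 31 and 32 between locally administered and universally administered MAC-48/EUI-48 addresses can be sketched by testing the U/L bit, which is the second least significant bit of the first octet. A hypothetical, non-limiting illustration:

```python
def is_locally_administered(mac: str) -> bool:
    """Return True when the MAC-48/EUI-48 address is locally
    administered (U/L bit set in the first octet); universally
    administered addresses carry an OUI assigned by the IEEE."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0b00000010)


print(is_locally_administered("02:00:5e:00:53:01"))  # True  (locally administered)
print(is_locally_administered("00:00:5e:00:53:01"))  # False (universally administered)
```

The example addresses are illustrative only; any address with the U/L bit set would be reported as locally administered.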
The system according to claim 31 wherein said router, said first device, or said second device are addressable in one or more digital data networks using multiple digital addresses, and wherein a distinct digital address is assigned to each network interface of the respective device. 38. The system according to claim 1, wherein said router, said first device, or said second device are connectable to be powered from a DC or AC power source, and further comprising a power supply housed within the respective device enclosure, and coupled to be powered from the power source and to power at least part of said respective device. 39. The system according to claim 38, wherein the power source is a primary or rechargeable battery, or wherein the AC power source is mains AC power, and wherein said respective device further comprising an AC power connector connectable to an AC power outlet. 40. The system according to claim 38, wherein the power source is an electrical power generator for generating an electric power from the phenomenon or from another, distinct phenomenon. 41. The system according to claim 40, wherein a single component serves as the sensor and as the electrical power generator. 42. The system according to claim 40, wherein the electrical power generator is an electromechanical generator for harvesting kinetic energy, or wherein the electrical power generator is a solar cell or a Peltier effect based thermoelectric device. 43. The system according to claim 38, wherein the power source is internal or external to said respective enclosure of said router, said first device, or said second device. 44. 
An apparatus for coupling between an internal network extending substantially within an enclosed environment and an external network extending substantially outside the enclosed environment and coupled to the Internet for communication with a control server, and for use with a sensor disposed in the enclosed environment that senses a condition in the enclosed environment and provides sensor data corresponding to the condition, and an actuator disposed to affect the condition in the enclosed environment in response to received actuator commands, said apparatus comprising:
a first port for coupling to the internal network; a first modem coupled to said first port for communication over the internal network; a second port for coupling to the external network; a second modem coupled to said second port for communication over the external network; a router coupled between said first and second modems so as to pass information between the internal and external networks, and configured to deliver the sensor data from the internal network to the control server over the external network and to deliver the actuator commands from the control server to the actuator over the internal network; and a housing enclosing said first and second ports, said first and second modems, and said router. 45. The apparatus according to claim 44, wherein said apparatus is a gateway or is further operative for IP routing, NAT, DHCP, firewalling, parental control, rate converting, fault isolating, protocol converting or translating, or proxy serving. 46. The apparatus according to claim 44 further comprising in said housing an additional sensor that senses a second condition that is distinct from, or same as, the condition, and provides additional sensor data corresponding to the second condition, and said apparatus further operative to transmit the additional sensor data to the control server over the external network or over a network distinct from the external network. 47. The apparatus according to claim 44 further comprising in said housing an additional actuator that affects a second condition that is distinct from, or same as, the condition, in response to received additional actuator commands, and said apparatus further operative to receive the additional actuator commands from the control server over the external network or over a network distinct from the external network. 48. 
The apparatus according to claim 44 further operative for producing actuator commands in response to the sensor data and for delivering the actuator commands to the actuator over the internal network, and wherein a control logic is affecting a control loop for controlling the condition, and wherein the control loop is a closed linear control loop where the sensor data serves as a feedback to command the actuator based on a loop deviation from a setpoint or a reference value. 49. The apparatus according to claim 48, wherein the closed control loop is a proportional-based, an integral-based, a derivative-based, or a Proportional, Integral, and Derivative (PID) based control loop, wherein the control loop uses feed-forward, Bistable, Bang-Bang, Hysteretic, or fuzzy logic based control, or wherein: the control loop involves randomness based on random numbers; and the apparatus further comprises a random number generator for generating random numbers, and wherein said random number generator is hardware-based using thermal noise, shot noise, nuclear decaying radiation, photoelectric effect, or quantum phenomena, or wherein said random number generator is software-based and executes an algorithm for generating pseudo-random numbers. 50. The apparatus according to claim 48, wherein the setpoint is fixed, set by a user, or is time dependent. 51. The apparatus according to claim 48 further couplable to, or comprising in said housing, an additional sensor responsive to a second condition distinct from the condition, and wherein the setpoint is dependent upon an output of the additional sensor. 52. The apparatus according to claim 44 wherein the internal or the external network is using in-wall wiring that is connected to an outlet as a network medium, and wherein said apparatus is operative to communicate over the in-wall wiring. 53. 
The apparatus according to claim 52 wherein said housing consists of, comprises, or is integrated with, the outlet or a plug-in module pluggable to the outlet. 54. The apparatus according to claim 52 wherein the outlet is a telephone, LAN, AC power, or CATV outlet, and the in-wall wiring is respectively a telephone wire pair, a LAN cable, an AC power cable, or a CATV coaxial cable, and wherein said first or second modem is operative to respectively communicate over the telephone wire pair, the LAN cable, the AC power cable, or the CATV coaxial cable. 55. The apparatus according to claim 52 wherein the in-wall wiring is carrying a power signal, and wherein said apparatus is at least in part powered from the power signal. 56. The apparatus according to claim 44, wherein the sensor is a photosensitive image sensor array comprising multiple photoelectric sensors, for capturing an image and producing electronic image information representing the image, and said apparatus further comprising an image processor coupled to the image sensor for providing a digital video data signal according to a digital video format, the digital video signal carrying digital video data based on the captured images, and wherein the digital video format is based on one out of: TIFF (Tagged Image File Format), RAW format, AVI, DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards. 57. The apparatus according to claim 56 further comprising an intraframe or interframe compression based video compressor coupled to said image sensor for lossy or non-lossy compressing the digital video data, wherein the compression is based on a standard compression algorithm which is one or more out of JPEG (Joint Photographic Experts Group), MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264, and ITU-T CCIR 601. 58. 
The apparatus according to claim 44, wherein said apparatus is operative to calculate or provide a space-dependent characteristic of the sensed condition, that is a pattern, a linear density, a surface density, a volume density, a flux density, a current, a direction, a rate of change in a direction, or a flow, of the condition. 59. The apparatus according to claim 44, wherein the internal or external network is using a cable for carrying a communication signal, and wherein said first or second port consists of a connector for connecting to the cable, and wherein the cable is connectable to simultaneously carry a DC or AC power signal and the communication signal. 60. The apparatus according to claim 59, wherein said apparatus is further operative to supply at least part of the power signal or to be at least in part powered from the power signal. 61. The apparatus according to claim 59, wherein the power signal is carried over dedicated wires in the cable, and wherein the wires are distinct from the wires in the cable carrying the communication signal. 62. The apparatus according to claim 59, wherein the power signal and the communication signal are concurrently carried over same wires in the cable, and wherein said apparatus further comprising a power/data splitter arrangement having first, second and third ports, wherein only the communication signal is passed between the first and second ports, and only the power signal is passed between the first and third ports, and wherein the first port is coupled to the connector. 63. 
The apparatus according to claim 62, wherein the power and communication signals are carried using Frequency Division Multiplexing (FDM), where the power signal is carried over a power signal frequency or a power frequency band, and the communication signal is carried over a frequency band above and distinct from the power signal frequency or the power frequency band, and wherein the power/data splitter comprising an HPF between the first and second ports and an LPF between the first and third ports, or wherein said power/data splitter comprising a transformer and a capacitor connected to the transformer windings. 64. The apparatus according to claim 62, wherein said power and communication signals are carried using a phantom scheme, and said power/data splitter comprising at least two transformers having a center-tap connection. 65. The apparatus according to claim 59, wherein said power and communication signals are carried substantially according to IEEE 802.3af-2003 or IEEE 802.3at-2009 standards. 66. The apparatus according to claim 44 wherein said second port and said second modem consist of a first network interface, for use with an additional external network and for communicating with the control server over multiple data paths, said apparatus further comprising a second network interface consisting of a third port for coupling to the additional external network, and a third modem coupled to said third port for communication over the additional external network. 67. The apparatus according to claim 66 wherein said first and second network interfaces are of a same type. 68. The apparatus according to claim 66 wherein the external network interface is based on a conductive medium, and wherein said second port is a connector. 69. The apparatus according to claim 68 wherein said connector is selected from a group consisting of a coaxial connector, a twisted-pair connector, an AC power connector, and a telephone connector. 70. 
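The FDM-based power/data splitter recited above separates a low power-signal frequency from a higher communication band, for example with an LPF toward the power port and an HPF toward the data port. As a hypothetical, non-limiting illustration, the corner frequency of a first-order RC section is f_c = 1/(2*pi*R*C); the component values below are assumptions for illustration and do not come from the disclosure:

```python
import math


def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """First-order RC filter corner frequency: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)


# Hypothetical splitter sizing: the LPF toward the power port passes
# the 50/60 Hz power signal, while the HPF toward the data port passes
# the communication band lying well above the corner frequency.
fc = rc_cutoff_hz(1_000, 100e-9)  # 1 kOhm, 100 nF
print(round(fc))  # ~1592 Hz: above mains frequency, below a data band
```

With this corner, a 50/60 Hz power signal is attenuated at the data port while a communication band in the hundreds of kHz or above passes largely unaffected.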
The apparatus according to claim 66 wherein the external network is based on a non-conductive medium, and wherein said second port is a non-conductive coupler. 71. The apparatus according to claim 70 wherein the non-conductive coupler is selected from a group consisting of an antenna, a light emitter, a light detector, a microphone, a speaker, and a fiber-optics connector. 72. The apparatus according to claim 66 wherein the external network is based on a conductive medium and said second port is a connector, and wherein the additional external network is based on a non-conductive medium, and wherein said third port is a non-conductive coupler. 73. The apparatus according to claim 66 wherein said second and third modems are of different scales selected from a group consisting of NFC, PAN, LAN, MAN, or WAN modems, wherein said second and third modems use different modulation schemes selected from a group consisting of AM, FM, and PM, wherein said second and third modems use different duplexing schemes selected from a group consisting of half-duplex, full-duplex, and unidirectional, wherein said second modem is packet-based and said third modem is circuit-switched, or wherein said second port and said third port are the same port used by said first and second network interfaces. 74. The apparatus according to claim 73 wherein said first and second network interfaces are operative to communicate over a same network using FDM, where said first network interface is using a first frequency band and said second network interface is using a second frequency band, that is overlapping or non-overlapping with the first frequency band. 75. 
The apparatus according to claim 66 further operative to send a packet to the control server via said first and second network interfaces to be carried over two distinct data paths, the packet comprising a source address, a destination address, an information type, and an information content, and the packet is sent via said first or second network interfaces selected by a fixed, adaptive, or dynamic selection mechanism. 76. The apparatus according to claim 75 wherein a same packet is sent via said first and second network interfaces. 77. The apparatus according to claim 75 wherein a distinct number is assigned to said first and second network interfaces, and wherein said selection mechanism is using, or based on, the assigned numbers. 78. The apparatus according to claim 77 wherein the assigned numbers represent priority levels associated with said network interfaces, and the network interface having the highest priority level is selected. 79. The apparatus according to claim 77 wherein the network interface is randomly selected from said first and second network interfaces, or wherein the selection mechanism is based on alternate selection. 80. The apparatus according to claim 77 wherein the assigned numbers are based on the associated network types or attributes or the performance history, or wherein the assigned numbers are based on the current or past associated network data rates, transfer delays, network mediums or network medium types, qualities, duplexing schemes, line codes, modulation schemes, switching mechanisms, throughputs, or usages. 81. The apparatus according to claim 77 wherein the network interface is selected based on the packet source address, based on the packet destination address, based on the packet information type, or based on the packet information content. 82. The apparatus according to claim 44 further operative to analyze the sensor data versus the actuator commands. 83. 
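Claims 75 through 80 recite fixed, priority-based, random, and alternate mechanisms for selecting which network interface carries a packet. A hypothetical, non-limiting sketch of such per-packet selectors (the names and structure are illustrative assumptions only):

```python
import itertools
import random


def make_selector(mode, interfaces, priorities=None):
    """Return a function picking one interface per packet, sketching
    the fixed / priority / random / alternate selection mechanisms."""
    if mode == "fixed":
        return lambda packet: interfaces[0]
    if mode == "priority":
        # The interface with the highest assigned number is selected.
        best = max(interfaces, key=lambda i: priorities[i])
        return lambda packet: best
    if mode == "random":
        return lambda packet: random.choice(interfaces)
    if mode == "alternate":
        cycle = itertools.cycle(interfaces)
        return lambda packet: next(cycle)
    raise ValueError(mode)


# Hypothetical interface names:
sel = make_selector("alternate", ["wan0", "wan1"])
print([sel({}) for _ in range(4)])  # ['wan0', 'wan1', 'wan0', 'wan1']
```

The adaptive variants of claim 80 would replace the static priority mapping with numbers derived from measured data rates, delays, or other link attributes.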
The apparatus according to claim 82, wherein the sensor transfer function is characterized as S(s), the actuator transfer function is characterized as C(s), the actuator command is characterized as A(s), and the sensor data is characterized as F(s), and wherein the analysis includes a calculation of F(s)/[S(s)*A(s)*C(s)]. 84. The apparatus according to claim 82, wherein the analysis is used to estimate or determine a condition characteristic or parameter. 85. The apparatus according to claim 82, wherein the analysis is used as sensor data by a control logic, and the apparatus is further periodically operative for initiating and transmitting actuator commands and for analyzing the sensor data versus the transmitted actuator commands. 86. The apparatus according to claim 44 further integrated in part or entirely in an appliance. 87. The apparatus according to claim 44 wherein the internal network is a Body Area Network (BAN), a Personal Area Network (PAN), or a Local Area Network (LAN), and wherein said first port is respectively a BAN, PAN, or LAN port and said first modem is respectively a BAN, PAN, or LAN modem. 88. The apparatus according to claim 87 wherein the LAN is a wired LAN using a wired LAN medium and said LAN port is a LAN connector, wherein the LAN is Ethernet based, and wherein the wired LAN is according to, or based on, IEEE 802.3-2008 standard. 89. The apparatus according to claim 44 wherein the external network is a packet-based or a circuit-switched-based Wide Area Network (WAN), and wherein said second port is a WAN port and said second modem is a WAN transceiver. 90. The apparatus according to claim 44 wherein the enclosed environment is a vehicle and said housing is attachable to the vehicle body, and wherein said apparatus communicates with another vehicle or a roadside unit external to the vehicle over the external network, and the condition is in the vehicle, external to the vehicle, or associated with surroundings around the vehicle. 91. 
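The analysis recited in claim 83 includes calculating F(s)/[S(s)*A(s)*C(s)], where S(s), C(s), A(s), and F(s) characterize the sensor, the actuator, the actuator command, and the sensor data. A hypothetical numerical evaluation at a single complex frequency; the first-order transfer functions below are assumptions for illustration, not functions from the disclosure:

```python
def loop_ratio(F, S, A, C, s):
    """Evaluate the ratio F(s) / [S(s) * A(s) * C(s)] of claim 83,
    with each transfer function supplied as a callable of s."""
    return F(s) / (S(s) * A(s) * C(s))


# Hypothetical first-order transfer functions:
S = lambda s: 1.0 / (s + 1.0)                 # sensor
C = lambda s: 2.0 / (s + 2.0)                 # actuator
A = lambda s: 1.0                             # actuator command (unit)
F = lambda s: 2.0 / ((s + 1.0) * (s + 2.0))   # observed sensor data

print(loop_ratio(F, S, A, C, s=1j))  # approximately 1 when F matches S*A*C
```

A ratio near unity indicates the observed sensor data is consistent with the commanded actuation through the modeled chain; deviations can serve the estimation recited in claim 84.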
The apparatus according to claim 90, wherein the vehicle is one out of a bicycle, a car, a motorcycle, a train, a ship, an aircraft, a boat, a spacecraft, a submarine, a dirigible, an electric scooter, a subway, a trolleybus, a tram, a sailboat, a yacht, and an airplane. 92. The apparatus according to claim 90, wherein the vehicle is an automobile, and wherein said apparatus is coupled to monitor or control an Engine Control Unit (ECU), a Transmission Control Unit (TCU), an Anti-Lock Braking System (ABS), or a Body Control Module (BCM) of the automobile. 93. The apparatus according to claim 90, wherein the internal network is a vehicle bus that is according to, or based on, Control Area Network (CAN) or Local Interconnect Network (LIN). 94. The apparatus according to claim 90, wherein the vehicle further comprising an On-Board Diagnostics (OBD) system, and said apparatus is coupled to or integrated with the OBD system. 95. The apparatus according to claim 94, further operative to communicate to the control server information regarding fuel and air metering, ignition system, misfire, auxiliary emission control, vehicle speed and idle control, transmission, on-board computer, fuel level, relative throttle position, ambient air temperature, accelerator pedal position, air flow rate, fuel type, oxygen level, fuel rail pressure, engine oil temperature, fuel injection timing, engine torque, engine coolant temperature, intake air temperature, exhaust gas temperature, fuel pressure, injection pressure, turbocharger pressure, boost pressure, exhaust pressure, engine run time, NOx sensor, manifold surface temperature, or a Vehicle Identification Number (VIN). 96. 
A control system for commanding an actuator operation in response to processing of an image according to a control logic, for use with one or more in-building or in-vehicle networks for communication in a building or in a vehicle, and an external network at least in part external to the building or to the vehicle, the system comprising:
a router in the building or in the vehicle, connected to the one or more in-building or in-vehicle networks and to the external network, and operative to pass digital data between said in-building or in-vehicle and external networks; a first device in the building or in the vehicle comprising an image sensor for capturing still or video image, the first device is operative to transmit a digital data corresponding to said captured still or video image to said router over said one or more in-building or in-vehicle networks; a second device in the building or in the vehicle comprising an actuator that affects a phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-building or in-vehicle networks; a control server external to the building or to the vehicle storing said control logic, and communicatively coupled to said router over the Internet via said external network; and an image processor having an output for processing said captured still or video image, wherein said control server is operative to produce actuator commands in response to the output of said image processor according to said control logic, and to transmit the actuator commands to said second device via said router, and wherein said image processor is entirely or in part in said first device, said router, or said control server. 97. A control system for commanding an actuator operation according to a control logic, in response to processing of a voice, for use with one or more in-building or in-vehicle networks for communication in a building or in a vehicle, and an external network at least in part external to the building or to the vehicle, the system comprising:
a router in the building or in the vehicle, connected to the one or more in-building or in-vehicle networks and to the external network, and operative to pass digital data between said in-building or in-vehicle and external networks; a first device in the building or in the vehicle comprising a microphone for sensing voice, the first device is operative to transmit a digital data corresponding to said sensed voice to said router over said one or more in-building or in-vehicle networks; a second device in the building or in the vehicle comprising an actuator that affects a phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-building or in-vehicle networks; a control server external to the building or to the vehicle storing said control logic, and communicatively coupled to said router over the Internet via said external network; and a voice processor having an output for processing said voice, wherein said control server is operative to produce actuator commands in response to the output of said voice processor according to said control logic and to transmit the actuator commands to said second device via said router, and wherein said voice processor is entirely or in part in said first device, said router, or said control server. 98. The system according to claim 97 wherein at least one of the in-building or in-vehicle networks is a Body Area Network (BAN), at least one of said router, said first device, and said second device further comprising a BAN interface, and said BAN interface includes a BAN port and a BAN transceiver. 99. 
The system according to claim 98 wherein the BAN is a Wireless BAN (WBAN), said BAN port is an antenna, said BAN transceiver is a WBAN modem, and the BAN is according to, or based on, IEEE 802.15.6 standard. 100. A system for commanding an actuator operation in response to a sensor response associated with a phenomenon according to a control logic, for use with one or more in-building or in-vehicle networks for communication in a building or in a vehicle, and an external network at least in part external to the building or to the vehicle, the system comprising:
a router in a single enclosure in the building or in the vehicle, coupled between the one or more in-building or in-vehicle networks and the external network, and operative to pass digital data between the in-building or in-vehicle and external networks; a first device in a single enclosure in the building or in the vehicle comprising, or connectable to, the sensor that responds to the phenomenon, said first device is operative to transmit a sensor data corresponding to the phenomenon to said router over the one or more in-building or in-vehicle networks; a second device in a single enclosure in the building or in the vehicle comprising, or connectable to, an actuator that affects the phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-building or in-vehicle networks; and a control server external to the building or to the vehicle storing said control logic and coupled to said router over the Internet via the external network, wherein said control server is operative to receive the sensor data from said router, to produce actuator commands in response to the received sensor data according to said control logic, and to transmit the actuator commands to said second device via said router. 101. The system according to claim 100, wherein the actuator is a light source that emits visible or non-visible light for illumination or indication, the non-visible light is infrared, ultraviolet, X-rays, or gamma rays, and wherein the light source is an electric light source for converting electrical energy into light. 102. The system according to claim 101, wherein the electric light source consists of, or comprises, a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode. 103. 
The system according to claim 100, wherein the actuator is a motion actuator that causes linear or rotary motion, and said system further comprising a conversion mechanism for respectively converting to rotary or linear motion based on a screw, a wheel and axle, or a cam. 104. The system according to claim 100, wherein one or more of the in-building or in-vehicle networks is a wired network using a cable for carrying a communication signal, and wherein said router, said first device, or said second device further comprising a connector for connecting to the cable, and wherein the cable is connectable to simultaneously carry a DC or AC power signal and the communication signal. 105. The system according to claim 104, wherein said router, said first device, or said second device is further operative to supply at least part of the power signal or to be at least in part powered from the power signal. 106. The system according to claim 104, wherein the power signal is carried over dedicated wires in the cable, and wherein said wires are distinct from the wires in the cable carrying the communication signal. 107. The system according to claim 104, wherein the power signal and the communication signal are concurrently carried over same wires in the cable, and wherein said connectable device further comprising a power/data splitter arrangement having first, second and third ports, wherein only the communication signal is passed between the first and second ports, and only the power signal is passed between the first and third ports, and wherein the first port is coupled to the connector. 108. 
The system according to claim 107, wherein the power and communication signals are carried using Frequency Division Multiplexing (FDM), where the power signal is carried over a power signal frequency or a power frequency band, and the communication signal is carried over a frequency band above and distinct from the power signal frequency or the power frequency band, and wherein the power/data splitter comprising an HPF between the first and second ports and an LPF between the first and third ports, or wherein said power/data splitter comprising a transformer and a capacitor connected to the transformer windings. 109. The system according to claim 107, wherein said power and digital data signals are carried using a phantom scheme, and said power/data splitter comprising at least two transformers having a center-tap connection. 110. The system according to claim 104, wherein said power and digital data signals are carried substantially according to IEEE 802.3af-2003 or IEEE 802.3at-2009 standards. 111. The system according to claim 100, wherein the actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves. 112. The system according to claim 100 wherein two devices out of a group consisting of said router, said first device, said second device, and said control server are operative for communicating using multiple data paths that are in part or in full, distinct or independent, from each other. 113. The system according to claim 112 wherein the multiple data paths are of a same type, or are using multiple networks that are similar, identical, or different from each other. 114. 
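The FDM power/data splitting recited in claim 108 can be illustrated numerically. The sketch below is a simplified model with assumed parameters, not the claimed circuit: a one-pole digital low-pass filter stands in for the LPF toward the power port, and its complement stands in for the HPF toward the communication port, separating a DC power level from a higher-frequency communication tone sharing the same wire.

```python
import math

def lowpass(samples, alpha):
    """One-pole IIR low-pass: stands in for the LPF toward the power port."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def highpass(samples, alpha):
    """Complement of the low-pass: stands in for the HPF toward the data port."""
    return [x - y for x, y in zip(samples, lowpass(samples, alpha))]

fs = 100_000                                   # sample rate in Hz (assumed)
t = [n / fs for n in range(4000)]
power = [5.0] * len(t)                         # DC power signal on the shared wire
data = [0.2 * math.sin(2 * math.pi * 5000 * x) for x in t]  # 5 kHz data band
composite = [p + d for p, d in zip(power, data)]

recovered_power = lowpass(composite, alpha=0.01)
recovered_data = highpass(composite, alpha=0.01)
```

After the filter settles, `recovered_power` approaches the 5 V level and `recovered_data` tracks the 5 kHz tone, mirroring the requirement that only the power signal pass to the third port and only the communication signal pass to the second port.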
The system according to claim 112 wherein the multiple data paths are using multiple networks, and wherein: at least two out of the multiple networks use similar, identical, or different network mediums; at least two out of the multiple networks use similar, identical, or different protocols; or at least two out of the multiple networks are coupled to using similar, identical, or different physical layers. 115. The system according to claim 112 wherein the multiple data paths are using multiple networks, and wherein: all of the multiple networks use similar, identical, or different network mediums; all of the multiple networks use similar, identical, or different protocols; or all of the multiple networks are coupled to using similar, identical, or different physical layers. 116. The system according to claim 112 wherein the multiple data paths are using multiple networks, and wherein: at least one network is a wired network and at least one network is a wireless network; at least one network is based on conductive medium and at least one network is based on non-conductive medium, the conductive medium is coaxial cable, twisted-pair, powerlines, or telephone lines, and the non-conductive medium is using RF, light or sound guided or over-the-air propagation; at least one network is packet-based and at least one network is circuit-switched; at least one network is a private network and at least one network is public; at least two networks use different line codes or provide different data-rates; at least two networks use different duplexing schemes selected from a group consisting of half-duplex, full-duplex, and unidirectional; at least two networks use different modulation schemes selected from a group consisting of AM, FM, and PM; or at least two networks are of different types selected from a group consisting of NFC, PAN, LAN, MAN and WAN. 117. 
The system according to claim 100 wherein a device out of a group consisting of said router, said first device, said second device, and said control server, is operative for communicating with another device from the group over multiple data paths. 118. The system according to claim 117 wherein said device comprises multiple network interfaces each associated with a respective data path over a respective data path network that is coupled to said respective network interface, and wherein each of said network interfaces comprises a transceiver or a modem for transmitting digital data to, and receiving digital data from, the respective data path network, and a network port for coupling to the respective data path network. 119. The system according to claim 118 wherein at least two out of said network interfaces are of a same type, or wherein all of said network interfaces are of a same type. 120. The system according to claim 118 wherein at least two out of said network interfaces use similar, identical, or different transceivers or modems, or wherein all of said network interfaces use similar, identical, or different transceivers or modems. 121. The system according to claim 118 wherein at least two out of said network interfaces use similar, identical, or different network ports, or wherein all of said network interfaces use similar, identical, or different network ports. 122. The system according to claim 118 wherein at least two out of the data path networks are based on a conductive medium, and wherein said respective network ports are connectors. 123. The system according to claim 122 wherein one or more of said connectors is selected from a group consisting of a coaxial connector, a twisted-pair connector, an AC power connector, and a telephone connector. 124. The system according to claim 118 wherein at least two out of the data path networks are based on a non-conductive medium, and wherein said respective network ports are non-conductive couplers. 125. 
The system according to claim 124 wherein said non-conductive couplers are selected from a group consisting of an antenna, a light emitter, a light detector, a microphone, a speaker, and a fiber-optics connector. 126. The system according to claim 118 wherein one of the data path networks is based on conductive medium, and wherein said respective network port is a connector, and wherein one out of the data path networks is based on a non-conductive medium, and wherein said respective network port is a non-conductive coupler. 127. The system according to claim 118 wherein two out of said modems are of different scales selected from a group consisting of NFC, PAN, LAN, MAN, or WAN modems, wherein two out of said modems use different modulation schemes selected from a group consisting of AM, FM, and PM, wherein two out of said modems use different duplexing schemes selected from a group consisting of half-duplex, full-duplex, and unidirectional, wherein at least one out of said modems is packet-based and at least one out of said modems is circuit-switched, or wherein one of said network ports is used by two distinct network interfaces, designated as first and second network interfaces. 128. The system according to claim 127 wherein said first and second network interfaces are operative to communicate over a same network using FDM, where said first network interface is using a first frequency band and said second network interface is using a second frequency band. 129. The system according to claim 128 wherein the first and second frequency bands are distinct from each other or wherein the first and second frequency bands are in part or in whole overlapping each other, and wherein said first and second network interfaces further respectively comprising first and second filters for substantially passing only signals in the first and second frequency bands respectively. 130. 
The system according to claim 118 wherein said device is operative to send a packet to said another device via one or more of said network interfaces to be carried over the one or more data paths, the packet comprising a source address, a destination address, an information type, and an information content, and the packet is sent via one or more of said network interfaces selected by a fixed, adaptive, or dynamic selection mechanism. 131. The system according to claim 130 wherein a same packet is sent via two or more of said network interfaces, or wherein a same packet is sent via all said network interfaces. 132. The system according to claim 130 wherein a distinct number is assigned to each of said network interfaces, and wherein said selection mechanism is using, or based on, the assigned numbers. 133. The system according to claim 132 wherein the assigned numbers represent priority levels associated with said network interfaces, and the network interface having the highest priority level is selected. 134. The system according to claim 132 wherein one of said network interfaces is randomly selected, or wherein the selection mechanism is based on cyclic selection. 135. The system according to claim 132 wherein the assigned numbers are based on the associated networks types or attributes or the performance history, or wherein the assigned numbers are based on the current or past associated networks data rates, transfer delays, networks mediums or networks mediums types, qualities, duplexing schemes, line codes, modulation schemes, switching mechanisms, throughputs, or usages. 136. The system according to claim 130 wherein the one or more network interfaces are selected based on the packet source address, based on the packet destination address, based on the packet information type, or based on the packet information content. 137. 
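Claims 130 through 136 recite fixed, adaptive, and dynamic mechanisms for choosing which network interface carries a packet. A minimal sketch of three such mechanisms, priority-based, random, and cyclic (round-robin), follows; the interface names and packet values are hypothetical illustrations, not taken from the claims.

```python
import itertools
import random

class NetworkInterface:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority   # the 'assigned number' of claims 132-133
        self.sent = []             # packets carried over this data path

    def send(self, packet):
        self.sent.append(packet)

def select_by_priority(interfaces):
    """Claim 133 style: the interface with the highest priority level is selected."""
    return max(interfaces, key=lambda i: i.priority)

def make_cyclic_selector(interfaces):
    """Claim 134 style cyclic selection: rotate through the interfaces in turn."""
    cycle = itertools.cycle(interfaces)
    return lambda: next(cycle)

def select_randomly(interfaces, rng=random):
    """Claim 134 style random selection."""
    return rng.choice(interfaces)

# Hypothetical packet carrying the four fields recited in claim 130.
packet = {"source": "sensor-1", "destination": "server",
          "type": "telemetry", "content": 21.5}

wlan = NetworkInterface("WLAN", priority=2)
plc = NetworkInterface("powerline", priority=1)
select_by_priority([wlan, plc]).send(packet)   # the higher-priority WLAN carries it
```

The same packet could equally be duplicated across every interface (claim 131) by looping over the list instead of selecting one member.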
The system according to claim 100, wherein said second device further comprising a first electrically actuated switch, the electrically actuated switch coupled for connecting an electric signal to the actuator, and wherein said electrically actuated switch is actuated responsive to the control commands. 138. The system according to claim 137, wherein the electric signal is a power signal from a power source, and wherein said first electrically actuated switch is coupled between the power source and the actuator. 139. The system according to claim 137, wherein said first electrically actuated switch is ‘normally open’ type, ‘normally closed’ type, or a changeover switch, wherein said first electrically actuated switch is ‘make-before-break’ or ‘break-before-make’ type, wherein said first electrically actuated switch has two or more poles or two or more throws, and the contacts of said first electrically actuated switch are arranged as a Single-Pole-Double-Throw (SPDT), Double-Pole-Double-Throw (DPDT), Double-Pole-Single-Throw (DPST), or Single-Pole-Changeover (SPCO), wherein said first electrically actuated switch is a latching or non-latching type relay, and wherein said relay is a solenoid-based electromagnetic relay that is a reed relay, wherein said relay is solid-state or semiconductor based, or wherein said relay is a Solid State Relay (SSR), wherein said first switch is based on an electrical circuit that comprises an open collector transistor, an open drain transistor, a thyristor, a TRIAC, or an opto-isolator, or wherein said second device further comprising a second electrically actuated switch that is connected in parallel or in series with said first electrically actuated switch. 140. The system according to claim 100, wherein said control server is operative to analyze the sensor data versus the transmitted actuator commands. 141. 
The system according to claim 140, wherein the sensor transfer function is characterized as S(s), the actuator transfer function is characterized as C(s), the actuator command is characterized as A(s), and the sensor data is characterized as F(s), and wherein the analysis includes the calculation of F(s)/[S(s)*A(s)*C(s)]. 142. The system according to claim 140, wherein the analysis is used to estimate or determine a phenomenon characteristic or parameter. 143. The system according to claim 140, wherein the analysis is used as sensor data by the control logic, and the system is further periodically operative for initiating actuator commands and for analyzing the sensor data versus the transmitted actuator commands. 144. The system according to claim 100 further implementing redundancy, where the system further includes an additional sensor that responds to the phenomenon by outputting additional sensor data, an additional actuator that affects the phenomenon, or a redundant data path, and wherein the redundancy is based on Dual Modular Redundancy (DMR), Triple Modular Redundancy (TMR), Quadruple Modular Redundancy (QMR), 1:N Redundancy, ‘Cold Standby’, or ‘Hot Standby’. 145. The system according to claim 144 wherein said additional sensor is identical to, similar to, or different from, the sensor, said control server is operative to receive said additional sensor data, and said control logic produces actuator commands in response to the received said additional sensor data. 146. The system according to claim 144 wherein said additional actuator is identical to, similar to, or different from, the actuator. 147. The system according to claim 144 wherein the redundant data path is identical to, similar to, or different from, a data path connecting devices in the system. 148. 
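The analysis of claim 141 divides the observed sensor data F(s) by the product of the known sensor, command, and actuator transfer characteristics, which isolates whatever the loop itself does not account for. Treating each transfer function as a complex gain at a single frequency gives a compact numerical illustration; all values below are assumed for illustration and are not taken from the disclosure.

```python
# Each transfer function modeled as a complex gain at one frequency s = j*omega.
S = 0.9 - 0.1j            # sensor transfer function S(s)   (assumed value)
C = 2.0 + 0.3j            # actuator transfer function C(s) (assumed value)
A = 1.5 + 0.0j            # actuator command A(s)           (assumed value)
phenomenon = 1.1 - 0.2j   # the unknown response being probed

# What the sensor would report back around the loop:
F = S * C * A * phenomenon

# Claim 141's ratio recovers the unknown term:
estimate = F / (S * A * C)
```

The estimate equals the unknown phenomenon response up to floating-point rounding, which is why claim 142 can use this ratio to determine a phenomenon characteristic or parameter.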
The system according to claim 144 further including an additional sensor that responds to the phenomenon, and wherein said control server is operative to receive the additional sensor data, and wherein said control logic at one time produces actuator commands in response only to the received additional sensor digital data. 149. The system according to claim 148 further including a third device in the building or in the vehicle comprising said additional sensor that responds to the phenomenon, said third device is operative to transmit the additional sensor data to said router over one of the in-building or in-vehicle networks. 150. The system according to claim 144 further including an additional actuator that affects the phenomenon, and wherein said control server is operative to transmit the additional actuator commands to said additional actuator. 151. The system according to claim 144 further including an additional actuator that affects the phenomenon, and wherein said control server at one time is operative to transmit the additional actuator commands only to said additional actuator. 152. The system according to claim 151 further including a third device in the building or in the vehicle comprising said additional actuator that affects the phenomenon, said third device is operative to receive and execute the additional actuator commands received from said router. 153. The system according to claim 100 further comprising a third device that comprises an additional sensor that responds to a second phenomenon, the third device is operative to transmit said additional sensor data corresponding to the second phenomenon to said router over one of the in-building or in-vehicle networks. 154. 
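The Triple Modular Redundancy (TMR) recited in claim 144, combined with the additional sensors of claims 148-149, reduces at the control server to majority voting over redundant readings. The voting rule below is standard TMR practice sketched generically, not text from the claims.

```python
from collections import Counter

def majority_vote(readings):
    """Return the value reported by a strict majority of redundant sensors.

    With three readings this is classic TMR: one faulty sensor is outvoted.
    """
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("redundant sensors disagree; no majority")
    return value

# One failed sensor out of three is masked:
agreed = majority_vote([21.5, 21.5, 99.0])
```

The same voter extends to the 1:N and QMR variants of claim 144 by passing more readings; ‘Hot Standby’ instead keeps the spare powered and switches to it only on failure.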
The system according to claim 153 wherein the second phenomenon is same as, or distinct from, the phenomenon, and the type of said sensor of said third device is distinct from, or of a same type as, that of the sensor, and wherein said third device communicates with said router over the same, or over a distinct, in-building or in-vehicle network used by said first device. 155. The system according to claim 100 further comprising a third device that comprises an additional actuator that affects a second phenomenon, the third device is operative to receive the additional actuator commands from said router over one of the in-building or in-vehicle networks. 156. The system according to claim 155 wherein the second phenomenon is same as, or distinct from, the phenomenon. 157. The system according to claim 155 wherein the type of said additional actuator is same as, or distinct from, the type of the actuator. 158. The system according to claim 155 wherein said third device communicates with said router over an in-building or in-vehicle network that is distinct from, or of a same type as, the in-building or in-vehicle network used by said second device. 159. The system according to claim 100, wherein said first device, said second device, or said router is integrated in part or entirely in an appliance. 160. The system according to claim 159, wherein a primary functionality of said appliance is associated with food storage, handling, or preparation. 161. The system according to claim 160, wherein a primary function of said appliance is heating food, and wherein said appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker. 162. The system according to claim 160, wherein said appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker. 163. 
The system according to claim 159, wherein a primary function of said appliance is associated with environmental control, and said appliance consists of, or is part of, an HVAC system. 164. The system according to claim 163, wherein a primary function of said appliance is associated with temperature control, and wherein said appliance is an air conditioner or a heater. 165. The system according to claim 159, wherein a primary function of said appliance is associated with cleaning, wherein said primary function is associated with clothes cleaning, and the appliance is a washing machine or a clothes dryer, or wherein said appliance is a vacuum cleaner. 166. The system according to claim 159, wherein a primary function of said appliance is associated with water control or water heating. 167. The system according to claim 159, wherein said appliance is an answering machine, a telephone set, a home cinema system, a HiFi system, a CD or DVD player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier. 168. The system according to claim 159, wherein said appliance is a battery-operated portable electronic device, and said appliance is a notebook, a laptop computer, a media player, a cellular phone, a Personal Digital Assistant (PDA), an image processing device, a digital camera, a video recorder, or a handheld computing device. 169. The system according to claim 159, wherein said integration involves sharing a component. 170. The system according to claim 169, wherein said integration involves housing in same enclosure, sharing same processor, or mounting onto same surface. 171. The system according to claim 169, wherein said integration involves sharing a same connector. 172. 
The system according to claim 171, wherein said connector is a power connector for connecting to a power source, and wherein said integration involves sharing the same connector for being powered from same power source, or wherein said integration involves sharing same power supply. 173. The system according to claim 100, wherein said first device or said second device is integrated with, or enclosed with, said router. 174. The system according to claim 100 wherein the sensor is an image sensor for capturing a still or video image, and the system further comprising an image processor having an output for processing the captured image. 175. The system according to claim 174 wherein said image processor is entirely or in part in said first device, said router, said control server, or any combination thereof, and wherein said control logic responds to the output of said image processor. 176. The system according to claim 174 wherein said image sensor is a digital video sensor for capturing digital video content, and wherein said image processor is operative for enhancing said video content using image stabilization, unsharp masking, or super-resolution. 177. The system according to claim 174 wherein said image sensor is a digital video sensor for capturing digital video content, and wherein said image processor is operative for Video Content Analysis (VCA). 178. The system according to claim 177 wherein said VCA includes Video Motion Detection (VMD), video tracking, egomotion estimation, identification, behavior analysis, situation awareness, dynamic masking, motion detection, object detection, face recognition, automatic number plate recognition, tamper detection, or pattern recognition. 179. The system according to claim 174 wherein said image processor is operative for detecting a location of an element in the captured image. 180. The system according to claim 179 wherein the element is a human body part. 181. 
The system according to claim 180 wherein the element is a human face or a human hand. 182. The system according to claim 179 wherein said image processor is operative for detecting a motion of the element in the captured image, or wherein said image processor is operative for detecting multiple elements in the captured image, and said image processor is operative for detecting and counting a number of the elements in the captured image. 183. The system according to claim 100 wherein at least one of the in-building or in-vehicle networks is a Personal Area Network (PAN), at least one of said router, said first device, and said second device further comprising a PAN interface, and said PAN interface includes a PAN port and a PAN transceiver. 184. The system according to claim 183 wherein the PAN is a Wireless PAN (WPAN), said PAN port is an antenna, and said PAN transceiver is a WPAN modem, and wherein the WPAN is according to, or based on, Bluetooth™ or IEEE 802.15.1-2005 standards, or wherein the WPAN is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards. 185. The system according to claim 100 wherein: at least one of the in-building or in-vehicle networks is a Local Area Network (LAN); at least one of said router, said first device, and said second device further comprising a LAN interface; and said LAN interface includes a LAN port and a LAN transceiver. 186. The system according to claim 185 wherein: the LAN is a wired LAN using a wired LAN medium; said LAN port is a LAN connector; and said LAN transceiver is a LAN modem, and wherein: the LAN is Ethernet based; and the wired LAN is according to, or based on, IEEE 802.3-2008 standard. 187. 
The system according to claim 186 wherein: the wired LAN medium is based on twisted-pair copper cables; said LAN interface is 10Base-T, 100Base-T, 100Base-TX, 100Base-T2, 100Base-T4, 1000Base-T, 1000Base-TX, 10GBase-CX4, or 10GBase-T; and said LAN connector is RJ-45 type, or wherein: said wired LAN medium is based on an optical fiber; said LAN interface is 100Base-FX, 100Base-SX, 100Base-BX, 100Base-LX10, 1000Base-CX, 1000Base-SX, 1000Base-LX, 1000Base-LX10, 1000Base-ZX, 1000Base-BX10, 10GBase-SR, 10GBase-LR, 10GBase-LRM, 10GBase-ER, 10GBase-ZR, or 10GBase-LX4; and said LAN connector is a fiber-optic connector. 188. The system according to claim 185 wherein: the LAN is a Wireless LAN (WLAN); said LAN port is a WLAN antenna; and said LAN transceiver is a WLAN modem, and wherein the WLAN is according to, or based on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac. 189. The system according to claim 100 wherein: at least one of said in-building or in-vehicle networks is a packet-based or a circuit-switched-based Home Network (HN); at least one of said router, said first device, and said second device further comprising an HN interface; and said HN interface includes an HN port and an HN transceiver, and wherein: the HN is a wired HN using a wired HN medium; said HN port is an HN connector; and said HN transceiver is an HN modem, and wherein the wired HN medium comprises a wiring primarily installed for carrying a service signal, where the wiring is in-wall wiring connected to via a wiring connector at a service outlet. 190. The system according to claim 189 wherein: said wiring is a telephone wire pair; the service signal is an analog telephone signal (POTS); and said wiring connector is a telephone connector, and wherein said HN is according to, or based on, HomePNA standard, ITU-T Recommendation G.9954, ITU-T Recommendation G.9960, ITU-T Recommendation G.9970, or ITU-T Recommendation G.9961. 191. 
The system according to claim 189 wherein: said wiring is a coaxial cable; the service signal is a Cable Television (CATV) signal; and said wiring connector is a coaxial connector, wherein said HN is according to, or based on, HomePNA standard or Multimedia over Coax Alliance (MoCA) standard that is according to, or based on, ITU-T Recommendation G.9954, ITU-T Recommendation G.9960, ITU-T Recommendation G.9970, or ITU-T Recommendation G.9961. 192. The system according to claim 189 wherein: the wiring is AC power wires; the service signal is an AC power signal; and said wiring connector is an AC power connector, and wherein the HN is according to, or based on, HomePlug™ standard, HD-PLC standard, Universal Powerline Association (UPA) standard, IEEE 1901-2010, ITU-T Recommendation G.9960, ITU-T Recommendation G.9961, ITU-T Recommendation G.9970, or ITU-T Recommendation G.9972. 193. The system according to claim 100 wherein the external network is a packet-based or a circuit-switched-based Wide Area Network (WAN), and wherein said router comprising a WAN interface, and wherein said WAN interface includes a WAN port and a WAN transceiver. 194. The system according to claim 193 wherein the WAN is a wired WAN using a wired WAN medium, said WAN port is a WAN connector, and said WAN transceiver is a WAN modem, and wherein the wired WAN medium comprises a wiring primarily installed for carrying a service signal to the building or to the vehicle. 195. The system according to claim 194 wherein the wired WAN medium comprises one or more telephone wire pairs primarily designed for carrying an analog telephone signal, and wherein said external network is using Digital Subscriber Line/Loop (DSL). 196. 
The system according to claim 195 wherein the external network is based on Asymmetric Digital Subscriber Line (ADSL), ADSL2, or ADSL2+, according to, or based on, ANSI T1.413, ITU-T Recommendation G.992.1, ITU-T Recommendation G.992.2, ITU-T Recommendation G.992.3, ITU-T Recommendation G.992.4, or ITU-T Recommendation G.992.5, or wherein the external network is based on Very-high-bit-rate Digital Subscriber Line (VDSL), according to, or based on, ITU-T Recommendation G.993.1 or ITU-T Recommendation G.993.2. 197. The system according to claim 194 wherein the wired WAN medium comprises AC power wires primarily designed for carrying an AC power signal to the building or to the vehicle, and the network is using Broadband over Power Lines (BPL) according to, or based on, IEEE 1675-2008 or IEEE 1901-2010, wherein the wired WAN medium comprises coaxial cable primarily designed for carrying a CATV signal to the building or to the vehicle, and the network is using Data-Over-Cable Service Interface Specification (DOCSIS), according to, or based on, ITU-T Recommendation J.112, ITU-T Recommendation J.122, or ITU-T Recommendation J.222, or wherein the wired WAN medium comprises an optical fiber, said WAN connector is a fiber-optic connector, and the WAN is based on Fiber-To-The-Home (FTTH), Fiber-To-The-Building (FTTB), Fiber-To-The-Premises (FTTP), Fiber-To-The-Curb (FTTC), or Fiber-To-The-Node (FTTN). 198. The system according to claim 193 wherein the WAN is a wireless broadband network over a licensed or unlicensed radio frequency band, said WAN port is an antenna, and said WAN transceiver is a wireless modem, and wherein the unlicensed radio frequency band is an Industrial, Scientific and Medical (ISM) radio band. 199. The system according to claim 198 wherein the wireless network is a satellite network, said antenna is a satellite antenna, and said wireless modem is a satellite modem. 200. 
The system according to claim 198 wherein the wireless network is a WiMAX network, wherein said antenna is a WiMAX antenna and said wireless modem is a WiMAX modem, and the WiMAX network is according to, or based on, IEEE 802.16-2009. 201. The system according to claim 198 wherein the wireless network is a cellular telephone network, said antenna is a cellular antenna, and said wireless modem is a cellular modem, and wherein the cellular telephone network is a Third Generation (3G) network that uses UMTS W-CDMA, UMTS HSPA, UMTS TDD, CDMA2000 1xRTT, CDMA2000 EV-DO, or GSM EDGE-Evolution, or wherein the cellular telephone network is a Fourth Generation (4G) network that uses HSPA+, Mobile WiMAX, LTE, LTE-Advanced, MBWA, or is based on IEEE 802.20-2008. 202. A vehicle control system for commanding an actuator operation in response to a sensor response associated with a phenomenon according to a control logic, for use with one or more in-vehicle networks for communication in a vehicle, and one or more external networks for communicating with an Internet-connected control server via another vehicle or a roadside unit external to the vehicle, the system comprising:
a router in the vehicle, connected to the one or more in-vehicle networks and to the one or more of the external networks, and operative to pass digital data between said in-vehicle and one or more of the external networks; a first device in the vehicle comprising, or connectable to, a sensor that responds to the phenomenon, the first device is operative to transmit a sensor digital data corresponding to the phenomenon to said router over said one or more in-vehicle networks; a second device in the vehicle comprising, or connectable to, an actuator that affects the phenomenon, the second device is operative to execute actuator commands received from said router over said one or more in-vehicle networks; and a control server external to the vehicle storing the control logic, and communicatively coupled to said router over the Internet via said one or more of the external networks, wherein said control server is operative to receive the sensor digital data from said router, to produce actuator commands in response to the received sensor digital data according to the control logic, and to transmit the actuator commands to said second device via said router. 203. The system according to claim 202, wherein at least one of said external networks is a vehicle-to-vehicle network for communicating with said control server via another vehicle. 204. The system according to claim 202, wherein at least one of the external networks is communicating with a stationary device, and wherein the stationary device is a roadside unit. 205. The system according to claim 202, wherein said router, said first device, and said second device are mechanically attached to the vehicle. 206. The system according to claim 202, wherein the vehicle is adapted for travelling on land, or water, or is airborne. 207. 
The system according to claim 202, wherein the vehicle is one out of a bicycle, a car, a motorcycle, a train, a ship, an aircraft, a boat, a spacecraft, a submarine, a dirigible, an electric scooter, a subway, a trolleybus, a tram, a sailboat, a yacht, and an airplane. 208. The system according to claim 202, wherein the sensor is operative to sense the phenomenon in the vehicle, external to the vehicle, or associated with surroundings around the vehicle. 209. The system according to claim 202, wherein the actuator is operative to affect the phenomenon in the vehicle, external to the vehicle, or associated with surroundings around the vehicle. 210. The system according to claim 202, wherein the vehicle is an automobile, and wherein said system is coupled to monitor or control an Engine Control Unit (ECU), a Transmission Control Unit (TCU), an Anti-Lock Braking System (ABS), or a Body Control Module (BCM) of the automobile. 211. The system according to claim 202 further integrated with or being part of a vehicular communication system used for improved safety, traffic flow control, traffic reporting, or traffic management. 212. The system according to claim 202 further used for parking help, cruise control, lane keeping, road sign recognition, surveillance, speed limit warning, restricted entries and pull-over commands, travel information, cooperative adaptive cruise control, cooperative forward collision warning, intersection collision avoidance, approaching emergency vehicle warning, vehicle safety inspection, transit or emergency vehicle signal priority, electronic parking payments, commercial vehicle clearance and safety inspections, in-vehicle signing, rollover warning, probe data collection, highway-rail intersection warning, or electronic toll collection. 213. The system according to claim 202, wherein one or more of the in-vehicle networks is a vehicle bus. 214. 
The system according to claim 213, wherein the vehicle bus is according to, or based on, Controller Area Network (CAN) or Local Interconnect Network (LIN). 215. The system according to claim 202, wherein one or more of the in-vehicle networks is using a communication medium that is based on DC power lines of the vehicle. 216. The system according to claim 202, wherein the vehicle further comprises an On-Board Diagnostics (OBD) system. 217. The system according to claim 216, wherein said system is coupled to or integrated with the OBD system. 218. The system according to claim 217, wherein the OBD system is according to, or based on, OBD-II or EOBD (European On-Board Diagnostics) standards. 219. The system according to claim 217, wherein the OBD system further comprises a diagnostics connector, and wherein said router, said first device, or said second device are coupled to the diagnostics connector. 220. The system according to claim 219, wherein said router, said first device, or said second device are at least in part powered via the diagnostics connector. 221. The system according to claim 202, wherein said router is operative to communicate to said control server information regarding fuel and air metering, ignition system, misfire, auxiliary emission control, vehicle speed and idle control, transmission, on-board computer, fuel level, relative throttle position, ambient air temperature, accelerator pedal position, air flow rate, fuel type, oxygen level, fuel rail pressure, engine oil temperature, fuel injection timing, engine torque, engine coolant temperature, intake air temperature, exhaust gas temperature, fuel pressure, injection pressure, turbocharger pressure, boost pressure, exhaust pressure, engine run time, NOx sensor, manifold surface temperature, or a Vehicle Identification Number (VIN). 222. 
The system according to claim 214, wherein one out of the in-vehicle networks is according to, or based on, SAE J1962, SAE J1850, SAE J1979, ISO 15765, or ISO 9141 standard. 223. A control system comprising:
a sensor disposed in an enclosed environment that senses a condition in the enclosed environment and provides sensor data corresponding to the condition; an internal network extending substantially within the enclosed environment; an external network, coupled to the Internet, extending substantially outside the enclosed environment; a control server, disposed outside the enclosed environment, coupled to the Internet, said server receiving data corresponding to the sensor data and executing control logic therein so as to generate actuator commands responsive to the received sensor data; a router coupled to said internal and external networks so as to pass information between said internal and external networks, and configured to deliver the sensor data from said internal to said external networks and to deliver the actuator commands from said external to said internal networks; and an actuator disposed within the enclosed environment, receiving the actuator commands from said router, said actuator operative to affect the condition in the enclosed environment. 224. The system according to claim 223 wherein the external network or the internal network is a wireless network using a wireless communication over a licensed or an unlicensed radio frequency band. 225. The system according to claim 224 wherein the unlicensed radio frequency band is an Industrial, Scientific and Medical (ISM) radio band. 226. The system according to claim 224 wherein the unlicensed radio frequency band is about 60 GHz, and wherein the internal network is based on beamforming and supports a data rate of above 7 Gb/s and is used for in-room communication. 227. The system according to claim 226 wherein the internal network is according to, or based on, WiGig™, IEEE 802.11ad, WirelessHD™, or IEEE 802.15.3c-2009. 228. The system according to claim 224 wherein the internal network is operative to carry uncompressed video data according to, or based on, WHDI™. 229. 
The system according to claim 224 wherein the wireless network is using a white space spectrum or an analog television channel consisting of a 6 MHz, 7 MHz or 8 MHz frequency band, and allocated in a 54-806 MHz band, and the system is further operative for channel bonding, where the wireless network is using two or more analog television channels. 230. The system according to claim 224 wherein the external network or at least one of the in-building or in-vehicle networks is using a wireless communication that is based on a Wireless Regional Area Network (WRAN) standard communicatively coupling a Base Station (BS) and one or more CPEs using OFDMA modulation. 231. The system according to claim 230 wherein said router serves as a BS or as a CPE. 232. The system according to claim 230 wherein the wireless communication is based on geographically-based cognitive radio, and is according to, or based on, IEEE 802.22 or IEEE 802.11af standards. 233. The system according to claim 230 wherein said wireless network is based on, or is according to, Near Field Communication (NFC) that is based on a standard that is according to, or based on, ISO/IEC 18092, ECMA-340, ISO/IEC 21481, or ECMA-352, and wherein the wireless communication couples an initiator and a target. 234. The system according to claim 233 wherein said wireless network is using a 13.56 MHz frequency band, a data rate of 106 Kb/s, 212 Kb/s, or 424 Kb/s, and a modulation that is Amplitude-Shift-Keying (ASK), and the wireless network is using a passive or an active communication mode. 235. The system according to claim 234 wherein said router serves as an initiator or as a target or transponder. | 2,400 |
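Claims 202 and 223 above describe the same closed control loop: a sensor's data crosses an internal (in-vehicle or in-building) network to a router, travels over an external network to an Internet-connected control server that stores the control logic, and actuator commands flow back along the same path. The following is a minimal Python sketch of that loop, not an implementation from the application; every class, method, and threshold here is invented for illustration.

```python
# Illustrative sketch of the sensor -> router -> control server -> actuator
# loop of claims 202/223. All names and the threshold logic are hypothetical.

class Sensor:
    """First device: responds to a phenomenon (here, cabin temperature)."""
    def __init__(self, reading):
        self.reading = reading

    def sample(self):
        return {"temperature_c": self.reading}

class Actuator:
    """Second device: affects the phenomenon by executing received commands."""
    def __init__(self):
        self.state = "off"

    def execute(self, command):
        self.state = command["fan"]

class ControlServer:
    """Remote server storing the control logic (here, a simple threshold)."""
    def __init__(self, setpoint_c):
        self.setpoint_c = setpoint_c

    def decide(self, sensor_data):
        fan = "on" if sensor_data["temperature_c"] > self.setpoint_c else "off"
        return {"fan": fan}

class Router:
    """Passes digital data between the internal and external networks."""
    def __init__(self, server):
        self.server = server

    def control_cycle(self, sensor, actuator):
        data = sensor.sample()              # internal network: sensor -> router
        command = self.server.decide(data)  # external network: router <-> server
        actuator.execute(command)           # internal network: router -> actuator
        return actuator.state

router = Router(ControlServer(setpoint_c=25.0))
state = router.control_cycle(Sensor(reading=28.5), Actuator())
```

In the claims the control logic is whatever the remote server stores; the threshold above merely stands in for it so the round trip is visible end to end.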
7,567 | 7,567 | 14,458,667 | 2,447 | A content management system interface at a local computer device is configured to receive user file commands from a file manager and translate the user file commands into content management commands for sending to the remote content management system via a network interface. The content management system interface can further be configured to receive remote file information from the remote content management system via the network interface and translate the remote file information into user file information for the file manager. | 1. A computer device for interfacing with a remote content management system, the computer device comprising:
memory; a network interface; a processor coupled to the memory and the network interface, the processor configured to execute:
a file manager stored in the memory, the file manager for receiving user file commands and outputting user file information;
a content management system interface stored in the memory, the content management system interface configured to receive user file commands from the file manager and translate the user file commands into content management commands for sending to the remote content management system via the network interface, the content management system interface further configured to receive remote file information from the remote content management system via the network interface and translate the remote file information into user file information for the file manager. 2. The device of claim 1, further comprising a file system driver stored in the memory and executable by the processor, the file system driver coupled to the file manager and coupled to the content management system interface, the file system driver configured to receive the user file commands from the file manager and output user file information to the file manager, the content management system interface configured to receive the user file commands from the file system driver and to provide the user file information to the file system driver. 3. The device of claim 1, further comprising a file system driver stored in the memory and executable by the processor, the file system driver coupled to the file manager and coupled to the content management system interface, the file system driver mapping the content management system interface as a local drive for access by the file manager. 4. The device of claim 1, wherein the file manager is configured to provide a graphical user interface for navigating and manipulating a hierarchy of folders and files. 5. The device of claim 1, wherein the file manager is configured to receive user file commands from an application that is executed by the processor. 6. 
The device of claim 1, wherein the content management system interface comprises a cache of one or more temporary files, the content management system interface configured to output a master file based on the one or more temporary files to the remote content management system upon receiving a specific user file command, the content management system interface configured to not output to the remote content management system any of the one or more temporary files associated with the master file. 7. The device of claim 6, wherein the cache is encrypted and the content management system interface is configured to authenticate a user for decrypting the cache. 8. The device of claim 6, wherein the content management system interface is configured to reference filename masks to differentiate temporary files from master files. 9. The device of claim 8, wherein each individual user profile of a plurality of user profiles for the remote content management system is associated with a unique set of filename masks. 10. The device of claim 1, wherein the content management system interface is configured to block a user file command received from the file manager for a particular file when the user file command violates a permission set at the remote content management system. 11. The device of claim 1, wherein the content management system interface is configured to block delivery of remote file information to the file manager when delivery of the remote file information violates a permission set on the remote content management system. 12. The device of claim 1, wherein the content management system interface is configured to map a path on the remote content management system to a truncated path that contains a portion of the path, and provide the truncated path to the file manager as an alias for addressing the path. 13. 
The device of claim 1, wherein the content management system interface is configured to migrate files from an existing file store to the remote content management system by determining if a received user file command is associated with a remote file stored on the remote content management system, and if the user file command is not associated with the remote file then copying the remote file from the existing file store to the remote content management system. | 2,400 |
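Claims 6 through 9 of application 14,458,667 describe holding temporary files in a local cache, outputting only master files to the remote content management system, telling the two apart by filename masks, and letting each user profile carry its own mask set. A small Python sketch of that filtering step, under the assumption that wildcard masks are the differentiator; the mask patterns and function names below are hypothetical, not taken from the application.

```python
# Sketch of filename-mask-based temporary/master file differentiation
# (claims 6-9). Masks and names are invented for illustration.
import fnmatch

# Each user profile is associated with its own set of filename masks
# identifying temporary files (claim 9).
TEMP_MASKS = {
    "alice": ["~$*", "*.tmp"],           # e.g., Office lock/temp files
    "bob":   ["*.swp", "*.bak", ".#*"],  # e.g., editor swap/backup files
}

def is_temporary(filename, user):
    """True if the filename matches any temporary-file mask for this user (claim 8)."""
    return any(fnmatch.fnmatch(filename, mask) for mask in TEMP_MASKS.get(user, []))

def files_to_upload(filenames, user):
    """Only master files are output to the remote content management system;
    temporary files associated with them stay in the local cache (claim 6)."""
    return [f for f in filenames if not is_temporary(f, user)]

uploads = files_to_upload(["report.docx", "~$report.docx", "draft.tmp"], "alice")
```

The point of the per-user mask sets is that different users' editing tools leave different temporary-file debris, so the filter is profile-specific rather than global.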
7,568 | 7,568 | 14,184,132 | 2,487 | A shape measuring apparatus includes a first light source, a second light source, an optical system, an image capturer, and a controller. The first light source emits visible light. The second light source emits measurement light used in a measurement. The optical system emits the visible light and the measurement light at the same position on a work piece. The image capturer captures an image of the measurement light reflected by the work piece. The controller is configured to cause the emission of the visible light onto the work piece with the first light source when determining a measurement position, and to control the emission of the measurement light onto the work piece with the second light source when making the measurement. | 1. A shape measuring apparatus comprising:
a first light source configured to emit visible light; a second light source configured to emit measurement light used in a measurement; an optical system configured to emit the visible light and the measurement light at the same position on a work piece; an image capturer configured to capture an image of the measurement light reflected by the work piece; and a controller configured to control the emission of the visible light onto the work piece with the first light source when determining a measurement position, and further configured to control the emission of the measurement light onto the work piece with the second light source when making the measurement. 2. The shape measuring apparatus according to claim 1, wherein the measurement light is invisible light. 3. The shape measuring apparatus according to claim 1, wherein the measurement light is infrared light. 4. The shape measuring apparatus according to claim 2, wherein the measurement light is infrared light. 5. The shape measuring apparatus according to claim 1, wherein the measurement light is ultraviolet light. 6. The shape measuring apparatus according to claim 2, wherein the measurement light is ultraviolet light. 7. The shape measuring apparatus according to claim 1, wherein the optical system is further configured to emit the visible light and the measurement light in a straight line form. 8. The shape measuring apparatus according to claim 2, wherein the optical system is further configured to emit the visible light and the measurement light in a straight line form. 9. The shape measuring apparatus according to claim 3, wherein the optical system is further configured to emit the visible light and the measurement light in a straight line form. 10. The shape measuring apparatus according to claim 4, wherein the optical system is further configured to emit the visible light and the measurement light in a straight line form. 11. 
The shape measuring apparatus according to claim 5, wherein the optical system is further configured to emit the visible light and the measurement light in a straight line form. 12. The shape measuring apparatus according to claim 6, wherein the optical system is further configured to emit the visible light and the measurement light in a straight line form. 13. The shape measuring apparatus according to claim 1, wherein the optical system is further configured to sweep the visible light and the measurement light in a straight line form. 14. The shape measuring apparatus according to claim 2, wherein the optical system is further configured to sweep the visible light and the measurement light in a straight line form. 15. The shape measuring apparatus according to claim 3, wherein the optical system is further configured to sweep the visible light and the measurement light in a straight line form. 16. The shape measuring apparatus according to claim 4, wherein the optical system is further configured to sweep the visible light and the measurement light in a straight line form. 17. The shape measuring apparatus according to claim 5, wherein the optical system is further configured to sweep the visible light and the measurement light in a straight line form. 18. The shape measuring apparatus according to claim 6, wherein the optical system is further configured to sweep the visible light and the measurement light in a straight line form. | 2,400 |
7,569 | 7,569 | 14,743,662 | 2,423 | In an intra-block copy video encoding method, an encoder performs a hash-based search to identify a selected set of candidate blocks for prediction of an input video block. For each of the candidate blocks in the selected set, the encoder determines a correlation between, on the one hand, luma and chroma components of the input video block and, on the other hand, luma and chroma components of the respective candidate blocks. A predictor block is selected based on the correlation and is used to encode the input video block. In different embodiments, the correlation may be the negative of the sum of absolute differences of the components, may include a Jaccard similarity measure between respective pixels, or may be based on a Hamming distance between two high precision hash values of the input video block and the candidate block. | 1. A method of generating a bit stream encoding a video including an input video block, the method comprising:
identifying a selected set of candidate blocks for prediction of the input video block, where the identification of the selected set includes performing a hash-based search of available video blocks; for each of the candidate blocks in the selected set, determining a correlation between luma and chroma components of the input video block and luma and chroma components of the respective candidate blocks; selecting a predictor block based on the correlation; and encoding the input video block in the bit stream using the selected predictor block for prediction of the input video block. 2. The method of claim 1, wherein the identification of the selected set further includes performing a spatial search of the available video blocks. 3. The method of claim 1, wherein the hash-based search includes a search of blocks having a hash value equal to a hash value of the input video block. 4. The method of claim 3, wherein the hash-based search identifies the selected set of candidate blocks based on a comparison between the luma component of the input video block and the luma component of the respective available video blocks. 5. The method of claim 3, wherein the hash-based search identifies the selected set of candidate blocks based on a sum of absolute differences between luma pixels of the input video block and corresponding luma pixels of the respective available video blocks. 6. The method of claim 1, wherein determining a correlation includes determining a sum of absolute differences between the luma and chroma pixels of the input video block and the corresponding luma and chroma pixels of the respective candidate blocks. 7. The method of claim 6, wherein the correlation is the negative of the sum of absolute differences. 8. The method of claim 1, wherein determining a correlation includes determining a Jaccard similarity measure between corresponding pixels of the input video block and of the respective candidate blocks. 9. 
The method of claim 1, wherein determining a correlation includes determining a Hamming distance between a high-precision hash value of the input video block and high-precision hash values of the respective candidate blocks. 10. The method of claim 9, wherein the high-precision hash values are cyclic redundancy check values. 11. A method of generating a bit stream encoding a video including an input video block, the method comprising:
determining a hash value for the input video block; identifying a first set of candidate blocks for prediction of the input video block, wherein identifying the first set of candidate blocks includes identifying available video blocks having respective hash values equal to the hash value of the input video block; from the candidate blocks in the first set, selecting a second set of candidate blocks based on comparison of the luma component of the input video block with the luma component of the respective candidate blocks; for each of the candidate blocks in the second set, determining a correlation between luma and chroma components of the input video block and luma and chroma components of the respective candidate blocks in the second set; selecting a predictor block based on the correlation; and encoding the input video block in the bit stream using the selected predictor block for prediction of the input video block. 12. The method of claim 11, wherein the identification of the first set of candidate blocks includes identifying available video blocks located in a predetermined spatial range. 13. The method of claim 11, wherein the comparison of the luma component of the input video block with the luma component of the respective candidate blocks includes determining, for each of the respective candidate blocks, a sum of absolute differences between the luma pixels of the input video block and the corresponding luma pixels of the respective candidate blocks. 14. The method of claim 13, wherein the second set of candidate blocks includes those of the first set of candidate blocks having the N lowest values of the sum of absolute differences, where N is a predetermined integer greater than or equal to one. 15. The method of claim 11, wherein determining a correlation includes determining a sum of absolute differences between the luma and chroma pixels of the input video block and the corresponding luma and chroma pixels of the respective candidate blocks. 16. 
The method of claim 15, wherein the correlation is the negative of the sum of absolute differences. 17. The method of claim 11, wherein determining a correlation includes determining a Jaccard similarity measure between corresponding pixels of the input video block and of the respective candidate blocks. 18. The method of claim 11, wherein determining a correlation includes determining a Hamming distance between a high-precision hash value of the input video block and high-precision hash values of the respective candidate blocks. 19. The method of claim 18, wherein the high-precision hash values are cyclic redundancy check values. 20. A video encoder including a processor and a non-transitory computer-readable medium storing instructions operative, when executed on the processor, to perform functions including:
identifying a selected set of candidate blocks for prediction of the input video block, where the identification of the selected set includes performing a hash-based search; for each of the candidate blocks in the selected set, determining a correlation between luma and chroma components of the input video block and luma and chroma components of the respective candidate blocks; selecting a predictor block based on the correlation; and encoding the input video block in the bit stream using the selected predictor block for prediction of the input video block. | In an intra-block copy video encoding method, an encoder performs a hash-based search to identify a selected set of candidate blocks for prediction of an input video block. For each of the candidate blocks in the selected set, the encoder determines a correlation between, on the one hand, luma and chroma components of the input video block and, on the other hand, luma and chroma components of the respective candidate blocks. A predictor block is selected based on the correlation and is used to encode the input video block. In different embodiments, the correlation may be the negative of the sum of absolute differences of the components, may include a Jaccard similarity measure between respective pixels, or may be based on a Hamming distance between two high-precision hash values of the input video block and the candidate block. 1. A method of generating a bit stream encoding a video including an input video block, the method comprising:
identifying a selected set of candidate blocks for prediction of the input video block, where the identification of the selected set includes performing a hash-based search of available video blocks; for each of the candidate blocks in the selected set, determining a correlation between luma and chroma components of the input video block and luma and chroma components of the respective candidate blocks; selecting a predictor block based on the correlation; and encoding the input video block in the bit stream using the selected predictor block for prediction of the input video block. 2. The method of claim 1, wherein the identification of the selected set further includes performing a spatial search of the available video blocks. 3. The method of claim 1, wherein the hash-based search includes a search of blocks having a hash value equal to a hash value of the input video block. 4. The method of claim 3, wherein the hash-based search identifies the selected set of candidate blocks based on a comparison between the luma component of the input video block and the luma component of the respective available video blocks. 5. The method of claim 3, wherein the hash-based search identifies the selected set of candidate blocks based on a sum of absolute differences between luma pixels of the input video block and corresponding luma pixels of the respective available video blocks. 6. The method of claim 1, wherein determining a correlation includes determining a sum of absolute differences between the luma and chroma pixels of the input video block and the corresponding luma and chroma pixels of the respective candidate blocks. 7. The method of claim 6, wherein the correlation is the negative of the sum of absolute differences. 8. The method of claim 1, wherein determining a correlation includes determining a Jaccard similarity measure between corresponding pixels of the input video block and of the respective candidate blocks. 9. 
The method of claim 1, wherein determining a correlation includes determining a Hamming distance between a high-precision hash value of the input video block and high-precision hash values of the respective candidate blocks. 10. The method of claim 9, wherein the high-precision hash values are cyclic redundancy check values. 11. A method of generating a bit stream encoding a video including an input video block, the method comprising:
determining a hash value for the input video block; identifying a first set of candidate blocks for prediction of the input video block, wherein identifying the first set of candidate blocks includes identifying available video blocks having respective hash values equal to the hash value of the input video block; from the candidate blocks in the first set, selecting a second set of candidate blocks based on comparison of the luma component of the input video block with the luma component of the respective candidate blocks; for each of the candidate blocks in the second set, determining a correlation between luma and chroma components of the input video block and luma and chroma components of the respective candidate blocks in the second set; selecting a predictor block based on the correlation; and encoding the input video block in the bit stream using the selected predictor block for prediction of the input video block. 12. The method of claim 11, wherein the identification of the first set of candidate blocks includes identifying available video blocks located in a predetermined spatial range. 13. The method of claim 11, wherein the comparison of the luma component of the input video block with the luma component of the respective candidate blocks includes determining, for each of the respective candidate blocks, a sum of absolute differences between the luma pixels of the input video block and the corresponding luma pixels of the respective candidate blocks. 14. The method of claim 13, wherein the second set of candidate blocks includes those of the first set of candidate blocks having the N lowest values of the sum of absolute differences, where N is a predetermined integer greater than or equal to one. 15. The method of claim 11, wherein determining a correlation includes determining a sum of absolute differences between the luma and chroma pixels of the input video block and the corresponding luma and chroma pixels of the respective candidate blocks. 16. 
The method of claim 15, wherein the correlation is the negative of the sum of absolute differences. 17. The method of claim 11, wherein determining a correlation includes determining a Jaccard similarity measure between corresponding pixels of the input video block and of the respective candidate blocks. 18. The method of claim 11, wherein determining a correlation includes determining a Hamming distance between a high-precision hash value of the input video block and high-precision hash values of the respective candidate blocks. 19. The method of claim 18, wherein the high-precision hash values are cyclic redundancy check values. 20. A video encoder including a processor and a non-transitory computer-readable medium storing instructions operative, when executed on the processor, to perform functions including:
identifying a selected set of candidate blocks for prediction of the input video block, where the identification of the selected set includes performing a hash-based search; for each of the candidate blocks in the selected set, determining a correlation between luma and chroma components of the input video block and luma and chroma components of the respective candidate blocks; selecting a predictor block based on the correlation; and encoding the input video block in the bit stream using the selected predictor block for prediction of the input video block. | 2,400 |
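Claims 11 through 16 of this record describe a concrete two-stage block-matching algorithm: filter available blocks by an equal hash value, narrow to the N candidates with the lowest luma sum of absolute differences (SAD), then pick the candidate whose combined luma-and-chroma SAD is lowest (correlation being the negative SAD). The Python sketch below illustrates that flow only; the data layout (`available` as `(hash, luma, chroma)` tuples), the function names, and the block shapes are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two pixel arrays."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def select_predictor(input_luma, input_chroma, input_hash, available, n=4):
    """Two-stage candidate selection in the style of claims 11-16.

    `available` is a hypothetical list of (hash_value, luma, chroma)
    tuples for previously reconstructed blocks.
    """
    # Stage 1 (claim 11): keep blocks whose hash equals the input hash.
    first_set = [(l, c) for h, l, c in available if h == input_hash]
    # Stage 2 (claims 13-14): keep the N candidates with lowest luma SAD.
    first_set.sort(key=lambda lc: sad(input_luma, lc[0]))
    second_set = first_set[:n]
    # Claims 15-16: correlation = negative of the combined luma+chroma
    # SAD; the predictor is the candidate with the highest correlation.
    return max(
        second_set,
        key=lambda lc: -(sad(input_luma, lc[0]) + sad(input_chroma, lc[1])),
        default=None,
    )
```

In practice an encoder would maintain a hash table keyed by block hash so stage 1 is a lookup rather than a linear scan; the list comprehension above is kept for clarity.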
7,570 | 7,570 | 15,147,736 | 2,487 | Techniques are disclosed for depth map generation in a structured light system where an optical transmitter is tilted relative to an optical receiver. The optical transmitter has a transmitter optical axis around which structured light spreads, and the optical receiver has a receiver optical axis around which a reflection of the structured light can be captured. The transmitter optical axis and the receiver optical axis intersect one another. A processing circuit compensates for the angle of tilt in the reflected pattern to generate the depth map. | 1. A method of image processing, the method comprising:
transmitting structured light, with an optical transmitter, the optical transmitter having a first angle of view relative to a transmitter optical axis; receiving, with an optical receiver, a reflection of the structured light, the optical receiver having a second angle of view relative to a receiver optical axis, wherein the optical transmitter is angled relative to the optical receiver so that the transmitter optical axis intersects the receiver optical axis, and wherein a position of the optical transmitter relative to the optical receiver is constant; and generating a depth map for one or more images based on the received reflection of the structured light. 2. The method of claim 1, wherein the structured light transmitted with the optical transmitter is the same during the entire generation of the depth map. 3. The method of claim 1, further comprising:
scaling a position of each element in the received reflection of the structured light based on an angle of tilt of the optical transmitter relative to the optical receiver and a focal length of the optical receiver. 4. The method of claim 3, wherein generating the depth map comprises generating the depth map based on the scaled position of each element in the received reflection of the structured light, each element in the structured light that corresponds to a respective element in the received reflection of the structured light, the focal length of the optical receiver, and a distance between the optical transmitter and the optical receiver. 5. The method of claim 1, wherein transmitting the structured light comprises transmitting a pattern via the structured light, wherein receiving the reflection of the structured light comprises receiving a distorted pattern via the reflection, the method further comprising:
determining whether the received distorted pattern corresponds to the transmitted pattern without compensating for an angle of tilt of the optical transmitter relative to the optical receiver. 6. The method of claim 5, further comprising:
determining a location of where the distorted pattern is received by the optical receiver, wherein generating the depth map comprises generating the depth map based on the location of where the distorted pattern is received by the optical receiver and the angle of tilt of the optical transmitter relative to the optical receiver. 7. The method of claim 1, further comprising:
receiving the generated depth map; and generating graphical data for the one or more images based on the generated depth map. 8. The method of claim 1, wherein a device includes the optical transmitter and the optical receiver, wherein one of the optical transmitter or the optical receiver is parallel with a face of the device, and the other one of the optical transmitter or the optical receiver is tilted relative to the face of the device. 9. The method of claim 1, wherein a near field field of view (FOV) generated by the optical transmitter and the optical receiver is closer to a device that includes the optical transmitter and the optical receiver as compared to if the optical transmitter is not angled relative to the optical receiver and the transmitter optical axis does not intersect the receiver optical axis. 10. A device for image processing, the device comprising:
an optical transmitter configured to transmit structured light, the optical transmitter having a first angle of view relative to a transmitter optical axis; an optical receiver configured to receive a reflection of the structured light, the receiver having a second angle of view relative to a receiver optical axis, wherein the optical transmitter is angled relative to the optical receiver so that the transmitter optical axis intersects the receiver optical axis, and wherein a position of the optical transmitter relative to the optical receiver is constant; and a processing circuit configured to generate a depth map for one or more images based on the received reflection of the structured light. 11. The device of claim 10, wherein the optical transmitter transmits the same structured light during the entire generation of the depth map. 12. The device of claim 10, wherein the processing circuit is configured to scale a position of each element in the received reflection of the structured light based on an angle of tilt of the optical transmitter relative to the optical receiver and a focal length of the optical receiver. 13. The device of claim 12, wherein to generate the depth map, the processing circuit is configured to generate the depth map based on the scaled position of each element in the received reflection of the structured light, each element in the structured light that corresponds to a respective element in the received reflection of the structured light, the focal length of the optical receiver, and a distance between the optical transmitter and the optical receiver. 14. 
The device of claim 10, wherein the optical transmitter is configured to transmit a pattern via the structured light, wherein the optical receiver is configured to receive a distorted pattern via the reflection, wherein the processing circuit is configured to determine whether the received distorted pattern corresponds to the transmitted pattern without compensating for an angle of tilt of the optical transmitter relative to the optical receiver. 15. The device of claim 14, wherein the processing circuit is configured to determine a location of where the distorted pattern is received by the optical receiver, and wherein to generate the depth map, the processing circuit is configured to generate the depth map based on the location of where the distorted pattern is received by the optical receiver and the angle of tilt of the optical transmitter relative to the optical receiver. 16. The device of claim 10, wherein the processing circuit comprises a first processing circuit, the device further comprising a second processing circuit configured to:
receive the generated depth map from the first processing circuit; and generate graphical data for the one or more images based on the generated depth map. 17. The device of claim 16, wherein the first processing circuit and the second processing circuit are the same processing circuit. 18. The device of claim 10, wherein the device comprises one of:
a wireless communication device, a laptop, a desktop, a tablet, a camera, and a video gaming console. 19. The device of claim 10, wherein one of the optical transmitter or the optical receiver is parallel with a face of the device, and the other one of the optical transmitter or the optical receiver is tilted relative to the face of the device. 20. The device of claim 10, wherein a near field field of view (FOV) generated by the optical transmitter and the optical receiver is closer to the device that includes the optical transmitter and the optical receiver as compared to if the optical transmitter is not angled relative to the optical receiver and the transmitter optical axis does not intersect the receiver optical axis. 21. A computer-readable storage medium including instructions stored thereon that when executed cause one or more processors of a device for image processing to:
cause an optical transmitter to transmit structured light, the optical transmitter having a first angle of view relative to a transmitter optical axis; and generate a depth map for one or more images based on a received reflection of the structured light, wherein the received reflection is received, with an optical receiver, the optical receiver having a second angle of view relative to a receiver optical axis, wherein the optical transmitter is angled relative to the optical receiver so that the transmitter optical axis intersects the receiver optical axis, and wherein a position of the optical transmitter relative to the optical receiver is constant. 22. The computer-readable storage medium of claim 21, wherein the structured light transmitted with the optical transmitter is the same during the entire generation of the depth map. 23. The computer-readable storage medium of claim 21, further comprising instructions that cause the one or more processors to:
scale a position of each element in the received reflection of the structured light based on an angle of tilt of the optical transmitter relative to the optical receiver and a focal length of the optical receiver. 24. A device for image processing, the device comprising:
means for transmitting structured light, the means for transmitting having a first angle of view relative to a transmitter optical axis; means for receiving a reflection of the structured light, the means for receiving having a second angle of view relative to a receiver optical axis, wherein the means for transmitting is angled relative to the means for receiving so that the transmitter optical axis intersects the receiver optical axis, and wherein a position of the means for transmitting is constant relative to the means for receiving; and means for generating a depth map for one or more images based on the received reflection of the structured light. 25. The device of claim 24, wherein the means for transmitting transmits the same structured light during the entire generation of the depth map. 26. The device of claim 24, further comprising:
means for scaling a position of each element in the received reflection of the structured light based on an angle of tilt of the means for transmitting relative to the means for receiving and a focal length of the means for receiving. 27. The device of claim 24, further comprising:
means for receiving the generated depth map; and means for generating graphical data for the one or more images based on the generated depth map. 28. The device of claim 24, wherein one of the means for transmitting or the means for receiving is parallel with a face of the device, and the other one of the means for transmitting or the means for receiving is tilted relative to the face of the device. 29. The device of claim 24, wherein a near field field of view (FOV) generated by the means for transmitting and the means for receiving is closer to the device that includes the means for transmitting and the means for receiving as compared to if the means for transmitting is not angled relative to the means for receiving and the transmitter optical axis does not intersect the receiver optical axis. | Techniques are disclosed for depth map generation in a structured light system where an optical transmitter is tilted relative to an optical receiver. The optical transmitter has a transmitter optical axis around which structured light spreads, and the optical receiver has a receiver optical axis around which a reflection of the structured light can be captured. The transmitter optical axis and the receiver optical axis intersect one another. A processing circuit compensates for the angle of tilt in the reflected pattern to generate the depth map. 1. A method of image processing, the method comprising:
transmitting structured light, with an optical transmitter, the optical transmitter having a first angle of view relative to a transmitter optical axis; receiving, with an optical receiver, a reflection of the structured light, the optical receiver having a second angle of view relative to a receiver optical axis, wherein the optical transmitter is angled relative to the optical receiver so that the transmitter optical axis intersects the receiver optical axis, and wherein a position of the optical transmitter relative to the optical receiver is constant; and generating a depth map for one or more images based on the received reflection of the structured light. 2. The method of claim 1, wherein the structured light transmitted with the optical transmitter is the same during the entire generation of the depth map. 3. The method of claim 1, further comprising:
scaling a position of each element in the received reflection of the structured light based on an angle of tilt of the optical transmitter relative to the optical receiver and a focal length of the optical receiver. 4. The method of claim 3, wherein generating the depth map comprises generating the depth map based on the scaled position of each element in the received reflection of the structured light, each element in the structured light that corresponds to a respective element in the received reflection of the structured light, the focal length of the optical receiver, and a distance between the optical transmitter and the optical receiver. 5. The method of claim 1, wherein transmitting the structured light comprises transmitting a pattern via the structured light, wherein receiving the reflection of the structured light comprises receiving a distorted pattern via the reflection, the method further comprising:
determining whether the received distorted pattern corresponds to the transmitted pattern without compensating for an angle of tilt of the optical transmitter relative to the optical receiver. 6. The method of claim 5, further comprising:
determining a location of where the distorted pattern is received by the optical receiver, wherein generating the depth map comprises generating the depth map based on the location of where the distorted pattern is received by the optical receiver and the angle of tilt of the optical transmitter relative to the optical receiver. 7. The method of claim 1, further comprising:
receiving the generated depth map; and generating graphical data for the one or more images based on the generated depth map. 8. The method of claim 1, wherein a device includes the optical transmitter and the optical receiver, wherein one of the optical transmitter or the optical receiver is parallel with a face of the device, and the other one of the optical transmitter or the optical receiver is tilted relative to the face of the device. 9. The method of claim 1, wherein a near field field of view (FOV) generated by the optical transmitter and the optical receiver is closer to a device that includes the optical transmitter and the optical receiver as compared to if the optical transmitter is not angled relative to the optical receiver and the transmitter optical axis does not intersect the receiver optical axis. 10. A device for image processing, the device comprising:
an optical transmitter configured to transmit structured light, the optical transmitter having a first angle of view relative to a transmitter optical axis; an optical receiver configured to receive a reflection of the structured light, the receiver having a second angle of view relative to a receiver optical axis, wherein the optical transmitter is angled relative to the optical receiver so that the transmitter optical axis intersects the receiver optical axis, and wherein a position of the optical transmitter relative to the optical receiver is constant; and a processing circuit configured to generate a depth map for one or more images based on the received reflection of the structured light. 11. The device of claim 10, wherein the optical transmitter transmits the same structured light during the entire generation of the depth map. 12. The device of claim 10, wherein the processing circuit is configured to scale a position of each element in the received reflection of the structured light based on an angle of tilt of the optical transmitter relative to the optical receiver and a focal length of the optical receiver. 13. The device of claim 12, wherein to generate the depth map, the processing circuit is configured to generate the depth map based on the scaled position of each element in the received reflection of the structured light, each element in the structured light that corresponds to a respective element in the received reflection of the structured light, the focal length of the optical receiver, and a distance between the optical transmitter and the optical receiver. 14. 
The device of claim 10, wherein the optical transmitter is configured to transmit a pattern via the structured light, wherein the optical receiver is configured to receive a distorted pattern via the reflection, wherein the processing circuit is configured to determine whether the received distorted pattern corresponds to the transmitted pattern without compensating for an angle of tilt of the optical transmitter relative to the optical receiver. 15. The device of claim 14, wherein the processing circuit is configured to determine a location of where the distorted pattern is received by the optical receiver, and wherein to generate the depth map, the processing circuit is configured to generate the depth map based on the location of where the distorted pattern is received by the optical receiver and the angle of tilt of the optical transmitter relative to the optical receiver. 16. The device of claim 10, wherein the processing circuit comprises a first processing circuit, the device further comprising a second processing circuit configured to:
receive the generated depth map from the first processing circuit; and generate graphical data for the one or more images based on the generated depth map. 17. The device of claim 16, wherein the first processing circuit and the second processing circuit are the same processing circuit. 18. The device of claim 10, wherein the device comprises one of:
a wireless communication device, a laptop, a desktop, a tablet, a camera, and a video gaming console. 19. The device of claim 10, wherein one of the optical transmitter or the optical receiver is parallel with a face of the device, and the other one of the optical transmitter or the optical receiver is tilted relative to the face of the device. 20. The device of claim 10, wherein a near field field of view (FOV) generated by the optical transmitter and the optical receiver is closer to the device that includes the optical transmitter and the optical receiver as compared to if the optical transmitter is not angled relative to the optical receiver and the transmitter optical axis does not intersect the receiver optical axis. 21. A computer-readable storage medium including instructions stored thereon that when executed cause one or more processors of a device for image processing to:
cause an optical transmitter to transmit structured light, the optical transmitter having a first angle of view relative to a transmitter optical axis; and generate a depth map for one or more images based on a received reflection of the structured light, wherein the received reflection is received, with an optical receiver, the optical receiver having a second angle of view relative to a receiver optical axis, wherein the optical transmitter is angled relative to the optical receiver so that the transmitter optical axis intersects the receiver optical axis, and wherein a position of the optical transmitter relative to the optical receiver is constant. 22. The computer-readable storage medium of claim 21, wherein the structured light transmitted with the optical transmitter is the same during the entire generation of the depth map. 23. The computer-readable storage medium of claim 21, further comprising instructions that cause the one or more processors to:
scale a position of each element in the received reflection of the structured light based on an angle of tilt of the optical transmitter relative to the optical receiver and a focal length of the optical receiver. 24. A device for image processing, the device comprising:
means for transmitting structured light, the means for transmitting having a first angle of view relative to a transmitter optical axis; means for receiving a reflection of the structured light, the means for receiving having a second angle of view relative to a receiver optical axis, wherein the means for transmitting is angled relative to the means for receiving so that the transmitter optical axis intersects the receiver optical axis, and wherein a position of the means for transmitting is constant relative to the means for receiving; and means for generating a depth map for one or more images based on the received reflection of the structured light. 25. The device of claim 24, wherein the means for transmitting transmits the same structured light during the entire generation of the depth map. 26. The device of claim 24, further comprising:
means for scaling a position of each element in the received reflection of the structured light based on an angle of tilt of the means for transmitting relative to the means for receiving and a focal length of the means for receiving. 27. The device of claim 24, further comprising:
means for receiving the generated depth map; and means for generating graphical data for the one or more images based on the generated depth map. 28. The device of claim 24, wherein one of the means for transmitting or the means for receiving is parallel with a face of the device, and the other one of the means for transmitting or the means for receiving is tilted relative to the face of the device. 29. The device of claim 24, wherein a near field field of view (FOV) generated by the means for transmitting and the means for receiving is closer to the device that includes the means for transmitting and the means for receiving as compared to if the means for transmitting is not angled relative to the means for receiving and the transmitter optical axis does not intersect the receiver optical axis. | 2,400 |
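Claims 3 and 4 (and their device counterparts, claims 12 and 13) recite scaling each reflected element's position based on the transmitter's angle of tilt and the receiver focal length, then computing depth from the scaled positions, the focal length, and the transmitter-receiver baseline. The sketch below illustrates one plausible reading; the specific projective correction is an assumption, since the claims do not give the scaling formula, and only the final step is the standard structured-light triangulation depth = f·b / disparity.

```python
import math

def compensated_depth(x_tx, x_rx, tilt_rad, focal_len, baseline):
    """Illustrative tilt-compensated triangulation (assumed formula).

    x_tx      -- element position implied by the transmitted pattern
    x_rx      -- observed position of that element on the receiver
    tilt_rad  -- angle of tilt of the transmitter relative to the receiver
    focal_len -- receiver focal length (same units as x coordinates)
    baseline  -- distance between transmitter and receiver
    """
    # Assumed correction: map the observed coordinate back onto an
    # untilted image plane by rotating its viewing ray by the tilt angle:
    # x' = f * tan(atan(x / f) + tilt).
    x_rx_scaled = focal_len * math.tan(math.atan2(x_rx, focal_len) + tilt_rad)
    # Standard triangulation: depth = focal length * baseline / disparity.
    disparity = x_tx - x_rx_scaled
    return focal_len * baseline / disparity
```

With zero tilt the correction is the identity and the expression reduces to ordinary stereo triangulation, which is a useful sanity check for any compensation scheme of this kind.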
7,571 | 7,571 | 13,826,090 | 2,422 | Methods, systems, computer readable media, and apparatuses are disclosed for providing event messages to a user. The event messages may include video data or a link to video of the event. In some variations, a user or content provider may define criteria for the event messages that are to be displayed to the user. The event messages may be stored so that a user may be able to browse through the stored event messages and decide when to view the video of the event. Upon a user's selection of the event message, the video of the event may be displayed to the user on the same display device or another display device. | 1. A method, comprising:
registering a first device for event messaging; registering a second device for event messaging, wherein both the first device and the second device are associated with a user; receiving information identifying the user's request to be notified of an occurrence of a predetermined event in content; determining that the predetermined event has occurred in the content; generating an event message that indicates the occurrence of the predetermined event in the content, wherein the event message includes an option to initiate a portion of the content during which the predetermined event occurred; transmitting the event message to the first device; transmitting the event message to the second device; receiving a request for the portion of the content; and transmitting, to at least one of the first device or the second device, the portion of the content. 2. The method of claim 1, wherein determining that the predetermined event has occurred in the content includes determining that event messaging criteria matches data describing one or more content segments of the content. 3. The method of claim 2, wherein the event messaging criteria is for a fantasy team of the user, and the portion of the content includes a composite video that represents events that occurred in one or more sporting games that contributed to a score of the fantasy team, and wherein the method further comprises:
creating the composite video from video segments of the one or more sporting games. 4. The method of claim 1, wherein transmitting the event message to the first device includes transmitting the event message via an e-mail message, a short messaging service (SMS) message, a message conforming to a protocol suitable for instant messaging, or a message posted to a social media account of the user. 5. The method of claim 1, wherein the request for the portion of the content indicates which device is to receive the portion of the content, and wherein the method further comprises:
transmitting the portion of the content only to the first device or the second device in accordance with the request for the portion of the content. 6. A method, comprising:
registering a device for event messaging; receiving information identifying a user's request to be notified of an occurrence of a predetermined event in content; determining that the predetermined event has occurred in the content; generating an event message that indicates the occurrence of the predetermined event in the content, wherein the event message includes an option to initiate a portion of the content during which the predetermined event occurred; transmitting the event message to the device; receiving a request for the portion of the content; and transmitting the portion of the content to the device. 7. The method of claim 6, wherein determining that the predetermined event has occurred in the content includes determining that event messaging criteria matches data describing one or more content segments of the content. 8. The method of claim 7, wherein the event messaging criteria is for a fantasy team of the user, and the portion of the content includes a composite video that represents events that occurred in one or more sporting events that contributed to a score of the fantasy team, and wherein the method further comprises:
creating the composite video from video segments of the one or more sporting events. 9. The method of claim 6, wherein transmitting the event message to the device includes transmitting the event message to both the device and another device. 10. The method of claim 6, wherein transmitting the event message to the device includes transmitting the event message via an e-mail message, a short messaging service (SMS) message, a message conforming to a protocol suitable for instant messaging, or a message posted to a social media account of the user. 11. The method of claim 6, further comprising:
adding the event message to an event message log; receiving a request to view the event message log; receiving a selection of a logged event message from the event message log, wherein the logged event message corresponds to a content segment; and transmitting the content segment to the device. 12. A method, comprising:
receiving information identifying a user's request to be notified of an occurrence of a predetermined event in content; determining that the predetermined event has occurred in the content; and generating an event message to alert the user of the occurrence of the predetermined event in the content, wherein the event message includes an option to initiate a portion of the content during which the predetermined event occurred. 13. The method of claim 12, wherein determining that the predetermined event has occurred in the content includes determining that event messaging criteria matches data describing one or more content segments of the content. 14. The method of claim 13, further comprising:
transmitting the event message to one or more devices registered by the user to receive event messages. 15. The method of claim 14, wherein transmitting the event message to the one or more devices registered to the user includes transmitting the event message via an e-mail message, a short messaging service (SMS) message, a message conforming to a protocol suitable for instant messaging, or a message posted to a social media account of the user. 16. The method of claim 14, wherein transmitting the event message to the one or more devices registered to the user includes transmitting the event message to a first user device and a second user device. 17. The method of claim 16, wherein the first user device is in communication with a television being watched by the user and the second user device is a tablet computing device, mobile computing device, or personal computer device that is being used by the user. 18. The method of claim 16, wherein the one or more devices registered to the user includes a first device and a second device, wherein the portion of the content includes a video segment, and the method further comprises:
receiving user input that represents the user's selection of the event message; and responsive to receiving the user input, transmitting the video segment to the first device or the second device. 19. The method of claim 14, wherein the event messaging criteria is for a fantasy team of the user, and the portion of the content includes a composite video that represents events that occurred in one or more sporting events that contributed to a score of the fantasy team, and wherein the method further comprises:
creating the composite video from video segments of the one or more sporting events. 20. The method of claim 14, further comprising:
adding the event message to an event message log; receiving a request to view the event message log; receiving a selection of a logged event message from the event message log, wherein the logged event message corresponds to a content segment; and transmitting the content segment to the one or more devices registered by the user. | Methods, systems, computer readable media, and apparatuses are disclosed for providing event messages to a user. The event messages may include video data or a link to video of the event. In some variations, a user or content provider may define criteria for the event messages that are to be displayed to the user. The event messages may be stored so that a user may be able to browse through the stored event messages and decide when to view the video of the event. Upon a user's selection of the event message, the video of the event may be displayed to the user on the same display device or another display device.1. A method, comprising:
registering a first device for event messaging; registering a second device for event messages, wherein both the first device and the second device are associated with a user; receiving information identifying the user's request to be notified of an occurrence of a predetermined event in content; determining that the predetermined event has occurred in the content; generating an event message that indicates the occurrence of the predetermined event in the content, wherein the event message includes an option to initiate a portion of the content during which the predetermined event occurred; transmitting the event message to the first device; transmitting the event message to the second device; receiving a request for the portion of the content; and transmitting, to at least one of the first device or the second device, the portion of the content. 2. The method of claim 1, wherein determining that the predetermined event has occurred in the content includes determining that event messaging criteria matches data describing one or more content segments of the content. 3. The method of claim 2, wherein the event messaging criteria is for a fantasy team of the user, and the portion of the content includes a composite video that represents events that occurred in one or more sporting games that contributed to a score of the fantasy team, and wherein the method further comprises:
creating the composite video from video segments of the one or more sporting games. 4. The method of claim 1, wherein transmitting the event message to the first device includes transmitting the event message via an e-mail message, a short messaging service (SMS) message, a message conforming to a protocol suitable for instant messaging, or a message posted to a social media account of the user. 5. The method of claim 1, wherein the request for the portion of the content indicates which device is to receive the portion of the content, and wherein the method further comprises:
transmitting the portion of the content only to the first device or the second device in accordance with the request for the portion of the content. 6. A method, comprising:
registering a device for event messaging; receiving information identifying a user's request to be notified of an occurrence of a predetermined event in content; determining that the predetermined event has occurred in the content; generating an event message that indicates the occurrence of the predetermined event in the content, wherein the event message includes an option to initiate a portion of the content during which the predetermined event occurred; transmitting the event message to the device; receiving a request for the portion of the content; and transmitting the portion of the content to the device. 7. The method of claim 6, wherein determining that the predetermined event has occurred in the content includes determining that event messaging criteria matches data describing one or more content segments of the content. 8. The method of claim 7, wherein the event messaging criteria is for a fantasy team of the user, and the portion of the content includes a composite video that represents events that occurred in one or more sporting events that contributed to a score of the fantasy team, and wherein the method further comprises:
creating the composite video from video segments of the one or more sporting events. 9. The method of claim 6, wherein transmitting the event message to the device includes transmitting the event message to both the device and another device. 10. The method of claim 6, wherein transmitting the event message to the device includes transmitting the event message via an e-mail message, a short messaging service (SMS) message, a message conforming to a protocol suitable for instant messaging, or a message posted to a social media account of the user. 11. The method of claim 6, further comprising:
adding the event message to an event message log; receiving a request to view the event message log; receiving a selection of a logged event message from the event message log, wherein the logged event message corresponds to a content segment; and transmitting the content segment to the device. 12. A method, comprising:
receiving information identifying a user's request to be notified of an occurrence of a predetermined event in content; determining that the predetermined event has occurred in the content; and generating an event message to alert the user of the occurrence of the predetermined event in the content, wherein the event message includes an option to initiate a portion of the content during which the predetermined event occurred. 13. The method of claim 12, wherein determining that the predetermined event has occurred in the content includes determining that event messaging criteria matches data describing one or more content segments of the content. 14. The method of claim 13, further comprising:
transmitting the event message to one or more devices registered by the user to receive event messages. 15. The method of claim 14, wherein transmitting the event message to the one or more devices registered to the user includes transmitting the event message via an e-mail message, a short messaging service (SMS) message, a message conforming to a protocol suitable for instant messaging, or a message posted to a social media account of the user. 16. The method of claim 14, wherein transmitting the event message to the one or more devices registered to the user includes transmitting the event message to a first user device and a second user device. 17. The method of claim 16, wherein the first user device is in communication with a television being watched by the user and the second user device is a tablet computing device, mobile computing device, or personal computer device that is being used by the user. 18. The method of claim 16, wherein the one or more devices registered to the user includes a first device and a second device, wherein the portion of the content includes a video segment, and the method further comprises:
receiving user input that represents the user's selection of the event message; and responsive to receiving the user input, transmitting the video segment to the first device or the second device. 19. The method of claim 14, wherein the event messaging criteria is for a fantasy team of the user, and the portion of the content includes a composite video that represents events that occurred in one or more sporting events that contributed to a score of the fantasy team, and wherein the method further comprises:
creating the composite video from video segments of the one or more sporting events. 20. The method of claim 14, further comprising:
adding the event message to an event message log; receiving a request to view the event message log; receiving a selection of a logged event message from the event message log, wherein the logged event message corresponds to a content segment; and transmitting the content segment to the one or more devices registered by the user. | 2,400 |
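The event-messaging claims above (claims 2, 7, and 13 of application 13,826,090) hinge on one step: determining that event messaging criteria matches data describing one or more content segments, then generating a message that carries an option to initiate playback of that portion of the content. A minimal sketch of that step, assuming a tag-based segment metadata model; all names here (`Segment`, `EventCriteria`, `match_segments`, the `content://` link scheme) are illustrative, not taken from the patent:

```python
# Sketch of the claimed event-detection and message-generation steps.
# Assumption: content segments are described by metadata tags, and a user's
# event criteria is a set of tags that must all be present in a segment.
from dataclasses import dataclass, field

@dataclass
class Segment:
    content_id: str
    start_s: int                                # segment start, in seconds
    end_s: int
    tags: set = field(default_factory=set)      # e.g. {"touchdown", "player:12"}

@dataclass
class EventCriteria:
    user_id: str
    required_tags: set                          # event occurs when all tags match

def match_segments(criteria: EventCriteria, segments: list) -> list:
    """Return the segments whose metadata satisfies the user's criteria
    (the 'determining that the predetermined event has occurred' step)."""
    return [s for s in segments if criteria.required_tags <= s.tags]

def make_event_message(criteria: EventCriteria, segment: Segment) -> dict:
    """Build an event message that includes an option (here, a playback link)
    to initiate the portion of content during which the event occurred."""
    return {
        "user": criteria.user_id,
        "text": f"Event in {segment.content_id} at {segment.start_s}s",
        "play_option": f"content://{segment.content_id}?t={segment.start_s}",
    }
```

The same matching loop would also cover the fantasy-team claims (3, 8, 19): each matched segment contributing to the team's score becomes one input clip for the composite video.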
7,572 | 7,572 | 14,869,325 | 2,419 | A user equipment and a method performed by the user equipment that has a transceiver configured to enable the user equipment to establish a connection with a first network and a second network. The method including establishing a connection to each of the first network and the second network, tuning away from the first network to the second network, tuning back to the first network from the second network and determining whether to perform a network operation with the first network after tuning back to the first network. | 1. A method comprising:
at a user equipment having a transceiver configured to enable the user equipment to establish a connection with a first network and a second network:
establishing a connection to each of the first network and the second network;
tuning away from the first network to the second network;
tuning back to the first network from the second network; and
determining whether to perform a network operation with the first network after tuning back to the first network. 2. The method of claim 1, wherein the network operation includes one of transmitting a scheduling request to the first network or initiating a random access channel (RACH) procedure with the first network. 3. The method of claim 1, wherein the user equipment omits performing the network operation after tuning back to the first network. 4. The method of claim 1, wherein the determining whether to perform the network operation includes:
determining an amount of data in an uplink buffer of the user equipment. 5. The method of claim 4, wherein the network operation is performed when the amount of data in the uplink buffer is greater than a threshold. 6. The method of claim 5, wherein the threshold is based on one of an amount of data required to maintain the connection to the first network, an identification of the first network, a type of the user equipment, a level of service for the user equipment or a connection quality parameter of the connection to the first network. 7. The method of claim 1, wherein the determining whether to perform the network operation includes:
determining a duration between the tuning away and the tuning back. 8. The method of claim 7, wherein the determining whether to perform the network operation includes:
determining an amount of data in an uplink buffer of the user equipment,
wherein the network operation is performed when the duration exceeds a threshold, and
wherein the threshold is based on the amount of data in the buffer. 9. The method of claim 1, wherein the determining whether to perform the network operation includes:
estimating a duration of an inactivity timer of the first network; and performing the network operation when the tuning away from the first network occurs within a time threshold of an estimated expiration of the inactivity timer. 10. The method of claim 9, wherein the inactivity timer is a radio resource control (RRC) timer of the first network. 11. The method of claim 9, wherein the estimating the duration of the inactivity timer includes one of:
determining an action was performed by the first network for the user equipment based on the inactivity timer; or determining an action was performed by the first network for other user equipment connected to the first network based on the inactivity timer. 12. The method of claim 1, wherein the first network is an LTE network and the second network is a CDMA network. 13. A user equipment, comprising:
a transceiver configured to enable the user equipment to establish a connection with a first network and a second network; and a processor configured to:
instruct the transceiver to establish a connection to each of the first network and the second network;
instruct the transceiver to tune away from the first network to the second network;
instruct the transceiver to tune back to the first network from the second network; and
determine whether to perform a network operation with the first network after tuning back to the first network. 14. The user equipment of claim 13, wherein the network operation includes one of transmitting a scheduling request to the first network or initiating a random access channel (RACH) procedure with the first network. 15. The user equipment of claim 13, further comprising:
an uplink buffer, wherein the processor determines whether to perform the network operation by determining an amount of data in the uplink buffer. 16. The user equipment of claim 13, wherein the processor determines whether to perform the network operation by determining a duration between the tuning away and the tuning back. 17. The user equipment of claim 13, wherein the processor determines whether to perform the network operation by:
estimating a duration of an inactivity timer of the first network; and performing the network operation when the tuning away from the first network occurs within a time threshold of an estimated expiration of the inactivity timer. 18. The user equipment of claim 17, further comprising:
a timer, wherein the timer measures a time since a last uplink transmission by the user equipment, the time being compared to the time threshold. 19. The user equipment of claim 17, further comprising:
a memory storing one of:
an action performed by the first network for the user equipment based on the inactivity timer; or
an action performed by the first network for other user equipment connected to the first network based on the inactivity timer. 20. A nonvolatile computer-readable medium comprising a set of instructions that, when executed, cause a processor to perform operations, comprising:
establish a connection by a user equipment to each of a first network and a second network; tune away the user equipment from the first network to the second network; tune back the user equipment to the first network from the second network; and determine whether to perform a network operation with the first network after tuning back to the first network,
wherein the network operation includes one of transmitting a scheduling request to the first network or initiating a random access channel (RACH) procedure with the first network. | A user equipment and a method performed by the user equipment that has a transceiver configured to enable the user equipment to establish a connection with a first network and a second network. The method including establishing a connection to each of the first network and the second network, tuning away from the first network to the second network, tuning back to the first network from the second network and determining whether to perform a network operation with the first network after tuning back to the first network.1. A method comprising:
at a user equipment having a transceiver configured to enable the user equipment to establish a connection with a first network and a second network:
establishing a connection to each of the first network and the second network;
tuning away from the first network to the second network;
tuning back to the first network from the second network; and
determining whether to perform a network operation with the first network after tuning back to the first network. 2. The method of claim 1, wherein the network operation includes one of transmitting a scheduling request to the first network or initiating a random access channel (RACH) procedure with the first network. 3. The method of claim 1, wherein the user equipment omits performing the network operation after tuning back to the first network. 4. The method of claim 1, wherein the determining whether to perform the network operation includes:
determining an amount of data in an uplink buffer of the user equipment. 5. The method of claim 4, wherein the network operation is performed when the amount of data in the uplink buffer is greater than a threshold. 6. The method of claim 5, wherein the threshold is based on one of an amount of data required to maintain the connection to the first network, an identification of the first network, a type of the user equipment, a level of service for the user equipment or a connection quality parameter of the connection to the first network. 7. The method of claim 1, wherein the determining whether to perform the network operation includes:
determining a duration between the tuning away and the tuning back. 8. The method of claim 7, wherein the determining whether to perform the network operation includes:
determining an amount of data in an uplink buffer of the user equipment,
wherein the network operation is performed when the duration exceeds a threshold, and
wherein the threshold is based on the amount of data in the buffer. 9. The method of claim 1, wherein the determining whether to perform the network operation includes:
estimating a duration of an inactivity timer of the first network; and performing the network operation when the tuning away from the first network occurs within a time threshold of an estimated expiration of the inactivity timer. 10. The method of claim 9, wherein the inactivity timer is a radio resource control (RRC) timer of the first network. 11. The method of claim 9, wherein the estimating the duration of the inactivity timer includes one of:
determining an action was performed by the first network for the user equipment based on the inactivity timer; or determining an action was performed by the first network for other user equipment connected to the first network based on the inactivity timer. 12. The method of claim 1, wherein the first network is an LTE network and the second network is a CDMA network. 13. A user equipment, comprising:
a transceiver configured to enable the user equipment to establish a connection with a first network and a second network; and a processor configured to:
instruct the transceiver to establish a connection to each of the first network and the second network;
instruct the transceiver to tune away from the first network to the second network;
instruct the transceiver to tune back to the first network from the second network; and
determine whether to perform a network operation with the first network after tuning back to the first network. 14. The user equipment of claim 13, wherein the network operation includes one of transmitting a scheduling request to the first network or initiating a random access channel (RACH) procedure with the first network. 15. The user equipment of claim 13, further comprising:
an uplink buffer, wherein the processor determines whether to perform the network operation by determining an amount of data in the uplink buffer. 16. The user equipment of claim 13, wherein the processor determines whether to perform the network operation by determining a duration between the tuning away and the tuning back. 17. The user equipment of claim 13, wherein the processor determines whether to perform the network operation by:
estimating a duration of an inactivity timer of the first network; and performing the network operation when the tuning away from the first network occurs within a time threshold of an estimated expiration of the inactivity timer. 18. The user equipment of claim 17, further comprising:
a timer, wherein the timer measures a time since a last uplink transmission by the user equipment, the time being compared to the time threshold. 19. The user equipment of claim 17, further comprising:
a memory storing one of:
an action performed by the first network for the user equipment based on the inactivity timer; or
an action performed by the first network for other user equipment connected to the first network based on the inactivity timer. 20. A nonvolatile computer-readable medium comprising a set of instructions that, when executed, cause a processor to perform operations, comprising:
establish a connection by a user equipment to each of a first network and a second network; tune away the user equipment from the first network to the second network; tune back the user equipment to the first network from the second network; and determine whether to perform a network operation with the first network after tuning back to the first network,
wherein the network operation includes one of transmitting a scheduling request to the first network or initiating a random access channel (RACH) procedure with the first network. | 2,400 |
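Claims 4 through 9 of application 14,869,325 describe three tests the UE may apply after tuning back to the first network before performing a scheduling request or RACH procedure: uplink-buffer occupancy against a threshold (claims 4-6), tune-away duration against a threshold scaled by buffered data (claims 7-8), and proximity of the tune-away to the estimated expiry of the network's inactivity timer (claim 9). A sketch combining the three tests; the threshold values and function name are assumptions for illustration, not values from the patent:

```python
# Illustrative tune-back decision logic. Thresholds are placeholders.
def should_perform_network_operation(
    uplink_buffer_bytes: int,
    tune_away_duration_s: float,
    time_since_last_ul_s: float,
    estimated_inactivity_timer_s: float,
    buffer_threshold_bytes: int = 1024,
    timer_margin_s: float = 2.0,
) -> bool:
    # Claim 5: perform the operation when buffered uplink data exceeds a threshold.
    if uplink_buffer_bytes > buffer_threshold_bytes:
        return True
    # Claim 8: perform it when the tune-away gap exceeds a duration threshold
    # that depends on the amount of buffered data (a simple two-level rule here).
    duration_threshold_s = 10.0 if uplink_buffer_bytes == 0 else 5.0
    if tune_away_duration_s > duration_threshold_s:
        return True
    # Claim 9: perform it when the tune-away occurred within a margin of the
    # estimated expiry of the network's inactivity (e.g. RRC) timer.
    if time_since_last_ul_s >= estimated_inactivity_timer_s - timer_margin_s:
        return True
    # Claim 3: otherwise the UE omits the operation after tuning back.
    return False
```

Skipping the operation when none of the tests fire is the point of the claims: it avoids an unnecessary scheduling request or RACH exchange every time the transceiver briefly tunes away to the second network.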
7,573 | 7,573 | 14,233,170 | 2,413 | A technique, including initiating an access procedure by making from a communication device an uplink transmission on one component carrier of a plurality of component carriers associated with an access node; thereafter receiving at said communication device group information specifying a group of said plurality of component carriers sharing uplink transmission timing information as a group to which said one component carrier belongs; and making from said communication device a further uplink transmission on said one component carrier using uplink transmission timing information for said group specified in said group information. | 1. A method, comprising: initiating an access procedure by making from a communication device an uplink transmission on one component carrier of a plurality of component carriers associated with an access node; thereafter receiving at said communication device group information specifying a group of said plurality of component carriers sharing uplink transmission timing information as a group to which said one component carrier belongs; and making from said communication device a further uplink transmission on said one component carrier using uplink transmission timing information for said group specified in said group information. 2. A method according to claim 1, wherein said access procedure is a Random Access Procedure. 3. A method according to claim 1, wherein said uplink transmission comprises an access request message, and wherein the method comprises receiving said group information in a response to said access request message. 4. A method according to claim 3, wherein said response also specifies uplink timing information for said group specified in said group information. 5. A method according to claim 3, wherein said access request message is a Random Access Preamble Message, and said response is a Random Access Response Message. 6. 
A method according to claim 1, comprising initiating said access procedure in response to an order from said access node. 7. A method according to claim 1, comprising configuring said uplink transmission on the basis of configuration information detected from one or more transmissions from said access node before initiating said access procedure. 8. A method according to claim 7, wherein said configuration information does not specify a group of said plurality of component carriers sharing uplink transmission timing information to which said one component carrier belongs. 9. A method according to claim 7, wherein said configuration information specifies a group of said plurality of component carriers sharing common uplink transmission timing information as a group to which said one component carrier belongs; and wherein the method comprises, in the event that the group specified in said configuration information is different to the group specified in said group information, making said further uplink transmission on said one component carrier preferentially using the uplink transmission timing information for said group specified in said group information. 10. A method according to claim 7, wherein said configuration information is received in a radio resource control reconfiguration message. 11. A method according to claim 1, comprising receiving said group information in a radio resource control reconfiguration message. 12. A method according to claim 1, wherein the uplink transmission timing information is timing advance information. 13. A method according to claim 1, wherein said one component carrier is a secondary cell for the communication device, and the plurality of component carriers associated with said access node include at least one other component carrier configured for use by the communication device as a primary cell. 14. 
A method, comprising: receiving at an access node on one component carrier of a plurality of component carriers associated with said access node an uplink transmission from a communication device initiating an access procedure; at least partly on the basis of a measurement of a parameter of said uplink transmission, determining which group of said plurality of component carriers sharing uplink transmission timing information is to include said one component carrier on which said uplink transmission was received; and transmitting the result of said determination from said access node to said communication device. 15. (canceled) 16. (canceled) 17. (canceled) 18. (canceled) 19. (canceled) 20. (canceled) 21. (canceled) 22. (canceled) 23. (canceled) 24. (canceled) 25. (canceled) 26. (canceled) 27. An apparatus comprising: a processor and memory including computer program code, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: initiate an access procedure by making from a communication device an uplink transmission on one component carrier of a plurality of component carriers associated with an access node; thereafter receive at said communication device group information specifying a group of said plurality of component carriers sharing uplink transmission timing information as a group to which said one component carrier belongs; and make from said communication device a further uplink transmission on said one component carrier using uplink transmission timing information for said group specified in said group information. 28. (canceled) 29. (canceled) 30. (canceled) 31. (canceled) 32. (canceled) 33. (canceled) 34. (canceled) 35. (canceled) 36. (canceled) 37. (canceled) 38. (canceled) 39. (canceled) 40. 
An apparatus comprising: a processor and memory including computer program code, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: receive at an access node on one component carrier of a plurality of component carriers associated with said access node an uplink transmission from a communication device initiating an access procedure; at least partly on the basis of a measurement of a parameter of said uplink transmission, determine which group of said plurality of component carriers sharing uplink transmission timing information is to include said one component carrier on which said uplink transmission was received; and transmit the result of said determination from said access node to said communication device. 41. (canceled) 42. (canceled) 43. (canceled) 44. (canceled) 45. (canceled) 46. (canceled) 47. (canceled) 48. (canceled) 49. (canceled) 50. (canceled) 51. (canceled) 52. (canceled) 53. (canceled) 54. (canceled) 55. (canceled) 56. (canceled) 57. (canceled) 58. A computer program product comprising program code means which when loaded into a computer controls the computer to: initiate an access procedure by making from a communication device an uplink transmission on one component carrier of a plurality of component carriers associated with an access node; thereafter receive at said communication device group information specifying a group of said plurality of component carriers sharing uplink transmission timing information as a group to which said one component carrier belongs; and make from said communication device a further uplink transmission on said one component carrier using uplink transmission timing information for said group specified in said group information. 59. 
A computer program product comprising program code means which when loaded into a computer controls the computer to: receive at an access node on one component carrier of a plurality of component carriers associated with said access node an uplink transmission from a communication device initiating an access procedure; at least partly on the basis of a measurement of a parameter of said uplink transmission, determine which group of said plurality of component carriers sharing uplink transmission timing information is to include said one component carrier on which said uplink transmission was received; and transmit the result of said determination from said access node to said communication device. | A technique, including initiating an access procedure by making from a communication device an uplink transmission on one component carrier of a plurality of component carriers associated with an access node; thereafter receiving at said communication device group information specifying a group of said plurality of component carriers sharing uplink transmission timing information as a group to which said one component carrier belongs; and making from said communication device a further uplink transmission on said one component carrier using uplink transmission timing information for said group specified in said group information.1. A method, comprising: initiating an access procedure by making from a communication device an uplink transmission on one component carrier of a plurality of component carriers associated with an access node; thereafter receiving at said communication device group information specifying a group of said plurality of component carriers sharing uplink transmission timing information as a group to which said one component carrier belongs; and making from said communication device a further uplink transmission on said one component carrier using uplink transmission timing information for said group specified in said group information. 2. 
A method according to claim 1, wherein said access procedure is a Random Access Procedure. 3. A method according to claim 1, wherein said uplink transmission comprises an access request message, and wherein the method comprises receiving said group information in a response to said access request message. 4. A method according to claim 3, wherein said response also specifies uplink timing information for said group specified in said group information. 5. A method according to claim 3, wherein said access request message is a Random Access Preamble Message, and said response is a Random Access Response Message. 6. A method according to claim 1, comprising initiating said access procedure in response to an order from said access node. 7. A method according to claim 1, comprising configuring said uplink transmission on the basis of configuration information detected from one or more transmissions from said access node before initiating said access procedure. 8. A method according to claim 7, wherein said configuration information does not specify a group of said plurality of component carriers sharing uplink transmission timing information to which said one component carrier belongs. 9. A method according to claim 7, wherein said configuration information specifies a group of said plurality of component carriers sharing common uplink transmission timing information as a group to which said one component carrier belongs; and wherein the method comprises, in the event that the group specified in said configuration information is different to the group specified in said group information, making said further uplink transmission on said one component carrier preferentially using the uplink transmission timing information for said group specified in said group information. 10. A method according to claim 7, wherein said configuration information is received in a radio resource control reconfiguration message. 11. 
A method according to claim 1, comprising receiving said group information in a radio resource control reconfiguration message. 12. A method according to claim 1, wherein the uplink transmission timing information is timing advance information. 13. A method according to claim 1, wherein said one component carrier is a secondary cell for the communication device, and the plurality of component carriers associated with said access node include at least one other component carrier configured for use by the communication device as a primary cell. 14. A method, comprising: receiving at an access node on one component carrier of a plurality of component carriers associated with said access node an uplink transmission from a communication device initiating an access procedure; at least partly on the basis of a measurement of a parameter of said uplink transmission, determining which group of said plurality of component carriers sharing uplink transmission timing information is to include said one component carrier on which said uplink transmission was received; and transmitting the result of said determination from said access node to said communication device. 15. (canceled) 16. (canceled) 17. (canceled) 18. (canceled) 19. (canceled) 20. (canceled) 21. (canceled) 22. (canceled) 23. (canceled) 24. (canceled) 25. (canceled) 26. (canceled) 27. 
An apparatus comprising: a processor and memory including computer program code, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: initiate an access procedure by making from a communication device an uplink transmission on one component carrier of a plurality of component carriers associated with an access node; thereafter receive at said communication device group information specifying a group of said plurality of component carriers sharing uplink transmission timing information as a group to which said one component carrier belongs; and make from said communication device a further uplink transmission on said one component carrier using uplink transmission timing information for said group specified in said group information. 28. (canceled) 29. (canceled) 30. (canceled) 31. (canceled) 32. (canceled) 33. (canceled) 34. (canceled) 35. (canceled) 36. (canceled) 37. (canceled) 38. (canceled) 39. (canceled) 40. An apparatus comprising: a processor and memory including computer program code, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: receive at an access node on one component carrier of a plurality of component carriers associated with said access node an uplink transmission from a communication device initiating an access procedure; at least partly on the basis of a measurement of a parameter of said uplink transmission, determine which group of said plurality of component carriers sharing uplink transmission timing information is to include said one component carrier on which said uplink transmission was received; and transmit the result of said determination from said access node to said communication device. 41. (canceled) 42. (canceled) 43. (canceled) 44. (canceled) 45. (canceled) 46. (canceled) 47. (canceled) 48. (canceled) 49. (canceled) 50. (canceled) 51. (canceled) 52. (canceled) 53. (canceled) 54. (canceled) 55. (canceled) 56. 
(canceled) 57. (canceled) 58. A computer program product comprising program code means which when loaded into a computer controls the computer to: initiate an access procedure by making from a communication device an uplink transmission on one component carrier of a plurality of component carriers associated with an access node; thereafter receive at said communication device group information specifying a group of said plurality of component carriers sharing uplink transmission timing information as a group to which said one component carrier belongs; and make from said communication device a further uplink transmission on said one component carrier using uplink transmission timing information for said group specified in said group information. 59. A computer program product comprising program code means which when loaded into a computer controls the computer to: receive at an access node on one component carrier of a plurality of component carriers associated with said access node an uplink transmission from a communication device initiating an access procedure; at least partly on the basis of a measurement of a parameter of said uplink transmission, determine which group of said plurality of component carriers sharing uplink transmission timing information is to include said one component carrier on which said uplink transmission was received; and transmit the result of said determination from said access node to said communication device. | 2,400 |
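The access-node side of the claims above measures a parameter of a received uplink transmission and decides which timing-advance group of component carriers should include the carrier on which it arrived. A minimal sketch of one plausible decision rule follows; the group names, the use of microseconds, and the nearest-reference-timing rule are all assumptions for illustration, not the claimed method.

```python
# Hypothetical sketch: assign the component carrier on which a random-access
# transmission was received to the timing-advance group whose reference uplink
# timing is closest to the measured timing. Names and units are illustrative.

def assign_ta_group(measured_timing_us, groups):
    """Return the group whose reference uplink timing (in microseconds)
    is closest to the timing measured on the received transmission."""
    return min(groups, key=lambda g: abs(groups[g] - measured_timing_us))

# Two timing-advance groups with assumed reference timings (microseconds).
ta_groups = {"TAG-primary": 3.2, "TAG-secondary": 11.7}

# A measurement of 10.9 us is nearer the secondary group's reference timing,
# so that group is reported back to the communication device.
print(assign_ta_group(10.9, ta_groups))  # -> TAG-secondary
```

The result of this determination is what the claims describe as being transmitted from the access node back to the communication device.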
7,574 | 7,574 | 13,597,131 | 2,483 | Quantization (scaling) matrices for HEVC standards using an HVS-based mathematical model and data analysis are described herein. A quadratic parameter model-based quantization matrix design is also included. | 1. A method of implementing a quantization matrix design for high efficiency video coding programmed in a memory of a device comprising:
a. determining intra quantization matrices of square-shaped blocks; and b. converting the intra quantization matrices of the square-shaped blocks into corresponding inter square-shaped quantization matrices. 2. The method of claim 1 further comprising determining intra quantization matrices of rectangular-shaped blocks. 3. The method of claim 2 further comprising converting the intra quantization matrices of the rectangular-shaped blocks into corresponding inter rectangular-shaped quantization matrices. 4. The method of claim 1 wherein converting comprises using reference advanced video coding quantization matrices model-based algorithms. 5. The method of claim 1 wherein the intra quantization matrices are derived from contrast sensitivity functions adjustment-based algorithms. 6. The method of claim 1 wherein the intra quantization matrices are selected from the group consisting of 4×4, 8×8, 16×16 and 32×32. 7. The method of claim 2 wherein the intra quantization matrices are selected from the group consisting of 16×4, 32×8, 8×2 and 32×2. 8. The method of claim 1 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an portable music player, a tablet computer, a video player, a DVD writer/player, a Blu-ray writer/player, a television and a home entertainment system. 9. A method of implementing a quantization matrix design for high efficiency video coding programmed in a memory of a device comprising:
a. determining intra quantization matrices of square-shaped blocks and the intra quantization matrices of rectangular-shaped blocks; and b. converting the intra quantization matrices of the square-shaped blocks into corresponding inter square-shaped quantization matrices and the intra quantization matrices of the rectangular-shaped blocks into corresponding inter rectangular-shaped quantization matrices. 10. The method of claim 9 wherein converting comprises using reference advanced video coding quantization matrices model-based algorithms. 11. The method of claim 9 wherein the intra quantization matrices are derived from contrast sensitivity functions adjustment-based algorithms. 12. The method of claim 9 wherein the intra quantization matrices are selected from the group consisting of 4×4, 8×8, 16×16 and 32×32. 13. The method of claim 9 wherein the intra quantization matrices are selected from the group consisting of 16×4, 32×8, 8×2 and 32×2. 14. The method of claim 9 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an portable music player, a tablet computer, a video player, a DVD writer/player, a Blu-ray writer/player, a television and a home entertainment system. 15. An apparatus comprising:
a. a memory for storing an application, the application for:
i. determining intra quantization matrices of square-shaped blocks; and
ii. converting the intra quantization matrices of the square-shaped blocks into corresponding inter square-shaped quantization matrices; and
b. a processing component coupled to the memory, the processing component configured for processing the application. 16. The apparatus of claim 15 further comprising determining intra quantization matrices of rectangular-shaped blocks. 17. The apparatus of claim 16 further comprising converting the intra quantization matrices of the rectangular-shaped blocks into corresponding inter rectangular-shaped quantization matrices. 18. The apparatus of claim 15 wherein converting comprises using reference advanced video coding quantization matrices model-based algorithms. 19. The apparatus of claim 15 wherein the intra quantization matrices are derived from contrast sensitivity functions adjustment-based algorithms. 20. The apparatus of claim 15 wherein the intra quantization matrices are selected from the group consisting of 4×4, 8×8, 16×16 and 32×32. 21. The apparatus of claim 16 wherein the intra quantization matrices are selected from the group consisting of 16×4, 32×8, 8×2 and 32×2. 22. The apparatus of claim 15 wherein the apparatus is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an portable music player, a tablet computer, a video player, a DVD writer/player, a Blu-ray writer/player, a television and a home entertainment system. | Quantization (scaling) matrices for HEVC standards using an HVS-based mathematical model and data analysis are described herein. A quadratic parameter model-based quantization matrix design is also included.1. A method of implementing a quantization matrix design for high efficiency video coding programmed in a memory of a device comprising:
a. determining intra quantization matrices of square-shaped blocks; and b. converting the intra quantization matrices of the square-shaped blocks into corresponding inter square-shaped quantization matrices. 2. The method of claim 1 further comprising determining intra quantization matrices of rectangular-shaped blocks. 3. The method of claim 2 further comprising converting the intra quantization matrices of the rectangular-shaped blocks into corresponding inter rectangular-shaped quantization matrices. 4. The method of claim 1 wherein converting comprises using reference advanced video coding quantization matrices model-based algorithms. 5. The method of claim 1 wherein the intra quantization matrices are derived from contrast sensitivity functions adjustment-based algorithms. 6. The method of claim 1 wherein the intra quantization matrices are selected from the group consisting of 4×4, 8×8, 16×16 and 32×32. 7. The method of claim 2 wherein the intra quantization matrices are selected from the group consisting of 16×4, 32×8, 8×2 and 32×2. 8. The method of claim 1 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an portable music player, a tablet computer, a video player, a DVD writer/player, a Blu-ray writer/player, a television and a home entertainment system. 9. A method of implementing a quantization matrix design for high efficiency video coding programmed in a memory of a device comprising:
a. determining intra quantization matrices of square-shaped blocks and the intra quantization matrices of rectangular-shaped blocks; and b. converting the intra quantization matrices of the square-shaped blocks into corresponding inter square-shaped quantization matrices and the intra quantization matrices of the rectangular-shaped blocks into corresponding inter rectangular-shaped quantization matrices. 10. The method of claim 9 wherein converting comprises using reference advanced video coding quantization matrices model-based algorithms. 11. The method of claim 9 wherein the intra quantization matrices are derived from contrast sensitivity functions adjustment-based algorithms. 12. The method of claim 9 wherein the intra quantization matrices are selected from the group consisting of 4×4, 8×8, 16×16 and 32×32. 13. The method of claim 9 wherein the intra quantization matrices are selected from the group consisting of 16×4, 32×8, 8×2 and 32×2. 14. The method of claim 9 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an portable music player, a tablet computer, a video player, a DVD writer/player, a Blu-ray writer/player, a television and a home entertainment system. 15. An apparatus comprising:
a. a memory for storing an application, the application for:
i. determining intra quantization matrices of square-shaped blocks; and
ii. converting the intra quantization matrices of the square-shaped blocks into corresponding inter square-shaped quantization matrices; and
b. a processing component coupled to the memory, the processing component configured for processing the application. 16. The apparatus of claim 15 further comprising determining intra quantization matrices of rectangular-shaped blocks. 17. The apparatus of claim 16 further comprising converting the intra quantization matrices of the rectangular-shaped blocks into corresponding inter rectangular-shaped quantization matrices. 18. The apparatus of claim 15 wherein converting comprises using reference advanced video coding quantization matrices model-based algorithms. 19. The apparatus of claim 15 wherein the intra quantization matrices are derived from contrast sensitivity functions adjustment-based algorithms. 20. The apparatus of claim 15 wherein the intra quantization matrices are selected from the group consisting of 4×4, 8×8, 16×16 and 32×32. 21. The apparatus of claim 16 wherein the intra quantization matrices are selected from the group consisting of 16×4, 32×8, 8×2 and 32×2. 22. The apparatus of claim 15 wherein the apparatus is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, an portable music player, a tablet computer, a video player, a DVD writer/player, a Blu-ray writer/player, a television and a home entertainment system. | 2,400 |
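The claims in the row above convert intra quantization matrices into inter matrices using "reference advanced video coding quantization matrices model-based algorithms," without giving the model. The sketch below uses an assumed elementwise affine model (inter = a·intra + b); the coefficients and the 4×4 sample matrix are illustrative only and are not the patented algorithm.

```python
# Illustrative sketch only: convert an intra quantization matrix into an
# inter matrix with an assumed elementwise affine model. The coefficients
# a and b and the sample matrix values are assumptions, not the claimed
# model-based algorithm.

def intra_to_inter(intra, a=0.9, b=2):
    """Apply inter[i][j] = round(a * intra[i][j] + b) elementwise."""
    return [[round(a * q + b) for q in row] for row in intra]

# A 4x4 intra quantization matrix (values are illustrative).
intra_4x4 = [
    [16, 17, 20, 24],
    [17, 18, 24, 27],
    [20, 24, 27, 30],
    [24, 27, 30, 34],
]

inter_4x4 = intra_to_inter(intra_4x4)
print(inter_4x4[0])  # -> [16, 17, 20, 24]
```

The same conversion would apply unchanged to the rectangular block shapes (e.g. 16×4, 32×8) recited in the dependent claims, since the model acts elementwise.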
7,575 | 7,575 | 14,303,788 | 2,462 | Embodiments of the present invention provide a mapping method and apparatus for a search space of a physical downlink control channel. The method includes: determining a search space allocated to the PDCCH according to a resource allocation scheme; and mapping each of candidates of the PDCCH to a logic time frequency resource of the search space according to a predefined interval. With the method and apparatus of the embodiments of the present invention, a frequency selective scheduling gain is obtained by mapping different candidates onto discrete time frequency resources, or a frequency diversity gain is obtained by mapping one candidate onto discrete time frequency resources, thereby improving the performance of the PDCCH. | 1. A mapping method for a search space of a physical downlink control channel (PDCCH), comprising:
determining a search space allocated to the PDCCH according to a resource allocation scheme; and mapping each of candidates of the PDCCH onto a time frequency resource of the search space according to a predefined interval. 2. The method according to claim 1, wherein,
the predefined interval is a value such that different candidates of the control channel are mapped onto resources that are not adjacent to each other. 3. The method according to claim 2, wherein,
the predefined interval is a subband size fed back by user equipment (UE). 4. The method according to claim 2, wherein, the predefined interval is a resource block group (RBG) of user equipment (UE). 5. The method according to claim 2, wherein,
if the number of the resources of the search space is less than a product of a total number of the candidates of the PDCCH and the predefined interval, part of the candidates of the PDCCH are mapped first onto time frequency resources to which each of the predefined intervals corresponds, and then the rest of the candidates are mapped in a cyclic shift scheme onto time frequency resources to which each of the predefined intervals corresponds. 6. The method according to claim 5, wherein the method further comprises:
determining the total number of the candidates of the PDCCH according to an aggregation level. 7. The method according to claim 2, wherein,
a position of each of the candidates of the PDCCH at a corresponding time frequency resource to which the predefined interval corresponds is random. 8. The method according to claim 2, wherein the method further comprises:
mapping an eCCE contained in each candidate of the PDCCH onto subcarriers within the same resource block (RB) or subcarriers within neighboring RBs. 9. The method according to claim 2, wherein the method further comprises:
transmitting an index of a corresponding allocating unit within one predefined interval to user equipment (UE). 10. A mapping method for a search space of a physical downlink control channel (PDCCH), comprising:
determining a search space allocated to the PDCCH according to a resource allocation scheme; and mapping discretely multiple allocating units of a resource block (RB) contained in each candidate of the PDCCH onto the allocated search space. 11. A base station, applicable to mapping of a search space of a PDCCH, wherein the base station comprises:
a first determining unit configured to determine a search space allocated to the PDCCH according to a resource allocation scheme; and a first mapping unit configured to map each of candidates of the PDCCH onto a time frequency resource of the search space according to a predefined interval. 12. The base station according to claim 11, wherein
the predefined interval is a value such that different candidates of the control channel are mapped onto resources that are not adjacent to each other. 13. The base station according to claim 12, wherein,
the predefined interval is a subband size fed back by UE. 14. The base station according to claim 12, wherein,
the predefined interval is an RBG of UE. 15. The base station according to claim 12, wherein,
if the number of the resources of the search space is less than a product of a total number of the candidates of the PDCCH and the predefined interval, the first mapping unit first maps part of the candidates of the PDCCH onto time frequency resources to which each of the predefined intervals corresponds, and then maps the rest of the candidates in a cyclic shift scheme onto time frequency resources to which each of the predefined intervals corresponds. 16. The base station according to claim 15, wherein the base station further comprises:
a second determining unit configured to determine the total number of the candidates of the PDCCH according to an aggregation level. 17. The base station according to claim 12, wherein,
a position of each of the candidates of the PDCCH at a corresponding time frequency resource to which the predefined interval corresponds is random. 18. The base station according to claim 12, wherein the base station further comprises:
a second mapping unit configured to map an eCCE contained in each candidate of the PDCCH onto subcarriers within the same RB or subcarriers within neighboring RBs. 19. The base station according to claim 12, wherein the base station further comprises:
a transmitting unit configured to transmit an index of a corresponding allocating unit within one predefined interval to UE. 20. A base station, applicable to mapping of a search space of a PDCCH, wherein the base station comprises:
a determining unit configured to determine a search space allocated to the PDCCH according to a resource allocation scheme; and a mapping unit configured to map discretely multiple allocating units of an RB contained in each candidate of the PDCCH onto the allocated search space. | Embodiments of the present invention provide a mapping method and apparatus for a search space of a physical downlink control channel. The method includes: determining a search space allocated to the PDCCH according to a resource allocation scheme; and mapping each of candidates of the PDCCH to a logic time frequency resource of the search space according to a predefined interval. With the method and apparatus of the embodiments of the present invention, a frequency selective scheduling gain is obtained by mapping different candidates onto discrete time frequency resources, or a frequency diversity gain is obtained by mapping one candidate onto discrete time frequency resources, thereby improving the performance of the PDCCH.1. A mapping method for a search space of a physical downlink control channel (PDCCH), comprising:
determining a search space allocated to the PDCCH according to a resource allocation scheme; and mapping each of candidates of the PDCCH onto a time frequency resource of the search space according to a predefined interval. 2. The method according to claim 1, wherein,
the predefined interval is a value such that different candidates of the control channel are mapped onto resources that are not adjacent to each other. 3. The method according to claim 2, wherein,
the predefined interval is a subband size fed back by user equipment (UE). 4. The method according to claim 2, wherein, the predefined interval is a resource block group (RBG) of user equipment (UE). 5. The method according to claim 2, wherein,
if the number of the resources of the search space is less than a product of a total number of the candidates of the PDCCH and the predefined interval, part of the candidates of the PDCCH are mapped first onto time frequency resources to which each of the predefined intervals corresponds, and then the rest of the candidates are mapped in a cyclic shift scheme onto time frequency resources to which each of the predefined intervals corresponds. 6. The method according to claim 5, wherein the method further comprises:
determining the total number of the candidates of the PDCCH according to an aggregation level. 7. The method according to claim 2, wherein,
a position of each of the candidates of the PDCCH at a corresponding time frequency resource to which the predefined interval corresponds is random. 8. The method according to claim 2, wherein the method further comprises:
mapping an eCCE contained in each candidate of the PDCCH onto subcarriers within the same resource block (RB) or subcarriers within neighboring RBs. 9. The method according to claim 2, wherein the method further comprises:
transmitting an index of a corresponding allocating unit within one predefined interval to user equipment (UE). 10. A mapping method for a search space of a physical downlink control channel (PDCCH), comprising:
determining a search space allocated to the PDCCH according to a resource allocation scheme; and mapping discretely multiple allocating units of a resource block (RB) contained in each candidate of the PDCCH onto the allocated search space. 11. A base station, applicable to mapping of a search space of a PDCCH, wherein the base station comprises:
a first determining unit configured to determine a search space allocated to the PDCCH according to a resource allocation scheme; and a first mapping unit configured to map each of candidates of the PDCCH onto a time frequency resource of the search space according to a predefined interval. 12. The base station according to claim 11, wherein
the predefined interval is a value such that different candidates of the control channel are mapped onto resources that are not adjacent to each other. 13. The base station according to claim 12, wherein,
the predefined interval is a subband size fed back by UE. 14. The base station according to claim 12, wherein,
the predefined interval is an RBG of UE. 15. The base station according to claim 12, wherein,
if the number of the resources of the search space is less than a product of a total number of the candidates of the PDCCH and the predefined interval, the first mapping unit first maps part of the candidates of the PDCCH onto time frequency resources to which each of the predefined intervals corresponds, and then maps the rest of the candidates in a cyclic shift scheme onto time frequency resources to which each of the predefined intervals corresponds. 16. The base station according to claim 15, wherein the base station further comprises:
a second determining unit configured to determine the total number of the candidates of the PDCCH according to an aggregation level. 17. The base station according to claim 12, wherein,
a position of each of the candidates of the PDCCH at a corresponding time frequency resource to which the predefined interval corresponds is random. 18. The base station according to claim 12, wherein the base station further comprises:
a second mapping unit configured to map an eCCE contained in each candidate of the PDCCH onto subcarriers within the same RB or subcarriers within neighboring RBs. 19. The base station according to claim 12, wherein the base station further comprises:
a transmitting unit configured to transmit an index of a corresponding allocating unit within one predefined interval to UE. 20. A base station, applicable to mapping of a search space of a PDCCH, wherein the base station comprises:
a determining unit configured to determine a search space allocated to the PDCCH according to a resource allocation scheme; and a mapping unit configured to map discretely multiple allocating units of an RB contained in each candidate of the PDCCH onto the allocated search space. | 2,400 |
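The mapping recited in the row above places each PDCCH candidate a predefined interval apart in the search space, and when the search space holds fewer resources than candidates × interval, maps the remaining candidates in a cyclic shift scheme. A minimal sketch using a modular placement rule follows; the modular rule itself and all names are assumptions for illustration.

```python
# Minimal sketch of the claimed candidate-to-resource mapping: candidates
# are spaced a predefined interval apart, and when the search space is too
# small to hold candidates * interval resources, the remaining candidates
# wrap around cyclically. The modular rule is an assumption, not the
# patented mapping.

def map_candidates(num_candidates, interval, num_resources):
    """Return the starting resource index for each PDCCH candidate."""
    return [(i * interval) % num_resources for i in range(num_candidates)]

# 4 candidates, interval of 4 resources, search space of 10 resources:
# candidates 0-2 land at indices 0, 4, 8; candidate 3 wraps around to 2,
# i.e. the "cyclic shift scheme" case where 10 < 4 * 4.
print(map_candidates(4, 4, 10))  # -> [0, 4, 8, 2]
```

Spacing candidates apart like this is what yields the frequency selective scheduling gain described in the abstract: different candidates fall on discrete, non-adjacent time frequency resources.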
7,576 | 7,576 | 14,522,459 | 2,425 | Techniques for on-demand metadata insertion into single-stream content are described. In one or more implementations, media content is obtained responsive to a request. The media content can be included in a content stream that also includes alternate content that is spliced into the content stream. Metadata is injected into the content stream at runtime in association with a starting point of the alternate content. The metadata can enable a media player to identify the alternate content and a location of the alternate content within the content stream. The content stream is then transmitted as a single stream to the media player for playback of both the media content and the alternate content. | 1. A computer-implemented method, comprising:
obtaining media content responsive to a request, the media content being included in a content stream that also includes alternate content that is spliced into the content stream; injecting metadata into the content stream at runtime in association with a starting point of the alternate content, the metadata configured to enable a media player to identify the alternate content and a location of the alternate content within the content stream; and transmitting the content stream as a single stream to the media player for playback of both the media content and the alternate content. 2. A computer-implemented method as recited in claim 1, further comprising:
determining whether the metadata is pre-packaged with the alternate content; and performing said injecting of the metadata responsive to a determination that the metadata is not pre-packaged with the alternate content. 3. A computer-implemented method as recited in claim 1, wherein the metadata is generated at runtime. 4. A computer-implemented method as recited in claim 1, wherein the alternate content is obtained from an ad server. 5. A computer-implemented method as recited in claim 1, wherein the alternate content comprises one or more advertisements. 6. A computer-implemented method as recited in claim 1, wherein the metadata includes a link to additional content associated with the alternate content. 7. A system, comprising:
one or more processors; and a memory having instructions that are executable by the one or more processors to implement a media content service, the media content service configured to at least: receive a request for content to be delivered via a single stream of content; identify advertisement locations corresponding to one or more advertisements that are included in the single stream of content; and embed metadata into the single stream of content at runtime based on the request, the metadata being associated with the one or more advertisements to enable a client device to identify the one or more advertisements and ascertain when the one or more advertisements begin playback within the single stream of content based on the advertisement locations. 8. A system as recited in claim 7, wherein the media content service is further configured to generate the metadata for the one or more advertisements in response to the request being received. 9. A system as recited in claim 8, wherein the metadata is embedded into the single stream of content in response to a determination that the one or more advertisements do not include associated pre-packaged metadata. 10. A system as recited in claim 7, wherein the metadata includes a link to additional content that is associated with the one or more advertisements. 11. A system as recited in claim 7, wherein the metadata is configured to identify a beginning location of the one or more advertisements within the single stream of content. 12. A system as recited in claim 7, wherein the metadata includes a link to a merchant site that sells products or services associated with the one or more advertisements. 13. A system as recited in claim 7, wherein metadata embedded into the single stream is configured to enable the client device to track the one or more advertisements. 14. 
Computer-readable storage media comprising instructions that are executable by a computing device to implement a video player, the video player configured to perform operations comprising:
transmitting a request for media content to a media source; receiving a single content stream comprising the media content, additional media content that was spliced in to the single content stream, and metadata associated with the additional media content; processing the single content stream to playback the media content and the additional media content; responsive to encountering the metadata during said processing of the single content stream, parsing the metadata to identify the additional media content and ascertain when the additional media content begins playback; and using the metadata to track the additional media content. 15. Computer-readable storage media as recited in claim 14, wherein the video player comprises a single player and is configured to playback both the media content and the additional media content in the single content stream without utilizing a second video player to playback the additional media content. 16. Computer-readable storage media as recited in claim 14, wherein the operations further comprise playing back additional media content in the single content stream without blocking playback of the media content. 17. Computer-readable storage media as recited in claim 14, wherein the metadata was embedded into the single content stream at runtime in response to the request for the media content. 18. Computer-readable storage media as recited in claim 14, wherein the metadata is used to determine when playback of the additional media content has ended. 19. Computer-readable storage media as recited in claim 14, wherein the additional media content comprises one or more advertisements. 20. Computer-readable storage media as recited in claim 14, wherein the metadata includes a link to a merchant site that sells products or services associated with the additional media content. | Techniques for on-demand metadata insertion into single-stream content are described. 
In one or more implementations, media content is obtained responsive to a request. The media content can be included in a content stream that also includes alternate content that is spliced into the content stream. Metadata is injected into the content stream at runtime in association with a starting point of the alternate content. The metadata can enable a media player to identify the alternate content and a location of the alternate content within the content stream. The content stream is then transmitted as a single stream to the media player for playback of both the media content and the alternate content. 1. A computer-implemented method, comprising:
obtaining media content responsive to a request, the media content being included in a content stream that also includes alternate content that is spliced into the content stream; injecting metadata into the content stream at runtime in association with a starting point of the alternate content, the metadata configured to enable a media player to identify the alternate content and a location of the alternate content within the content stream; and transmitting the content stream as a single stream to the media player for playback of both the media content and the alternate content. 2. A computer-implemented method as recited in claim 1, further comprising:
determining whether the metadata is pre-packaged with the alternate content; and performing said injecting of the metadata responsive to a determination that the metadata is not pre-packaged with the alternate content. 3. A computer-implemented method as recited in claim 1, wherein the metadata is generated at runtime. 4. A computer-implemented method as recited in claim 1, wherein the alternate content is obtained from an ad server. 5. A computer-implemented method as recited in claim 1, wherein the alternate content comprises one or more advertisements. 6. A computer-implemented method as recited in claim 1, wherein the metadata includes a link to additional content associated with the alternate content. 7. A system, comprising:
one or more processors; and a memory having instructions that are executable by the one or more processors to implement a media content service, the media content service configured to at least: receive a request for content to be delivered via a single stream of content; identify advertisement locations corresponding to one or more advertisements that are included in the single stream of content; and embed metadata into the single stream of content at runtime based on the request, the metadata being associated with the one or more advertisements to enable a client device to identify the one or more advertisements and ascertain when the one or more advertisements begin playback within the single stream of content based on the advertisement locations. 8. A system as recited in claim 7, wherein the media content service is further configured to generate the metadata for the one or more advertisements in response to the request being received. 9. A system as recited in claim 8, wherein the metadata is embedded into the single stream of content in response to a determination that the one or more advertisements do not include associated pre-packaged metadata. 10. A system as recited in claim 7, wherein the metadata includes a link to additional content that is associated with the one or more advertisements. 11. A system as recited in claim 7, wherein the metadata is configured to identify a beginning location of the one or more advertisements within the single stream of content. 12. A system as recited in claim 7, wherein the metadata includes a link to a merchant site that sells products or services associated with the one or more advertisements. 13. A system as recited in claim 7, wherein metadata embedded into the single stream is configured to enable the client device to track the one or more advertisements. 14. 
Computer-readable storage media comprising instructions that are executable by a computing device to implement a video player, the video player configured to perform operations comprising:
transmitting a request for media content to a media source; receiving a single content stream comprising the media content, additional media content that was spliced in to the single content stream, and metadata associated with the additional media content; processing the single content stream to playback the media content and the additional media content; responsive to encountering the metadata during said processing of the single content stream, parsing the metadata to identify the additional media content and ascertain when the additional media content begins playback; and using the metadata to track the additional media content. 15. Computer-readable storage media as recited in claim 14, wherein the video player comprises a single player and is configured to playback both the media content and the additional media content in the single content stream without utilizing a second video player to playback the additional media content. 16. Computer-readable storage media as recited in claim 14, wherein the operations further comprise playing back additional media content in the single content stream without blocking playback of the media content. 17. Computer-readable storage media as recited in claim 14, wherein the metadata was embedded into the single content stream at runtime in response to the request for the media content. 18. Computer-readable storage media as recited in claim 14, wherein the metadata is used to determine when playback of the additional media content has ended. 19. Computer-readable storage media as recited in claim 14, wherein the additional media content comprises one or more advertisements. 20. Computer-readable storage media as recited in claim 14, wherein the metadata includes a link to a merchant site that sells products or services associated with the additional media content. | 2,400 |
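The runtime-injection flow recited in these claims (check whether metadata is pre-packaged with the alternate content, inject it at the splice point only when it is not, then deliver everything as one stream) can be sketched as follows. The segment dictionaries and their field names are assumptions for illustration, not a real streaming container format.

```python
def inject_ad_metadata(segments):
    """Walk a single content stream and, at runtime, insert a metadata
    segment immediately before each spliced-in ad that does not already
    carry pre-packaged metadata.

    Each segment is modeled as a dict; the 'kind', 'id', and
    'has_metadata' fields are illustrative assumptions.
    """
    out = []
    for seg in segments:
        if seg.get("kind") == "ad" and not seg.get("has_metadata"):
            # Generated at runtime: identifies the ad and where it begins
            # within the single stream, so one player can handle both the
            # media content and the spliced-in content.
            out.append({"kind": "metadata",
                        "ad_id": seg["id"],
                        "starts_at": len(out) + 1})
        out.append(seg)
    return out
```

A player encountering the injected metadata segment can parse it to learn which ad follows and when it begins playback, enabling the single-player, single-stream behavior of claims 14-15 without a second player instance.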
7,577 | 7,577 | 15,083,006 | 2,457 | Methods, systems, and computer-readable media for automating deployment of service applications by exposing environmental constraints in a service model are provided. In general, the methods are performed in the context of a general purpose platform configured as a server cloud to run various service applications distributed thereon. Accordingly, the general purpose platform may be flexibly configured to manage varying degrees of characteristics associated with each of the various service applications. Typically, these characteristics are provided in the service model that governs the environmental constraints under which each component program of the service application operates. As such, hosting environments are selected and adapted to satisfy the environmental constraints associated with each component program. Adapting the hosting environments includes installing parameters transformed from configuration settings of each component program via map constructs, thereby refining the hosting environment to support operation of the component program. | 1. One or more computer-readable media having computer-executable instructions embodied thereon that, when executed, perform a method for configuring a hosting environment of a data center based on a definition of a subject role of a service application, the method comprising:
offering a set of base hosting environments, wherein each of the base hosting environments includes a predefined interface architecture; receiving a service model from a developer that provides definitions of one or more roles of the service application, wherein the one or more roles represent component programs that support the functionality of the service application; automatically applying the service model to configure the hosting environment to support the implementation of a subject role of the one or more roles, wherein applying comprises:
(a) selecting one of the set of base hosting environments based on the definition of the subject role; and
(b) refining the selected hosting environment according to map constructs derived from configuration settings of the subject role; and
at least temporarily storing the refined hosting environment in conjunction with the subject role. 2. The computer-readable media of claim 1, wherein the method further comprises establishing the refined hosting environment on a node to underlie the implementation of the subject role, wherein the node represents a computing device of a plurality of distributed computing devices interconnected via a network cloud. 3. The computer-readable media of claim 2, wherein each of the plurality of distributed computing devices is capable of executing a plurality of instances of the one or more roles of the service application, and wherein a particular node of the plurality of nodes is capable of accommodating two or more hosting environments. 4. The computer-readable media of claim 2, wherein the method further comprises:
allocating the node for supporting execution of the subject role in accordance with a deployment specification of the service model, wherein the deployment specification provides instructions for installing instances of the one or more roles throughout the data center; placing the subject role on the allocated node; and administering values to configuration settings of the placed subject role. 5. The computer-readable media of claim 4, wherein installing values to configuration settings of the placed subject role comprises utilizing map constructs to transform the definition of the subject role in the service model into values that are capable of being installed into the configuration settings of the subject role. 6. The computer-readable media of claim 5, wherein the definitions of the one or more roles are based, in part, on environmental dependencies of each of the one or more roles, and wherein the environmental dependencies of each of the one or more roles are specified by a developer of the service application. 7. The computer-readable media of claim 6, wherein the environmental dependencies comprise at least one of resources made accessible by the underlying hosting environment on which the subject role depends for proper implementation or other instances of the one or more roles of the service application with which the subject role interacts. 8. The computer-readable media of claim 7, wherein the specification for deployment inspects the environmental dependencies to define which channels, established within the data center, are utilized as communication paths between the placed subject role and the instantiated instances of the one or more roles. 9. The computer-readable media of claim 8, wherein refining the selected hosting environment according to map constructs derived from configuration settings of the subject role comprises:
tailoring a hosting environment in accordance with the map constructs associated with the subject role; stacking the tailored hosting environment onto the selected base hosting environment, thereby affecting a configuration of the selected base hosting environment; and merging the stacked tailored and base hosting environments to form the refined hosting environment. 10. The computer-readable media of claim 9, wherein the map constructs transform the values administered to the configuration settings of the subject role into parameters that are utilized to configure the tailored hosting environment. 11. The computer-readable media of claim 9, wherein refining the selected hosting environment according to map constructs ensures a particular level of security for the subject role supported thereby, and wherein the level of security corresponds with a degree of spatial isolation of the refined hosting environment from other hosting environments installed in the data center. 12. A computerized method for updating a service application operating within a distributed data center based on a service model, the method comprising:
receiving an indication to increase a number of instances of a role of the service application, wherein the role represents a particular class of components that operate in conjunction with other roles of the service application to realize distributed functionality thereof; allocating a node within the data center for instantiating an instance of the role thereon, wherein the processes of allocation and instantiation are carried out in accordance with a definition of the role retained at the service model; automatically configuring a hosting environment to underlie implementation of the instantiated role instance, wherein automatically configuring comprises:
(a) forming mapping constructs that transform configuration settings of the instantiated role instance into parameters; and
(b) utilizing the parameters to refine a base hosting environment having a predefined configuration; and
installing the refined hosting environment onto the allocated node. 13. The computerized method of claim 12, wherein the indication arises from an event comprising at least one of a change in a remote-usage workload of the service application or one or more nodes of the data center falling offline. 14. The computerized method of claim 12, further comprising:
identifying a component program established on the node of the data center; abstracting map constructs from the definition of the role retained in the service model; and utilizing the map constructs to perform the instantiation of the instance of the role at the component program. 15. The computerized method of claim 14, wherein abstracting map constructs from the definition of the role retained in the service model comprises transforming environmental dependencies within the role definition into values; and wherein utilizing the map constructs to instantiate an instance of the role at the component program comprises administering the values to configuration settings of the component program. 16. The computerized method of claim 15, wherein utilizing the parameters to refine a base hosting environment having a predefined configuration comprises:
based on the values administered to the configuration settings of the component program, selecting the base hosting environment from a set of base hosting environments offered by the data center; tailoring a hosting environment in accordance with the map constructs associated with the role; and stacking the tailored hosting environment onto the selected base hosting environment, thereby affecting a configuration of the selected base hosting environment. 17. The computerized method of claim 16, wherein stacked tailored and base hosting environments encompass a set of concrete application programming interfaces (APIs) that are at the disposal of the instantiated role. 18. The computerized method of claim 17, wherein the set of concrete APIs facilitate communication with resources and other role instances instantiated within the data center accessible to the instantiated role. 19. The computerized method of claim 16, wherein stacked tailored and base hosting environments encompass interactive APIs that reveal to a developer of the service application information comprising at least one of a configuration of the stacked tailored and base hosting environments or the resources accessible to the instantiated role. 20. A computer system for performing a method that automatically configures a hosting environment upon instantiating a role instance of a service application within a data center, wherein the data center includes distributed computing devices, the computer system comprising a computer storage medium having a plurality of computer software components embodied thereon, the computer software components comprising:
a service model that exposes environmental dependencies of the role instance; a fabric controller for interpreting the service model to abstract a first map construct configured for transforming the environmental dependencies into values that are administered to configuration settings of the role, and for allocating one of the distributed computing devices for installing the role and the hosting environment thereon; and an agent disposed on the allocated computing device for employing a second map construct that transforms the configuration settings of the role into parameters that are utilized to automatically select a base hosting environment and to automatically refine the base hosting environment, thereby providing APIs that connect the role to resources of the data center that support implementation of the role. | Methods, systems, and computer-readable media for automating deployment of service applications by exposing environmental constraints in a service model are provided. In general, the methods are performed in the context of a general purpose platform configured as a server cloud to run various service applications distributed thereon. Accordingly, the general purpose platform may be flexibly configured to manage varying degrees of characteristics associated with each of the various service applications. Typically, these characteristics are provided in the service model that governs the environmental constraints under which each component program of the service application operates. As such, hosting environments are selected and adapted to satisfy the environmental constraints associated with each component program. Adapting the hosting environments includes installing parameters transformed from configuration settings of each component program via map constructs, thereby refining the hosting environment to support operation of the component program. 1. 
One or more computer-readable media having computer-executable instructions embodied thereon that, when executed, perform a method for configuring a hosting environment of a data center based on a definition of a subject role of a service application, the method comprising:
offering a set of base hosting environments, wherein each of the base hosting environments includes a predefined interface architecture; receiving a service model from a developer that provides definitions of one or more roles of the service application, wherein the one or more roles represent component programs that support the functionality of the service application; automatically applying the service model to configure the hosting environment to support the implementation of a subject role of the one or more roles, wherein applying comprises:
(a) selecting one of the set of base hosting environments based on the definition of the subject role; and
(b) refining the selected hosting environment according to map constructs derived from configuration settings of the subject role; and
at least temporarily storing the refined hosting environment in conjunction with the subject role. 2. The computer-readable media of claim 1, wherein the method further comprises establishing the refined hosting environment on a node to underlie the implementation of the subject role, wherein the node represents a computing device of a plurality of distributed computing devices interconnected via a network cloud. 3. The computer-readable media of claim 2, wherein each of the plurality of distributed computing devices is capable of executing a plurality of instances of the one or more roles of the service application, and wherein a particular node of the plurality of nodes is capable of accommodating two or more hosting environments. 4. The computer-readable media of claim 2, wherein the method further comprises:
allocating the node for supporting execution of the subject role in accordance with a deployment specification of the service model, wherein the deployment specification provides instructions for installing instances of the one or more roles throughout the data center; placing the subject role on the allocated node; and administering values to configuration settings of the placed subject role. 5. The computer-readable media of claim 4, wherein installing values to configuration settings of the placed subject role comprises utilizing map constructs to transform the definition of the subject role in the service model into values that are capable of being installed into the configuration settings of the subject role. 6. The computer-readable media of claim 5, wherein the definitions of the one or more roles are based, in part, on environmental dependencies of each of the one or more roles, and wherein the environmental dependencies of each of the one or more roles are specified by a developer of the service application. 7. The computer-readable media of claim 6, wherein the environmental dependencies comprise at least one of resources made accessible by the underlying hosting environment on which the subject role depends for proper implementation or other instances of the one or more roles of the service application with which the subject role interacts. 8. The computer-readable media of claim 7, wherein the specification for deployment inspects the environmental dependencies to define which channels, established within the data center, are utilized as communication paths between the placed subject role and the instantiated instances of the one or more roles. 9. The computer-readable media of claim 8, wherein refining the selected hosting environment according to map constructs derived from configuration settings of the subject role comprises:
tailoring a hosting environment in accordance with the map constructs associated with the subject role; stacking the tailored hosting environment onto the selected base hosting environment, thereby affecting a configuration of the selected base hosting environment; and merging the stacked tailored and base hosting environments to form the refined hosting environment. 10. The computer-readable media of claim 9, wherein the map constructs transform the values administered to the configuration settings of the subject role into parameters that are utilized to configure the tailored hosting environment. 11. The computer-readable media of claim 9, wherein refining the selected hosting environment according to map constructs ensures a particular level of security for the subject role supported thereby, and wherein the level of security corresponds with a degree of spatial isolation of the refined hosting environment from other hosting environments installed in the data center. 12. A computerized method for updating a service application operating within a distributed data center based on a service model, the method comprising:
receiving an indication to increase a number of instances of a role of the service application, wherein the role represents a particular class of components that operate in conjunction with other roles of the service application to realize distributed functionality thereof; allocating a node within the data center for instantiating an instance of the role thereon, wherein the processes of allocation and instantiation are carried out in accordance with a definition of the role retained at the service model; automatically configuring a hosting environment to underlie implementation of the instantiated role instance, wherein automatically configuring comprises:
(a) forming mapping constructs that transform configuration settings of the instantiated role instance into parameters; and
(b) utilizing the parameters to refine a base hosting environment having a predefined configuration; and
installing the refined hosting environment onto the allocated node. 13. The computerized method of claim 12, wherein the indication arises from an event comprising at least one of a change in a remote-usage workload of the service application or one or more nodes of the data center falling offline. 14. The computerized method of claim 12, further comprising:
identifying a component program established on the node of the data center; abstracting map constructs from the definition of the role retained in the service model; and utilizing the map constructs to perform the instantiation of the instance of the role at the component program. 15. The computerized method of claim 14, wherein abstracting map constructs from the definition of the role retained in the service model comprises transforming environmental dependencies within the role definition into values; and wherein utilizing the map constructs to instantiate an instance of the role at the component program comprises administering the values to configuration settings of the component program. 16. The computerized method of claim 15, wherein utilizing the parameters to refine a base hosting environment having a predefined configuration comprises:
based on the values administered to the configuration settings of the component program, selecting the base hosting environment from a set of base hosting environments offered by the data center; tailoring a hosting environment in accordance with the map constructs associated with the role; and stacking the tailored hosting environment onto the selected base hosting environment, thereby affecting a configuration of the selected base hosting environment. 17. The computerized method of claim 16, wherein stacked tailored and base hosting environments encompass a set of concrete application programming interfaces (APIs) that are at the disposal of the instantiated role. 18. The computerized method of claim 17, wherein the set of concrete APIs facilitate communication with resources and other role instances instantiated within the data center accessible to the instantiated role. 19. The computerized method of claim 16, wherein stacked tailored and base hosting environments encompass interactive APIs that reveal to a developer of the service application information comprising at least one of a configuration of the stacked tailored and base hosting environments or the resources accessible to the instantiated role. 20. A computer system for performing a method that automatically configures a hosting environment upon instantiating a role instance of a service application within a data center, wherein the data center includes distributed computing devices, the computer system comprising a computer storage medium having a plurality of computer software components embodied thereon, the computer software components comprising:
a service model that exposes environmental dependencies of the role instance; a fabric controller for interpreting the service model to abstract a first map construct configured for transforming the environmental dependencies into values that are administered to configuration settings of the role, and for allocating one of the distributed computing devices for installing the role and the hosting environment thereon; and an agent disposed on the allocated computing device for employing a second map construct that transforms the configuration settings of the role into parameters that are utilized to automatically select a base hosting environment and to automatically refine the base hosting environment, thereby providing APIs that connect the role to resources of the data center that support implementation of the role. | 2,400 |
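The two-stage map-construct mechanism these claims describe (transform a role's configuration settings into parameters, then use those parameters to refine a selected base hosting environment) can be sketched as follows. The dictionary shapes, setting names, and parameter names are hypothetical examples, not taken from the patent.

```python
def refine_hosting_environment(base_env, role_settings, map_constructs):
    """Refine a base hosting environment with parameters derived from a
    role's configuration settings via map constructs.

    A map construct is modeled here as: setting name -> (parameter name,
    transform function). All names are hypothetical.
    """
    env = dict(base_env)  # start from the selected base hosting environment
    for setting, (param, transform) in map_constructs.items():
        if setting in role_settings:
            # Transform the role's configuration value into an environment
            # parameter and stack it onto the base configuration.
            env[param] = transform(role_settings[setting])
    return env
```

For example, a role setting `"max_connections": "200"` might map to an integer `thread_pool_size` parameter, while a boolean isolation setting selects the degree of spatial isolation mentioned in claim 11; both names are illustrative.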
7,578 | 7,578 | 14,020,926 | 2,441 | Supporting combination decisions of virtual resources to be located in a single physical resource includes acquiring change status information indicating a change in each of a plurality of virtual resources designated as virtual resource candidates to be located in a single physical resource and determining, using a processor, whether a high correlation exists between changes in the loads of each of the plurality of virtual resources indicated in the change status information by performing statistical processing on the change status information. A suitability of locating the plurality of virtual resources in a single physical resource is determined according to the determination of whether a high correlation exists. | 1. A method of supporting combination decisions of virtual resources to be located in a single physical resource, the method comprising:
acquiring change status information indicating a change in each of a plurality of virtual resources designated as virtual resource candidates to be located in a single physical resource; determining, using a processor, whether a high correlation exists between changes in the loads of each of the plurality of virtual resources indicated in the change status information by performing statistical processing on the change status information; and determining a suitability of locating the plurality of virtual resources in a single physical resource according to the determination of whether a high correlation exists. 2. The method of claim 1, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is suitable responsive to a determination that a high correlation does not exist between changes in the loads of each of the plurality of virtual resources. 3. The method of claim 1, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is unsuitable responsive to a determination that a high correlation does exist between changes in the loads of each of the plurality of virtual resources. 4. The method of claim 1, wherein the change status information indicates a direction of change in each load of the plurality of virtual resources, and wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources by referencing the occurrences of a rise or a fall in the directions of change among each load of the plurality of virtual resources indicated in the change status information. 5. The method of claim 4, wherein determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources comprises:
determining that a high correlation exists between the directions of change in each load of a plurality of virtual resources when the direction of change in the loads of some of the plurality of virtual resources is in the rising direction during the same period of time, the direction of change in the loads of some of the plurality of virtual resources is in the falling direction during the same period of time, and the occurrence of either exceeds a predetermined reference. 6. The method of claim 1, wherein the change status information indicates values after a change in each of the loads of a plurality of virtual resources, wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources by referencing the occurrences between values after the change in each of the loads of the plurality of virtual resources indicated in the change status information which are greater than a predetermined typical value and are less than a predetermined typical value. 7. The method of claim 6, wherein determining whether a high correlation exists between the values after the change comprises:
determining that a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources when some values after the change in each of the loads of the plurality of virtual resources are greater than a predetermined typical value during the same period of time, some values after the change in each of the loads of the plurality of virtual resources are less than a predetermined typical value during the same period of time, and the occurrences of either exceed a predetermined reference. 8. A system for supporting combination decisions of virtual resources to be located in a single physical resource, the system comprising:
a processor programmed to initiate executable operations comprising: acquiring change status information indicating a change in each of a plurality of virtual resources designated as virtual resource candidates to be located in a single physical resource; determining whether a high correlation exists between changes in the loads of each of the plurality of virtual resources indicated in the change status information by performing statistical processing on the change status information; and determining a suitability of locating the plurality of virtual resources in a single physical resource according to the determination of whether a high correlation exists. 9. The system of claim 8, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is suitable responsive to a determination that a high correlation does not exist between changes in the loads of each of the plurality of virtual resources. 10. The system of claim 8, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is unsuitable responsive to a determination that a high correlation does exist between changes in the loads of each of the plurality of virtual resources. 11. The system of claim 8, wherein the change status information indicates a direction of change in each load of the plurality of virtual resources, and wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources by referencing the occurrences of a rise or a fall in the directions of change among each load of the plurality of virtual resources indicated in the change status information. 12. The system of claim 11, wherein determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources comprises:
determining that a high correlation exists between the directions of change in each load of a plurality of virtual resources when the direction of change in the loads of some of the plurality of virtual resources is in the rising direction during the same period of time, the direction of change in the loads of some of the plurality of virtual resources is in the falling direction during the same period of time, and the occurrence of either exceeds a predetermined reference. 13. The system of claim 8, wherein the change status information indicates values after a change in each of the loads of a plurality of virtual resources, wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources by referencing the occurrences between values after the change in each of the loads of the plurality of virtual resources indicated in the change status information which are greater than a predetermined typical value and are less than a predetermined typical value. 14. The system of claim 13, wherein determining whether a high correlation exists between the values after the change comprises:
determining that a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources when some values after the change in each of the loads of the plurality of virtual resources are greater than a predetermined typical value during the same period of time, some values after the change in each of the loads of the plurality of virtual resources are less than a predetermined typical value during the same period of time, and the occurrences of either exceed a predetermined reference. 15. A computer program product for supporting combination decisions of virtual resources to be located in a single physical resource, the computer program product comprising a computer readable storage medium having program code stored thereon, the program code executable by a processor to perform a method comprising:
acquiring, using the processor, change status information indicating a change in each of a plurality of virtual resources designated as virtual resource candidates to be located in a single physical resource; determining, using the processor, whether a high correlation exists between changes in the loads of each of the plurality of virtual resources indicated in the change status information by performing statistical processing on the change status information; and determining, using the processor, a suitability of locating the plurality of virtual resources in a single physical resource according to the determination of whether a high correlation exists. 16. The computer program product of claim 15, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is suitable responsive to a determination that a high correlation does not exist between changes in the loads of each of the plurality of virtual resources. 17. The computer program product of claim 15, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is unsuitable responsive to a determination that a high correlation does exist between changes in the loads of each of the plurality of virtual resources. 18. The computer program product of claim 15, wherein the change status information indicates a direction of change in each load of the plurality of virtual resources, and wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources by referencing the occurrences of a rise or a fall in the directions of change among each load of the plurality of virtual resources indicated in the change status information. 19. The computer program product of claim 18, wherein determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources comprises:
determining that a high correlation exists between the directions of change in each load of a plurality of virtual resources when the direction of change in the loads of some of the plurality of virtual resources is in the rising direction during the same period of time, the direction of change in the loads of some of the plurality of virtual resources is in the falling direction during the same period of time, and the occurrence of either exceeds a predetermined reference. 20. The computer program product of claim 15, wherein the change status information indicates values after a change in each of the loads of a plurality of virtual resources, wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources by referencing the occurrences between values after the change in each of the loads of the plurality of virtual resources indicated in the change status information which are greater than a predetermined typical value and are less than a predetermined typical value. | Supporting combination decisions of virtual resources to be located in a single physical resource includes acquiring change status information indicating a change in each of a plurality of virtual resources designated as virtual resource candidates to be located in a single physical resource and determining, using a processor, whether a high correlation exists between changes in the loads of each of the plurality of virtual resources indicated in the change status information by performing statistical processing on the change status information. A suitability of locating the plurality of virtual resources in a single physical resource is determined according to the determination of whether a high correlation exists. 1. A method of supporting combination decisions of virtual resources to be located in a single physical resource, the method comprising:
acquiring change status information indicating a change in each of a plurality of virtual resources designated as virtual resource candidates to be located in a single physical resource; determining, using a processor, whether a high correlation exists between changes in the loads of each of the plurality of virtual resources indicated in the change status information by performing statistical processing on the change status information; and determining a suitability of locating the plurality of virtual resources in a single physical resource according to the determination of whether a high correlation exists. 2. The method of claim 1, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is suitable responsive to a determination that a high correlation does not exist between changes in the loads of each of the plurality of virtual resources. 3. The method of claim 1, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is unsuitable responsive to a determination that a high correlation does exist between changes in the loads of each of the plurality of virtual resources. 4. The method of claim 1, wherein the change status information indicates a direction of change in each load of the plurality of virtual resources, and wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources by referencing the occurrences of a rise or a fall in the directions of change among each load of the plurality of virtual resources indicated in the change status information. 5. The method of claim 4, wherein determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources comprises:
determining that a high correlation exists between the directions of change in each load of a plurality of virtual resources when the direction of change in the loads of some of the plurality of virtual resources is in the rising direction during the same period of time, the direction of change in the loads of some of the plurality of virtual resources is in the falling direction during the same period of time, and the occurrence of either exceeds a predetermined reference. 6. The method of claim 1, wherein the change status information indicates values after a change in each of the loads of a plurality of virtual resources, wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources by referencing the occurrences between values after the change in each of the loads of the plurality of virtual resources indicated in the change status information which are greater than a predetermined typical value and are less than a predetermined typical value. 7. The method of claim 6, wherein determining whether a high correlation exists between the values after the change comprises:
determining that a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources when some values after the change in each of the loads of the plurality of virtual resources are greater than a predetermined typical value during the same period of time, some values after the change in each of the loads of the plurality of virtual resources are less than a predetermined typical value during the same period of time, and the occurrences of either exceed a predetermined reference. 8. A system for supporting combination decisions of virtual resources to be located in a single physical resource, the system comprising:
a processor programmed to initiate executable operations comprising: acquiring change status information indicating a change in each of a plurality of virtual resources designated as virtual resource candidates to be located in a single physical resource; determining whether a high correlation exists between changes in the loads of each of the plurality of virtual resources indicated in the change status information by performing statistical processing on the change status information; and determining a suitability of locating the plurality of virtual resources in a single physical resource according to the determination of whether a high correlation exists. 9. The system of claim 8, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is suitable responsive to a determination that a high correlation does not exist between changes in the loads of each of the plurality of virtual resources. 10. The system of claim 8, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is unsuitable responsive to a determination that a high correlation does exist between changes in the loads of each of the plurality of virtual resources. 11. The system of claim 8, wherein the change status information indicates a direction of change in each load of the plurality of virtual resources, and wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources by referencing the occurrences of a rise or a fall in the directions of change among each load of the plurality of virtual resources indicated in the change status information. 12. The system of claim 11, wherein determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources comprises:
determining that a high correlation exists between the directions of change in each load of a plurality of virtual resources when the direction of change in the loads of some of the plurality of virtual resources is in the rising direction during the same period of time, the direction of change in the loads of some of the plurality of virtual resources is in the falling direction during the same period of time, and the occurrence of either exceeds a predetermined reference. 13. The system of claim 8, wherein the change status information indicates values after a change in each of the loads of a plurality of virtual resources, wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources by referencing the occurrences between values after the change in each of the loads of the plurality of virtual resources indicated in the change status information which are greater than a predetermined typical value and are less than a predetermined typical value. 14. The system of claim 13, wherein determining whether a high correlation exists between the values after the change comprises:
determining that a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources when some values after the change in each of the loads of the plurality of virtual resources are greater than a predetermined typical value during the same period of time, some values after the change in each of the loads of the plurality of virtual resources are less than a predetermined typical value during the same period of time, and the occurrences of either exceed a predetermined reference. 15. A computer program product for supporting combination decisions of virtual resources to be located in a single physical resource, the computer program product comprising a computer readable storage medium having program code stored thereon, the program code executable by a processor to perform a method comprising:
acquiring, using the processor, change status information indicating a change in each of a plurality of virtual resources designated as virtual resource candidates to be located in a single physical resource; determining, using the processor, whether a high correlation exists between changes in the loads of each of the plurality of virtual resources indicated in the change status information by performing statistical processing on the change status information; and determining, using the processor, a suitability of locating the plurality of virtual resources in a single physical resource according to the determination of whether a high correlation exists. 16. The computer program product of claim 15, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is suitable responsive to a determination that a high correlation does not exist between changes in the loads of each of the plurality of virtual resources. 17. The computer program product of claim 15, wherein determining the suitability of locating a plurality of virtual resources in a single physical resource comprises:
determining that locating the plurality of virtual resources in the single physical resource is unsuitable responsive to a determination that a high correlation does exist between changes in the loads of each of the plurality of virtual resources. 18. The computer program product of claim 15, wherein the change status information indicates a direction of change in each load of the plurality of virtual resources, and wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources by referencing the occurrences of a rise or a fall in the directions of change among each load of the plurality of virtual resources indicated in the change status information. 19. The computer program product of claim 18, wherein determining whether a high correlation exists between the directions of change in each load of the plurality of virtual resources comprises:
determining that a high correlation exists between the directions of change in each load of a plurality of virtual resources when the direction of change in the loads of some of the plurality of virtual resources is in the rising direction during the same period of time, the direction of change in the loads of some of the plurality of virtual resources is in the falling direction during the same period of time, and the occurrence of either exceeds a predetermined reference. 20. The computer program product of claim 15, wherein the change status information indicates values after a change in each of the loads of a plurality of virtual resources, wherein determining whether a high correlation exists comprises:
determining whether a high correlation exists between the values after the change in each of the loads of the plurality of virtual resources by referencing the occurrences between values after the change in each of the loads of the plurality of virtual resources indicated in the change status information which are greater than a predetermined typical value and are less than a predetermined typical value. | 2,400 |
7,579 | 7,579 | 11,503,825 | 2,414 | This invention discloses a system of remote user authentication to an authentication server, with a telephone interface to the authentication server that only receives routed calls that have originated from a cell phone in a cellular network and a call handling logic function which routes only those calls to the authentication server over the interface that have originated from a cell phone with a subscriber identity module (SIM) card and for which the cellular company maintains individual subscriber identification data. In a different embodiment a remote user authentication system has different interfaces and different authentication processes that correspond with a telephone network interface and with a cellular telephone company network interface, enabling the authentication system to have different methods of authentication depending upon which interface a remote user connection authentication request originated from. The method uses the SIM card of a cell phone as a “something you have” factor as part of a two-factor authentication mechanism to an authentication server. The telephone network uses a call back feature. | 1. A system of remote user authentication to an authentication server, comprising:
a telephone interface to the authentication server that only receives routed calls that have originated from a cell phone in a cellular network. 2. The system as in claim 1, the cellular network comprising:
a call handling logic function which routes only those calls to the authentication server over the interface that have originated from a cell phone with a subscriber identity module (SIM) card and for which the cellular company maintains individual subscriber identification data. 3. The system as in claim 2, the cellular network comprising:
the call handling logic function does not route those calls to the authentication server over the interface that have originated from a cell phone with the SIM card, but for which the cellular company does not maintain individual subscriber identification data, such as for prepaid phones and phones that are owned by business entities. 4. A method of authentication to a service system on a global computer network comprising the step of:
adapting an authentication server to receive only those incoming telephone calls from a service customer that have originated, by the customer, on the cellular network. 5. The method as in claim 4, comprising the steps of:
matching the caller id of the incoming call in a database in the authentication server and annunciating an “unauthorized call” message if not matched, otherwise a greeting message for the service. 6. The method as in claim 4, the adaptation comprising the step of:
interfacing the server to a private line corresponding to a telephone number managed by a cell network for receiving cellular network originated calls. 7. The method as in claim 6, the adaptation comprising the step of:
forwarding, by the cellular network, only those calls that have been verified by the cell service provider as having a customer identity verified account with the cell company. 8. The method as in claim 5, comprising the step of:
verifying the service customer to the authentication server by an entered personal identification number that matches the number stored in the database, for authenticating the service and providing the service by the service system. 9. The method as in claim 8, comprising the steps:
delivering services the service customer is authorized to receive that include from a group of, a banking transaction via the phone, providing an access code to gain entrance to a facility, and providing an access code to gain entry to an automated teller machine. 10. A remote user authentication system comprising:
a. an interface A with a telephone network to an authentication server; b. an interface B with a cellular telephone network to the authentication server; c. the authentication server having different methods of authentication A and B respectively depending upon which interface a remote user connection authentication request originated from. 11. The system as in claim 10, the method B of the system comprising the steps:
a. verifying an incoming caller id for a match in an authentication system database; b. prompting by an interactive voice response system, if caller id is in database, for entry of a PIN, otherwise delivering a message of an unauthorized call; c. verifying the PIN in the database to authenticate the remote user. 12. The system as in claim 10, the method A of the system comprising the steps:
a. prompting by an interactive voice response system for entry of a PIN-1; b. verifying PIN-1 in an authentication database and delivering a “To hang up now” message if matched, otherwise an “unauthorized call” message; c. calling back by the system on a caller id that is present for this PIN-1 in the database immediately after step (b); d. prompting for entry of PIN-2, a secret number, and checking it in the database to authenticate the remote user. 13. The system as in claim 12, the PIN-1 comprising:
a primary caller id of the caller plus a 4-digit number that identifies one of many secondary caller ids for a call back as in step (c) in claim 12. 14. The system as in claim 13, further comprising:
when there are no secondary caller ids, then the last four digits of PIN-1 are secret numbers. 15. The system as in claim 14, further comprising:
when the last four digits of PIN-1 are a secret number, these and PIN-2 may be the same secret number. 16. The system as in claim 10, the method A of the system comprising the step of:
delivering a message “such calls are not accepted, hang up and call on your registered cell phone.” 17. The system as in claim 10, comprising the step of:
delivering services a caller is authorized to receive by a number of means that include from a group of, delivering a temporary password for access to a system, routing the connection to an online bank telephone network for banking transaction via the phone. 18. The system as in claim 10, comprising the step of:
delivering services a caller is authorized to receive by a number of means that include from a group of, providing an access code to gain entrance to a facility, providing an access code to gain entry to an automated teller machine. | This invention discloses a system of remote user authentication to an authentication server, with a telephone interface to the authentication server that only receives routed calls that have originated from a cell phone in a cellular network and a call handling logic function which routes only those calls to the authentication server over the interface that have originated from a cell phone with a subscriber identity module (SIM) card and for which the cellular company maintains individual subscriber identification data. In a different embodiment a remote user authentication system has different interfaces and different authentication processes that correspond with a telephone network interface and with a cellular telephone company network interface, enabling the authentication system to have different methods of authentication depending upon which interface a remote user connection authentication request originated from. The method uses the SIM card of a cell phone as a “something you have” factor as part of a two-factor authentication mechanism to an authentication server. The telephone network uses a call back feature. 1. A system of remote user authentication to an authentication server, comprising:
a telephone interface to the authentication server that only receives routed calls that have originated from a cell phone in a cellular network. 2. The system as in claim 1, the cellular network comprising:
a call handling logic function which routes only those calls to the authentication server over the interface that have originated from a cell phone with a subscriber identity module (SIM) card and for which the cellular company maintains individual subscriber identification data. 3. The system as in claim 2, the cellular network comprising:
the call handling logic function does not route those calls to the authentication server over the interface that have originated from a cell phone with the SIM card, but for which the cellular company does not maintain an individual subscriber identification data such as, for prepaid phones and phones that are owned by business entities. 4. A method of authentication to a service system on a global computer network comprising the step of:
adapting an authentication server to receive only those incoming telephone calls from a service customer that have originated, by the customer, on the cellular network. 5. The method as in claim 4, comprising the steps of:
matching the caller id of the incoming call in a database in the authentication server and annunciating an “unauthorized call” message if not matched, otherwise a greeting message for the service. 6. The method as in claim 4, the adaptation comprising the step of:
interfacing the server to a private line corresponding to a telephone number managed by a cell network for receiving cellular network originated calls. 7. The method as in claim 6, the adaptation comprising the step of:
forwarding, by the cellular network, only those calls that have been verified by the cell service provider as having a customer identity verified account with the cell company. 8. The method as in claim 5, comprising the step of:
verifying the service customer to the authentication server by an entered personal identification number that matches the number stored in the database, for authenticating the service and providing the service by the service system. 9. The method as in claim 8, comprising the steps:
delivering services the service customer is authorized to receive that include from a group of, a banking transaction via the phone, providing an access code to gain entrance to a facility, and providing an access code to gain entry to an automated teller machine. 10. A remote user authentication system comprising:
a. an interface A with a telephone network to an authentication server; b. an interface B with a cellular telephone network to the authentication server; c. the authentication server having different methods of authentication A and B respectively depending upon which interface a remote user connection authentication request originated from. 11. The claim as in 10, the method B of the system comprising the steps:
a. verifying an incoming caller id for a match in an authentication system database; b. prompting by an interactive voice response system, if caller id is in database, for entry of a PIN, otherwise delivering a message of an unauthorized call; c. verifying the PIN in the database to authenticate the remote user. 12. The claim as in 10, the method A of the system comprising the steps:
a. prompting by an interactive voice response system for entry of a PIN-1; b. verifying PIN-1 in an authentication database and delivering a message of “To hang up now”, otherwise a message of “an unauthorized call”; c. calling back by the system on a caller id that is present for this PIN-1 in the database immediately after step (b); d. prompting for entry of PIN-2, a secret number, and checking it in the database to authenticate the remote user. 13. The system as in claim 12, the PIN-1 comprising:
a primary caller id of the caller plus a 4 digit number that identifies one of many secondary caller ids for a call back as in step (c) in claim 12. 14. The system as in claim 13, further comprising:
when there are no secondary caller ids, then the last four digits of PIN-1 are secret numbers. 15. The system as in claim 14, further comprising:
when the last four digits of PIN-1 are a secret number, then these and PIN-2 may be the same secret number. 16. The claim as in 10, the method A of the system comprising the step of:
delivering a message “such calls are not accepted, hang up and call on your registered cell phone”. 17. The system as in claim 10, comprising the step of:
delivering services a caller is authorized to receive by a number of means that include from a group of, delivering a temporary password for access to a system, routing the connection to an online bank telephone network for banking transaction via the phone. 18. The system as in claim 10, comprising the step of:
delivering services a caller is authorized to receive by a number of means that include from a group of, providing an access code to gain entrance to a facility, providing an access code to gain entry to an automated teller machine. | 2,400 |
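The two authentication methods claimed above can be sketched in code. This is a minimal illustration only: the database layout, number formats, and function names are assumptions, and the actual call-back and IVR steps are reduced to comments. Method B matches the cellular caller id and a PIN (claim 11); method A parses PIN-1 as a primary caller id plus a four-digit suffix selecting a call-back number, then verifies a secret PIN-2 after call-back (claims 12-14).

```python
# Illustrative subscriber database: primary caller id -> record.
# (Hypothetical data; the patent does not specify a schema.)
DATABASE = {
    "15551234567": {"pin": "4321", "pin2": "9876",
                    "callbacks": {"0001": "15557654321"}},
}

def authenticate_method_b(caller_id: str, entered_pin: str) -> str:
    """Method B (cellular interface): match caller id, then verify the PIN."""
    record = DATABASE.get(caller_id)
    if record is None:
        return "unauthorized call"      # claim 11(b): caller id not in database
    if entered_pin != record["pin"]:
        return "unauthorized call"
    return "authenticated"

def authenticate_method_a(pin1: str, pin2: str) -> str:
    """Method A (telephone interface): PIN-1 selects a call-back number,
    PIN-2 (a secret number) authenticates after the call-back."""
    caller_id, suffix = pin1[:-4], pin1[-4:]  # PIN-1 = primary caller id + 4 digits
    record = DATABASE.get(caller_id)
    if record is None:
        return "unauthorized call"
    callback_number = record["callbacks"].get(suffix)
    if callback_number is None:
        # claim 14: with no secondary caller ids, the last four digits are secret
        if suffix != record["pin"]:
            return "unauthorized call"
        callback_number = caller_id
    # (a real system would now hang up and call back on callback_number)
    if pin2 != record["pin2"]:
        return "unauthorized call"
    return "authenticated"
```

For example, `authenticate_method_a("155512345670001", "9876")` succeeds because the suffix `0001` selects a registered secondary call-back number and PIN-2 matches.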
7,580 | 7,580 | 14,846,125 | 2,478 | In accordance with the present invention, in a case of using persistent scheduling, when detecting a transition from a talk state to a silent state, resources to be allocated during a silent state are allocated to a channel exclusive for silent period, and the resources which have been allocated to a mobile terminal during the talk spurt are released. Therefore, the useless allocation of resources can be reduced, and the throughput of the system can be improved. | 1-3. (canceled) 4. A mobile communication system for performing radio communication between a mobile terminal and a base station by using dynamic scheduling of performing scheduling dynamically and persistent scheduling of performing scheduling persistently, wherein
a radio resource allocated for radio communication performed by using the dynamic scheduling is notified from said base station to said mobile terminal by using a control signal of a lower layer, and a radio resource allocated for radio communication performed by using the persistent scheduling is notified from said base station to said mobile terminal by using a control signal of an upper layer. 5. The mobile communication system according to claim 4, wherein occupation of the radio resource allocated for radio communication performed by using the persistent scheduling is notified from said base station to said mobile terminal by using a control signal of the lower layer. 6. The mobile communication system according to claim 4, wherein release of the radio resource allocated in the persistent scheduling is notified from said base station to said mobile terminal by using a control signal of the lower layer. 7. The mobile communication system according to claim 4, wherein the radio resource allocated for radio communication performed by using the persistent scheduling is allocated for downlink communication. 8. The mobile communication system according to claim 4, wherein the radio resource allocated for radio communication performed by using the persistent scheduling is allocated for uplink communication. 9. A base station for performing radio communication with a mobile terminal by using dynamic scheduling of performing scheduling dynamically and persistent scheduling of performing scheduling persistently, wherein
said base station notifies said mobile terminal of a radio resource allocated for radio communication performed by using the dynamic scheduling by using a control signal of a lower layer, and notifies said mobile terminal of a radio resource allocated for radio communication performed by using the persistent scheduling by using a control signal of an upper layer. 10. A mobile terminal for performing radio communication with a base station by using dynamic scheduling of performing scheduling dynamically and persistent scheduling of performing scheduling persistently, wherein
said mobile terminal receives a radio resource allocated for radio communication performed by using the dynamic scheduling and transmitted from said base station by using a control signal of a lower layer, and receives a radio resource allocated for radio communication performed by using the persistent scheduling and transmitted from said base station by using a control signal of an upper layer. | In accordance with the present invention, in a case of using persistent scheduling, when detecting a transition from a talk state to a silent state, resources to be allocated during a silent state are allocated to a channel exclusive for silent period, and the resources which have been allocated to a mobile terminal during the talk spurt are released. Therefore, the useless allocation of resources can be reduced, and the throughput of the system can be improved.1-3. (canceled) 4. A mobile communication system for performing radio communication between a mobile terminal and a base station by using dynamic scheduling of performing scheduling dynamically and persistent scheduling of performing scheduling persistently, wherein
a radio resource allocated for radio communication performed by using the dynamic scheduling is notified from said base station to said mobile terminal by using a control signal of a lower layer, and a radio resource allocated for radio communication performed by using the persistent scheduling is notified from said base station to said mobile terminal by using a control signal of an upper layer. 5. The mobile communication system according to claim 4, wherein occupation of the radio resource allocated for radio communication performed by using the persistent scheduling is notified from said base station to said mobile terminal by using a control signal of the lower layer. 6. The mobile communication system according to claim 4, wherein release of the radio resource allocated in the persistent scheduling is notified from said base station to said mobile terminal by using a control signal of the lower layer. 7. The mobile communication system according to claim 4, wherein the radio resource allocated for radio communication performed by using the persistent scheduling is allocated for downlink communication. 8. The mobile communication system according to claim 4, wherein the radio resource allocated for radio communication performed by using the persistent scheduling is allocated for uplink communication. 9. A base station for performing radio communication with a mobile terminal by using dynamic scheduling of performing scheduling dynamically and persistent scheduling of performing scheduling persistently, wherein
said base station notifies said mobile terminal of a radio resource allocated for radio communication performed by using the dynamic scheduling by using a control signal of a lower layer, and notifies said mobile terminal of a radio resource allocated for radio communication performed by using the persistent scheduling by using a control signal of an upper layer. 10. A mobile terminal for performing radio communication with a base station by using dynamic scheduling of performing scheduling dynamically and persistent scheduling of performing scheduling persistently, wherein
said mobile terminal receives a radio resource allocated for radio communication performed by using the dynamic scheduling and transmitted from said base station by using a control signal of a lower layer, and receives a radio resource allocated for radio communication performed by using the persistent scheduling and transmitted from said base station by using a control signal of an upper layer. | 2,400 |
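The signalling split in claims 4 through 6 of the record above can be sketched as a simple mapping: dynamic-scheduling grants travel in a lower-layer control signal, persistent-scheduling grants in an upper-layer control signal, and occupation/release of a persistent resource again in lower-layer signals. The event names and the notification dictionary are illustrative assumptions, not fields defined by the patent.

```python
def control_signal_layer(event: str) -> str:
    """Which protocol layer carries the control signal for a scheduling event."""
    layers = {
        "dynamic_allocation": "lower",      # claim 4, dynamic scheduling grant
        "persistent_allocation": "upper",   # claim 4, persistent scheduling grant
        "persistent_occupation": "lower",   # claim 5
        "persistent_release": "lower",      # claim 6
    }
    return layers[event]

def notify(event: str, resource_blocks: list) -> dict:
    """Build the (hypothetical) notification sent from base station to terminal."""
    return {"layer": control_signal_layer(event),
            "event": event,
            "resource_blocks": resource_blocks}
```

The point of the split is that persistent grants change rarely, so they can tolerate slower upper-layer signalling, while per-transmission dynamic grants need the fast lower-layer channel.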
7,581 | 7,581 | 14,922,714 | 2,487 | Disclosed is a 3D video motion estimating apparatus and method. The 3D video motion estimating apparatus may enable a motion vector of a color image and a motion vector of a depth image to refer to each other, thereby increasing a compression rate. | 1. An apparatus of estimating a motion of a three dimensional (3D) video, comprising:
a receiver configured to receive a color image macroblock and a depth image macroblock; and a determiner configured to determine a motion vector of the color image macroblock and determine a motion of the depth image macroblock based on the determined motion vector of the color image macroblock, wherein the determined motion vector of the color image macroblock is used as a motion vector for the depth image macroblock. 2. The apparatus of claim 1, wherein the determiner determines the motion vector of the color image macroblock and then determines the motion of the depth image macroblock based on the determined motion vector of the color image macroblock. 3. The apparatus of claim 1, wherein a predictive motion vector is determined based on a median of vertical directions of motion vectors of blocks adjacent to a current input block in the color image macroblock and a median of horizontal directions of motion vectors of blocks adjacent to the current input block in the color image macroblock, and
wherein the determiner determines the motion vector of the color image macroblock based on the predictive motion vector. 4. The apparatus of claim 3, wherein the determiner estimates a motion from a color image based on the predictive motion vector to estimate a final motion vector of the color image macroblock. 5. The apparatus of claim 3, wherein the blocks adjacent to the current input block comprise a left block, an upper block, and an upper-right block of the current input block. 6. A non-transitory computer readable medium storing computer readable instructions that control at least one processor to implement operations comprising:
receiving a color image macroblock and a depth image macroblock; and determining a motion vector of the color image macroblock and determining a motion of the depth image macroblock based on the determined motion vector of the color image macroblock, wherein the determined motion vector of the color image macroblock is used as a motion vector for the depth image macroblock. 7. The non-transitory computer readable medium of claim 6, wherein the instructions for determining the motion vector of the color image macroblock and determining the motion of the depth image macroblock comprise instructions for:
determining the motion vector of the color image macroblock and then determining the motion of the depth image macroblock based on the determined motion vector of the color image macroblock. | Disclosed is a 3D video motion estimating apparatus and method. The 3D video motion estimating apparatus may enable a motion vector of a color image and a motion vector of a depth image to refer to each other, thereby increasing a compression rate.1. An apparatus of estimating a motion of a three dimensional (3D) video, comprising:
a receiver configured to receive a color image macroblock and a depth image macroblock; and a determiner configured to determine a motion vector of the color image macroblock and determine a motion of the depth image macroblock based on the determined motion vector of the color image macroblock, wherein the determined motion vector of the color image macroblock is used as a motion vector for the depth image macroblock. 2. The apparatus of claim 1, wherein the determiner determines the motion vector of the color image macroblock and then determines the motion of the depth image macroblock based on the determined motion vector of the color image macroblock. 3. The apparatus of claim 1, wherein a predictive motion vector is determined based on a median of vertical directions of motion vectors of blocks adjacent to a current input block in the color image macroblock and a median of horizontal directions of motion vectors of blocks adjacent to the current input block in the color image macroblock, and
wherein the determiner determines the motion vector of the color image macroblock based on the predictive motion vector. 4. The apparatus of claim 3, wherein the determiner estimates a motion from a color image based on the predictive motion vector to estimate a final motion vector of the color image macroblock. 5. The apparatus of claim 3, wherein the blocks adjacent to the current input block comprise a left block, an upper block, and an upper-right block of the current input block. 6. A non-transitory computer readable medium storing computer readable instructions that control at least one processor to implement operations comprising:
receiving a color image macroblock and a depth image macroblock; and determining a motion vector of the color image macroblock and determining a motion of the depth image macroblock based on the determined motion vector of the color image macroblock, wherein the determined motion vector of the color image macroblock is used as a motion vector for the depth image macroblock. 7. The non-transitory computer readable medium of claim 6, wherein the instructions for determining the motion vector of the color image macroblock and determining the motion of the depth image macroblock comprise instructions for:
determining the motion vector of the color image macroblock and then determining the motion of the depth image macroblock based on the determined motion vector of the color image macroblock. | 2,400 |
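The predictor described in claims 3 and 5 of the record above is a per-component median over the motion vectors of the left, upper, and upper-right neighbours, and claim 1 reuses the colour macroblock's motion vector directly for the co-located depth macroblock. A minimal sketch, with hypothetical function names and motion vectors represented as (x, y) tuples:

```python
from statistics import median

def predictive_mv(left, upper, upper_right):
    """Per-component median of the neighbour motion vectors (claims 3 and 5)."""
    xs = [left[0], upper[0], upper_right[0]]   # horizontal components
    ys = [left[1], upper[1], upper_right[1]]   # vertical components
    return (median(xs), median(ys))

def depth_mv_from_color(color_mv):
    """Claim 1: the colour macroblock's motion vector is used directly
    as the motion vector for the co-located depth macroblock."""
    return color_mv
```

For example, neighbours (2, 0), (4, 6), (3, 2) give the predictor (3, 2): the median is taken independently in each direction, so the result need not equal any single neighbour's vector.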
7,582 | 7,582 | 14,634,842 | 2,413 | A method for communication includes configuring a router to forward data packets in a network in accordance with MPLS labels appended to the packets. A group of two or more of the interfaces is defined as a multi-path routing group in a forwarding table within the router. A plurality of records are stored in an ILM in the router, corresponding to different, respective label IDs, all pointing to the set of the entries in the forwarding table that belong to the multi-path routing group. Upon receiving in the router an incoming data packet having a label ID corresponding to any given record in the plurality, one of the interfaces in the group is selected, responsively to the given record and to the set of the entries in the forwarding table to which the given record points, for forwarding the incoming data packet without changing the label ID. | 1. A method for communication, comprising:
configuring a router, having multiple interfaces connected to a network, to forward data packets in the network in accordance with Multiprotocol Label Switching (MPLS) labels appended to the data packets; defining a group of two or more of the interfaces as a multi-path routing group, and storing, in a forwarding table within the router, a set of entries consisting of one respective entry for each of the interfaces in the group; storing, in an incoming label map (ILM) within the router, a plurality of records corresponding to different, respective label IDs contained in the MPLS labels, such that all of the records in the plurality point to the set of the entries in the forwarding table that belong to the multi-path routing group; and upon receiving in the router an incoming data packet having a label ID corresponding to any given record in the plurality, selecting, responsively to the given record and to the set of the entries in the forwarding table to which the given record points, one of the interfaces in the group, and forwarding the incoming data packet through the one of the interfaces without changing the label ID. 2. The method according to claim 1, wherein the set of the records is configured as an equal cost multi-path (ECMP) group within the forwarding table. 3. The method according to claim 1, wherein defining the group comprises defining at least first and second, different multi-path routing groups, and wherein storing the plurality of the records comprises defining different, first and second pluralities of the records, pointing to the entries in the forwarding table that belong respectively to the first and second multi-path routing groups. 4. The method according to claim 1, wherein forwarding the incoming data packet comprises updating a time-to-live (TTL) field in the label without changing the label ID. 5. 
The method according to claim 1, wherein forwarding the incoming data packet comprises updating a traffic class field in the label without changing the label ID. 6. The method according to claim 1, wherein the forwarding table comprises a Next Hop Label Forwarding Entry (NHLFE) table. 7. The method according to claim 6, wherein the plurality of the records in the ILM indicate that no labels should be popped from the incoming data packet, and the set of the entries in the NHLFE table indicate that no labels should be pushed onto the incoming data packet. 8. The method according to claim 6, wherein the set of the entries in the NHLFE table indicate that a label at a top of a label stack in the incoming packet should not be swapped. 9. The method according to claim 6, wherein the NHLFE table contains further entries pointed to by one or more further records in the ILM that are outside the plurality and indicate that the labels of the data packets having label IDs corresponding to the further records should be swapped by the router. 10. The method according to claim 1, wherein the plurality of the records in the ILM indicate that an existing label at a top of a label stack in the incoming packet should be swapped with a new label having the same label ID as the existing label. 11. The method according to claim 1, wherein each of the records in the plurality points to a respective entry in a Next Hop Label Forwarding Entry (NHLFE) table, which indicates that an existing label at a top of a label stack in the incoming packet should be swapped with a new label having the same label ID as the existing label, and which points to the set of the entries in the forwarding table that belong to the multi-path routing group. 12. The method according to claim 1, wherein the label ID comprises a label space. 13. Packet routing apparatus, comprising:
multiple interfaces connected to a network; switching logic configured to transfer data packets among the interfaces; and packet processing logic, which is configured to cause the switching logic to forward the data packets in accordance with Multiprotocol Label Switching (MPLS) labels appended to the data packets and comprises:
a forwarding table, in which a group of two or more of the interfaces is defined as a multi-path routing group, and a set of entries is stored consisting of one respective entry for each of the interfaces in the group; and
an incoming label map (ILM), in which a plurality of records are stored corresponding to different, respective label IDs contained in the MPLS labels, such that all of the records in the plurality point to the set of the entries in the forwarding table that belong to the multi-path routing group,
such that upon receiving via one of the interfaces an incoming data packet having a label ID corresponding to any given record in the plurality, the packet processing logic selects, responsively to the given record and to the set of the entries in the forwarding table to which the given record points, one of the interfaces in the group, and causes the switching logic to forward the incoming data packet through the one of the interfaces without changing the label ID. 14. The apparatus according to claim 13, wherein the set of the records is configured as an equal cost multi-path (ECMP) group within the forwarding table. 15. The apparatus according to claim 13, wherein the forwarding table contains at least first and second, different multi-path routing groups, and wherein different, first and second pluralities of the records in the ILM point to the entries in the forwarding table that belong respectively to the first and second multi-path routing groups. 16. The apparatus according to claim 13, wherein the packet processing logic is configured to update a time-to-live (TTL) field in the label without changing the label ID. 17. The apparatus according to claim 13, wherein the packet processing logic is configured to update a traffic class field in the label without changing the label ID. 18. The apparatus according to claim 13, wherein the forwarding table comprises a Next Hop Label Forwarding Entry (NHLFE) table. 19. The apparatus according to claim 18, wherein the plurality of the records in the ILM indicate that no labels should be popped from the incoming data packet, and the set of the entries in the NHLFE table indicate that no labels should be pushed onto the incoming data packet. 20. The apparatus according to claim 18, wherein the set of the entries in the NHLFE table indicate that a label at a top of a label stack in the incoming packet should not be swapped. 21. 
The apparatus according to claim 18, wherein the NHLFE table contains further entries pointed to by one or more further records in the ILM that are outside the plurality and indicate that the labels of the data packets having label IDs corresponding to the further records should be swapped by the router. 22. The apparatus according to claim 13, wherein the plurality of the records in the ILM indicate that an existing label at a top of a label stack in the incoming packet should be swapped with a new label having the same label ID as the existing label. 23. The apparatus according to claim 13, wherein each of the records in the plurality points to a respective entry in a Next Hop Label Forwarding Entry (NHLFE) table, which indicates that an existing label at a top of a label stack in the incoming packet should be swapped with a new label having the same label ID as the existing label, and which points to the set of the entries in the forwarding table that belong to the multi-path routing group. 24. The apparatus according to claim 13, wherein the label ID comprises a label space. 25. A method for communication, comprising:
configuring a router, having multiple interfaces connected to a network, to forward data packets in the network using Multiprotocol Label Switching (MPLS) labels appended to the data packets; defining a group of two or more of the interfaces as a multi-path routing group, and storing, in a forwarding table within the router, a set of entries consisting of one respective entry for each of the interfaces in the group; upon receiving incoming data packets in the router:
looking up in the router respective label IDs that are to be associated with the data packets to be forwarded from the router through the network; and
mapping the data packets to respective egress interfaces of the router, such that at least first and second data packets having different, respective first and second label IDs are mapped to the same multi-path routing group; and
forwarding the data packets through the respective egress interfaces to which the data packets are mapped. 26. The method according to claim 25, wherein looking up the respective label IDs comprises reading and applying at least one label ID of an incoming data packet as a key in an incoming label map (ILM) within the router. 27. The method according to claim 26, wherein reading and applying the at least one label ID comprises reading and looking up in the ILM two or more label IDs contained in the incoming data packet. 28. The method according to claim 25, wherein looking up the respective label IDs comprises reading and applying one or more fields in a header of an incoming data packet as a key in an incoming label map (ILM) within the router. 29. The method according to claim 25, wherein looking up the respective label IDs comprises reading and applying a traffic class field from a label of an incoming data packet as a key in an incoming label map (ILM) within the router. 30. The method according to claim 25, wherein forwarding the incoming data packet comprises updating at least one of a time-to-live (TTL) field and a traffic class field in the incoming data packets. 31. The method according to claim 25, wherein the forwarding table comprises a Next Hop Label Forwarding Entry (NHLFE) table. | A method for communication includes configuring a router to forward data packets in a network in accordance with MPLS labels appended to the packets. A group of two or more of the interfaces is defined as a multi-path routing group in a forwarding table within the router. A plurality of records are stored in an ILM in the router, corresponding to different, respective label IDs, all pointing to the set of the entries in the forwarding table that belong to the multi-path routing group.
Upon receiving in the router an incoming data packet having a label ID corresponding to any given record in the plurality, one of the interfaces in the group is selected, responsively to the given record and to the set of the entries in the forwarding table to which the given record points, for forwarding the incoming data packet without changing the label ID.1. A method for communication, comprising:
configuring a router, having multiple interfaces connected to a network, to forward data packets in the network in accordance with Multiprotocol Label Switching (MPLS) labels appended to the data packets; defining a group of two or more of the interfaces as a multi-path routing group, and storing, in a forwarding table within the router, a set of entries consisting of one respective entry for each of the interfaces in the group; storing, in an incoming label map (ILM) within the router, a plurality of records corresponding to different, respective label IDs contained in the MPLS labels, such that all of the records in the plurality point to the set of the entries in the forwarding table that belong to the multi-path routing group; and upon receiving in the router an incoming data packet having a label ID corresponding to any given record in the plurality, selecting, responsively to the given record and to the set of the entries in the forwarding table to which the given record points, one of the interfaces in the group, and forwarding the incoming data packet through the one of the interfaces without changing the label ID. 2. The method according to claim 1, wherein the set of the records is configured as an equal cost multi-path (ECMP) group within the forwarding table. 3. The method according to claim 1, wherein defining the group comprises defining at least first and second, different multi-path routing groups, and wherein storing the plurality of the records comprises defining different, first and second pluralities of the records, pointing to the entries in the forwarding table that belong respectively to the first and second multi-path routing groups. 4. The method according to claim 1, wherein forwarding the incoming data packet comprises updating a time-to-live (TTL) field in the label without changing the label ID. 5. 
The method according to claim 1, wherein forwarding the incoming data packet comprises updating a traffic class field in the label without changing the label ID. 6. The method according to claim 1, wherein the forwarding table comprises a Next Hop Label Forwarding Entry (NHLFE) table. 7. The method according to claim 6, wherein the plurality of the records in the ILM indicate that no labels should be popped from the incoming data packet, and the set of the entries in the NHLFE table indicate that no labels should be pushed onto the incoming data packet. 8. The method according to claim 6, wherein the set of the entries in the NHLFE table indicate that a label at a top of a label stack in the incoming packet should not be swapped. 9. The method according to claim 6, wherein the NHLFE table contains further entries pointed to by one or more further records in the ILM that are outside the plurality and indicate that the labels of the data packets having label IDs corresponding to the further records should be swapped by the router. 10. The method according to claim 1, wherein the plurality of the records in the ILM indicate that an existing label at a top of a label stack in the incoming packet should be swapped with a new label having the same label ID as the existing label. 11. The method according to claim 1, wherein each of the records in the plurality points to a respective entry in a Next Hop Label Forwarding Entry (NHLFE) table, which indicates that an existing label at a top of a label stack in the incoming packet should be swapped with a new label having the same label ID as the existing label, and which points to the set of the entries in the forwarding table that belong to the multi-path routing group. 12. The method according to claim 1, wherein the label ID comprises a label space. 13. Packet routing apparatus, comprising:
multiple interfaces connected to a network; switching logic configured to transfer data packets among the interfaces; and packet processing logic, which is configured to cause the switching logic to forward the data packets in accordance with Multiprotocol Label Switching (MPLS) labels appended to the data packets and comprises:
a forwarding table, in which a group of two or more of the interfaces is defined as a multi-path routing group, and a set of entries is stored consisting of one respective entry for each of the interfaces in the group; and
an incoming label map (ILM), in which a plurality of records are stored corresponding to different, respective label IDs contained in the MPLS labels, such that all of the records in the plurality point to the set of the entries in the forwarding table that belong to the multi-path routing group,
such that upon receiving via one of the interfaces an incoming data packet having a label ID corresponding to any given record in the plurality, the packet processing logic selects, responsively to the given record and to the set of the entries in the forwarding table to which the given record points, one of the interfaces in the group, and causes the switching logic to forward the incoming data packet through the one of the interfaces without changing the label ID. 14. The apparatus according to claim 13, wherein the set of the records is configured as an equal cost multi-path (ECMP) group within the forwarding table. 15. The apparatus according to claim 13, wherein the forwarding table contains at least first and second, different multi-path routing groups, and wherein different, first and second pluralities of the records in the ILM point to the entries in the forwarding table that belong respectively to the first and second multi-path routing groups. 16. The apparatus according to claim 13, wherein the packet processing logic is configured to update a time-to-live (TTL) field in the label without changing the label ID. 17. The apparatus according to claim 13, wherein the packet processing logic is configured to update a traffic class field in the label without changing the label ID. 18. The apparatus according to claim 13, wherein the forwarding table comprises a Next Hop Label Forwarding Entry (NHLFE) table. 19. The apparatus according to claim 18, wherein the plurality of the records in the ILM indicate that no labels should be popped from the incoming data packet, and the set of the entries in the NHLFE table indicate that no labels should be pushed onto the incoming data packet. 20. The apparatus according to claim 18, wherein the set of the entries in the NHLFE table indicate that a label at a top of a label stack in the incoming packet should not be swapped. 21. 
The apparatus according to claim 18, wherein the NHLFE table contains further entries pointed to by one or more further records in the ILM that are outside the plurality and indicate that the labels of the data packets having label IDs corresponding to the further records should be swapped by the router. 22. The apparatus according to claim 13, wherein the plurality of the records in the ILM indicate that an existing label at a top of a label stack in the incoming packet should be swapped with a new label having the same label ID as the existing label. 23. The apparatus according to claim 13, wherein each of the records in the plurality points to a respective entry in a Next Hop Label Forwarding Entry (NHLFE) table, which indicates that an existing label at a top of a label stack in the incoming packet should be swapped with a new label having the same label ID as the existing label, and which points to the set of the entries in the forwarding table that belong to the multi-path routing group. 24. The apparatus according to claim 13, wherein the label ID comprises a label space. 25. A method for communication, comprising:
configuring a router, having multiple interfaces connected to a network, to forward data packets in the network using Multiprotocol Label Switching (MPLS) labels appended to the data packets; defining a group of two or more of the interfaces as a multi-path routing group, and storing, in a forwarding table within the router, a set of entries consisting of one respective entry for each of the interfaces in the group; upon receiving incoming data packets in the router:
looking up in the router respective label IDs that are to be associated with the data packets to be forwarded from the router through the network; and
mapping the data packets to respective egress interfaces of the router, such that at least first and second data packets having different, respective first and second label IDs are mapped to the same multi-path routing group; and
forwarding the data packets through the respective egress interfaces to which the data packets are mapped. 26. The method according to claim 25, wherein looking up the respective label IDs comprises reading and applying at least one label ID of an incoming data packet as a key in an incoming label map (ILM) within the router. 27. The method according to claim 26, wherein reading and applying the at least one label ID comprises reading and looking up in the ILM two or more label IDs contained in the incoming data packet. 28. The method according to claim 25, wherein looking up the respective label IDs comprises reading and applying one or more fields in a header of an incoming data packet as a key in an incoming label map (ILM) within the router. 29. The method according to claim 25, wherein looking up the respective label IDs comprises reading and applying a traffic class field from a label of an incoming data packet as a key in an incoming label map (ILM) within the router. 30. The method according to claim 25, wherein forwarding the incoming data packet comprises updating at least one of a time-to-live (TTL) field and a traffic class field in the incoming data packets. 31. The method according to claim 25, wherein the forwarding table comprises a Next Hop Label Forwarding Entry (NHLFE) table. | 2,400
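The label-preserving multi-path forwarding recited in the claims above (ILM records for several different label IDs all pointing at one multi-path group, with an egress interface chosen per packet and the label ID left unchanged) can be sketched as follows. The table layout, interface names, and CRC-based flow hash are illustrative assumptions, not details recited in the claims.

```python
import zlib

# Forwarding table: one multi-path (ECMP) routing group, stored as one
# entry per egress interface in the group (hypothetical names).
ecmp_group = ["eth0", "eth1", "eth2"]

# Incoming Label Map (ILM): records for DIFFERENT label IDs all point to
# the same set of forwarding-table entries, as the claims describe.
ilm = {100: ecmp_group, 200: ecmp_group, 300: ecmp_group}

def forward(label_id: int, flow_key: bytes) -> tuple:
    """Select one interface from the group the label's ILM record points to.
    The label is never popped, pushed, or swapped to a new label ID."""
    group = ilm[label_id]
    # Deterministic per-flow hash keeps packets of one flow in order.
    idx = zlib.crc32(flow_key) % len(group)
    return group[idx], label_id  # chosen egress interface, unchanged label

iface, out_label = forward(200, b"10.0.0.1->10.0.0.2")
assert out_label == 200 and iface in ecmp_group
```

Because label IDs 100, 200, and 300 all resolve to the same group, extending the multi-path behavior to another label costs only one additional ILM record, which is the point of the arrangement.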
7,583 | 7,583 | 12,811,815 | 2,473 | An approach is provided for padding a protocol data unit. A protocol data unit is generated. A dummy padding sub-header is inserted within a header of the protocol data unit. | 1-33. (canceled) 34. A method comprising:
generating a protocol data unit; and inserting at least one padding sub-header within a header of the protocol data unit, wherein the padding is only within the header. 35. A method according to claim 34, wherein the protocol data unit is a Medium Access Control protocol data unit including a header part and a payload part. 36. A method according to claim 34, wherein a difference in size of a Medium Access Control protocol data unit and size of a Radio Link Control protocol data unit is either 2 bytes or 3 bytes. 37. A method according to claim 34, wherein the padding sub-header includes a reserved logical channel identifier field to indicate padding, and an extension field to specify whether an additional field is present. 38. A method according to claim 34, wherein the padding sub-header is inserted at the beginning of the header part. 39. A method according to claim 34, wherein one padding sub-header or two padding sub-headers are inserted within the header of the protocol data unit for padding of 1 byte or 2 bytes. 40. An apparatus comprising:
at least one processor and at least one memory including software instructions, the at least one memory and the software instructions configured to, working with the at least one processor, cause the apparatus to perform at least the following: generate a protocol data unit, and insert at least one padding sub-header within a header of the protocol data unit, wherein the padding is only within the header. 41. An apparatus according to claim 40, wherein the protocol data unit is a Medium Access Control protocol data unit including a header part and a payload part. 42. An apparatus according to claim 40, wherein a difference in size of a Medium Access Control protocol data unit and size of a Radio Link Control protocol data unit is either 2 bytes or 3 bytes. 43. An apparatus according to claim 40, wherein the padding sub-header includes a reserved logical channel identifier field to indicate padding, and an extension field to specify whether an additional field is present. 44. An apparatus according to claim 40, wherein the padding sub-header is inserted at the beginning of the header part. 45. An apparatus according to claim 40, wherein one padding sub-header or two padding sub-headers are inserted within the header of the protocol data unit for padding of 1 byte or 2 bytes. 46. An apparatus according to claim 40, further comprising:
a transmission module configured to transmit the protocol data unit over a wireless network. 47. An apparatus according to claim 40, wherein the apparatus is a mobile station or a base station. 48. A method comprising:
receiving a protocol data unit that includes at least one padding sub-header within a header of the protocol data unit, wherein the padding is only within the header; and removing the at least one padding sub-header. 49. A method according to claim 48, wherein the padding sub-header includes a reserved logical channel identifier field to indicate padding, and an extension field to specify whether an additional field is present. 50. A method according to claim 48, wherein the padding sub-header is inserted at the beginning of the header part. 51. A method according to claim 48, wherein one padding sub-header or two padding sub-headers are inserted within the header of the protocol data unit for padding of 1 byte or 2 bytes. 52. An apparatus comprising:
at least one processor and at least one memory including software instructions, the at least one memory and the software instructions configured to, working with the at least one processor, cause the apparatus to perform at least the following: receive a protocol data unit that includes at least one padding sub-header within a header of the protocol data unit, wherein the padding is only within the header. 53. An apparatus according to claim 52, wherein the padding sub-header includes, a reserved logical channel identifier field to indicate padding, and an extension field to specify whether an additional field is present. 54. An apparatus according to claim 52, wherein the padding sub-header is inserted at the beginning of the header part. 55. An apparatus according to claim 52, wherein one padding sub-header or two padding sub-headers are inserted within the header of the protocol data unit for padding of 1 byte or 2 bytes. 56. An apparatus according to claim 52, wherein the apparatus is a mobile station or a base station. | An approach is provided for padding a protocol data unit. A protocol data unit is generated. A dummy padding sub-header is inserted within a header of the protocol data unit.1-33. (canceled) 34. A method comprising:
generating a protocol data unit; and inserting at least one padding sub-header within a header of the protocol data unit, wherein the padding is only within the header. 35. A method according to claim 34, wherein the protocol data unit is a Medium Access Control protocol data unit including a header part and a payload part. 36. A method according to claim 34, wherein a difference in size of a Medium Access Control protocol data unit and size of a Radio Link Control protocol data unit is either 2 bytes or 3 bytes. 37. A method according to claim 34, wherein the padding sub-header includes a reserved logical channel identifier field to indicate padding, and an extension field to specify whether an additional field is present. 38. A method according to claim 34, wherein the padding sub-header is inserted at the beginning of the header part. 39. A method according to claim 34, wherein one padding sub-header or two padding sub-headers are inserted within the header of the protocol data unit for padding of 1 byte or 2 bytes. 40. An apparatus comprising:
at least one processor and at least one memory including software instructions, the at least one memory and the software instructions configured to, working with the at least one processor, cause the apparatus to perform at least the following: generate a protocol data unit, and insert at least one padding sub-header within a header of the protocol data unit, wherein the padding is only within the header. 41. An apparatus according to claim 40, wherein the protocol data unit is a Medium Access Control protocol data unit including a header part and a payload part. 42. An apparatus according to claim 40, wherein a difference in size of a Medium Access Control protocol data unit and size of a Radio Link Control protocol data unit is either 2 bytes or 3 bytes. 43. An apparatus according to claim 40, wherein the padding sub-header includes a reserved logical channel identifier field to indicate padding, and an extension field to specify whether an additional field is present. 44. An apparatus according to claim 40, wherein the padding sub-header is inserted at the beginning of the header part. 45. An apparatus according to claim 40, wherein one padding sub-header or two padding sub-headers are inserted within the header of the protocol data unit for padding of 1 byte or 2 bytes. 46. An apparatus according to claim 40, further comprising:
a transmission module configured to transmit the protocol data unit over a wireless network. 47. An apparatus according to claim 40, wherein the apparatus is a mobile station or a base station. 48. A method comprising:
receiving a protocol data unit that includes at least one padding sub-header within a header of the protocol data unit, wherein the padding is only within the header; and removing the at least one padding sub-header. 49. A method according to claim 48, wherein the padding sub-header includes a reserved logical channel identifier field to indicate padding, and an extension field to specify whether an additional field is present. 50. A method according to claim 48, wherein the padding sub-header is inserted at the beginning of the header part. 51. A method according to claim 48, wherein one padding sub-header or two padding sub-headers are inserted within the header of the protocol data unit for padding of 1 byte or 2 bytes. 52. An apparatus comprising:
at least one processor and at least one memory including software instructions, the at least one memory and the software instructions configured to, working with the at least one processor, cause the apparatus to perform at least the following: receive a protocol data unit that includes at least one padding sub-header within a header of the protocol data unit, wherein the padding is only within the header. 53. An apparatus according to claim 52, wherein the padding sub-header includes, a reserved logical channel identifier field to indicate padding, and an extension field to specify whether an additional field is present. 54. An apparatus according to claim 52, wherein the padding sub-header is inserted at the beginning of the header part. 55. An apparatus according to claim 52, wherein one padding sub-header or two padding sub-headers are inserted within the header of the protocol data unit for padding of 1 byte or 2 bytes. 56. An apparatus according to claim 52, wherein the apparatus is a mobile station or a base station. | 2,400 |
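The header-only padding described in claims 34-56 above can be sketched as follows. The one-byte R/R/E/LCID sub-header layout and the reserved LCID value 0x1F follow the LTE MAC convention and are assumptions here, not details recited in the claims.

```python
PADDING_LCID = 0x1F  # assumed reserved LCID value marking a padding sub-header

def padding_subheader(more_follows: bool) -> int:
    """One-byte sub-header: R|R|E|LCID(5 bits). E=1 flags a following sub-header."""
    return ((1 if more_follows else 0) << 5) | PADDING_LCID

def pad_mac_header(header: bytes, pad_bytes: int) -> bytes:
    """Insert one or two padding sub-headers at the beginning of the MAC
    header (claims 38-39), leaving the payload untouched: the padding
    lives only within the header."""
    if pad_bytes not in (1, 2):
        raise ValueError("the claims cover 1- or 2-byte padding only")
    # Padding sub-headers go first, so each sets E=1 (more sub-headers follow).
    subs = bytes(padding_subheader(True) for _ in range(pad_bytes))
    return subs + header

padded = pad_mac_header(b"\x21\x05", pad_bytes=2)
assert len(padded) == 4 and padded[0] & 0x1F == PADDING_LCID
```

Since each padding sub-header is exactly one byte, inserting one or two of them closes a 1- or 2-byte gap between the MAC PDU size and the RLC PDU size without adding any padding bytes to the payload, which is the effect the claims describe.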
7,584 | 7,584 | 15,006,695 | 2,438 | A method and proxy device for detecting cyber threats against cloud-based application are presented. The method includes receiving a request from a client device, the request directed to a cloud-based application computing platform, wherein the client device is associated with a user attempting to access the cloud-based application; determining whether the received request belongs to a current session of the client device accessing the cloud-based application; extracting, from the received request, at least one application-layer parameter of the current session; comparing the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine at least one risk factor; and computing a risk score based on the determined at least one risk factor, wherein the risk score is indicative of a potential cyber threat. | 1. A method for detecting cyber threats against a cloud-based application, comprising:
receiving a request from a client device, the request directed to a cloud-based application computing platform, wherein the client device is associated with a user attempting to access the cloud-based application; determining whether the received request belongs to a current session of the client device accessing the cloud-based application; extracting, from the received request, at least one application-layer parameter of the current session; comparing the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine at least one risk factor; and computing a risk score based on the determined at least one risk factor, wherein the risk score is indicative of a potential cyber threat. 2. The method of claim 1, further comprising:
identifying, based on the at least one extracted application-layer parameter, an identity of the user attempting to access the cloud-based application. 3. The method of claim 2, wherein the application-layer parameters extracted from previous sessions are from previous sessions across a plurality of cloud-based applications accessed by the user. 4. The method of claim 2, further comprising:
identifying at least one of: an organization associated with the user, and a department associated with the user. 5. The method of claim 4, wherein the application-layer parameters gathered from previous sessions are gathered across a plurality of cloud-based applications accessed by a group of users, wherein the group of users belongs to at least one of: the organization associated with the user, and the department associated with the user. 6. The method of claim 1, wherein the current session is a sequence of cloud-based application actions performed by the user during an uninterrupted period of activity by the user. 7. The method of claim 1, wherein comparing the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine the at least one risk factor further comprises:
detecting, based on the comparison, at least one anomalous operation performed by the user; and associating the at least one anomalous operation with the at least one risk factor. 8. The method of claim 7, wherein each of the at least one anomalous operation includes a pattern of anomalous actions performed by each user. 9. The method of claim 1, wherein the at least one risk factor includes any one of: an anomalous location of the user accessing the cloud-based application; an anomalous ISP; an anomalous user-agent installed in a client device of the user accessing the cloud-based application; anomalous actions performed by the user; simultaneous access by the user to the cloud-based application from different locations; a request to perform an administrative action; access to the cloud-based application by the user as an administrator; a current time of the request being over a predefined time period since a last login by the user; a specific number of anomalous actions performed during a predefined time interval; and a use of an anomalous proxy or internet protocol (IP) address to access the cloud-based application. 10. The method of claim 1, wherein computing the risk score further comprises:
assigning a value to each of the at least one determined risk factor, wherein each assigned value is based on the severity of the respective determined risk factor; and computing the risk score as a function of the at least one assigned value. 11. The method of claim 1, further comprising:
comparing the computed risk score to at least one predefined threshold; selecting a mitigation action based on the value of the risk score, when the computed risk score is above any of the at least one predefined threshold; and performing the mitigation action to mitigate the potential cyber threat. 12. The method of claim 1, wherein the potential cyber threat includes at least one of: accessing the cloud-based application using stolen user credentials; a rogue insider leaking data from the cloud-based application; and access to the cloud-based application using common credentials. 13. A computer readable medium having stored thereon instructions for causing one or more processing units to execute the method according to claim 1. 14. A proxy device for detecting cyber threats against a cloud-based application, comprising:
a processing system; and a memory, the memory containing instructions that, when executed by the processing system, configure the proxy device to: receive a request from a client device to a cloud-based application computing platform, wherein the client device is associated with a user attempting to access the cloud-based application; determine whether the received request belongs to a current session of the client device accessing the cloud-based application; extract, from the received request, at least one application-layer parameter of the current session; compare the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine at least one risk factor; and compute a risk score based on the determined at least one risk factor, wherein the risk score is indicative of a potential cyber threat. 15. The proxy device of claim 14, wherein the system is further configured to:
identify, based on the at least one extracted application-layer parameter, an identity of the user attempting to access the cloud-based application. 16. The proxy device of claim 15, wherein the application-layer parameters gathered from previous sessions are gathered across a plurality of cloud-based applications accessed by the user. 17. The proxy device of claim 16, wherein the application-layer parameters gathered from previous sessions are gathered across a plurality of cloud-based applications accessed by a group of users, wherein the group of users belongs to at least one of: the organization associated with the user, and the department associated with the user. 18. The proxy device of claim 14, wherein the current session is a sequence of cloud-based application actions performed by the user during an uninterrupted period of activity by the user. 19. The proxy device of claim 14, wherein comparing the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine the at least one risk factor further comprises:
detecting, based on the comparison, at least one anomalous operation performed by the user; and associating the at least one anomalous operation with the at least one risk factor. 20. The proxy device of claim 19, wherein each of the at least one anomalous operation includes a pattern of anomalous actions performed by each user. 21. The proxy device of claim 14, wherein the at least one risk factor includes any one of: an anomalous location of the user accessing the cloud-based application; an anomalous ISP; an anomalous user-agent installed in a client device of the user accessing the cloud-based application; anomalous actions performed by the user; simultaneous access by the user to the cloud-based application from different locations; a request to perform an administrative action; access to the cloud-based application by the user as an administrator; a current time of the request being over a predefined time period since a last login by the user; a specific number of anomalous actions performed during a predefined time interval; and a use of an anomalous proxy or internet protocol (IP) address to access the cloud-based application. 22. The proxy device of claim 14, wherein the system is further configured to:
assign a value to each of the at least one determined risk factor, wherein each assigned value is based on the severity of the respective determined risk factor; and compute the risk score as a function of the at least one assigned value. 23. The proxy device of claim 14, wherein the system is further configured to:
compare the computed risk score to at least one predefined threshold; select a mitigation action based on the value of the risk score, when the computed risk score is above any of the at least one predefined threshold; and perform the mitigation action to mitigate the potential cyber threat. 24. The proxy device of claim 14, wherein the potential cyber threat includes at least one of: accessing the cloud-based application using stolen user credentials; a rogue insider leaking data from the cloud-based application; and access to the cloud-based application using common credentials. 25. A cloud computing platform, comprising:
at least one server configured to host at least one cloud-based application; and a device communicatively connected to the at least one server, wherein the device includes a processor; and a memory, the memory containing instructions that, when executed by the processor, configure the device to detect cyber threats against a cloud-based application, wherein the device is further configured to:
receive a request from a client device to a cloud-based application computing platform, wherein the client device is associated with a user attempting to access the cloud-based application;
determine whether the received request belongs to a current session of the at least one client device accessing the cloud-based application;
extract, from the received request, at least one application-layer parameter of the current session;
compare the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine at least one risk factor; and
compute a risk score based on the determined at least one risk factor, wherein the risk score is indicative of a potential cyber threat. | A method and proxy device for detecting cyber threats against cloud-based application are presented. The method includes receiving a request from a client device, the request directed to a cloud-based application computing platform, wherein the client device is associated with a user attempting to access the cloud-based application; determining whether the received request belongs to a current session of the client device accessing the cloud-based application; extracting, from the received request, at least one application-layer parameter of the current session; comparing the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine at least one risk factor; and computing a risk score based on the determined at least one risk factor, wherein the risk score is indicative of a potential cyber threat.1. A method for detecting cyber threats against a cloud-based application, comprising:
receiving a request from a client device, the request directed to a cloud-based application computing platform, wherein the client device is associated with a user attempting to access the cloud-based application; determining whether the received request belongs to a current session of the client device accessing the cloud-based application; extracting, from the received request, at least one application-layer parameter of the current session; comparing the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine at least one risk factor; and computing a risk score based on the determined at least one risk factor, wherein the risk score is indicative of a potential cyber threat. 2. The method of claim 1, further comprising:
identifying, based on the at least one extracted application-layer parameter, an identity of the user attempting to access the cloud-based application. 3. The method of claim 2, wherein the application-layer parameters extracted from previous sessions are from previous sessions across a plurality of cloud-based applications accessed by the user. 4. The method of claim 2, further comprising:
identifying at least one of: an organization associated with the user, and a department associated with the user. 5. The method of claim 4, wherein the application-layer parameters gathered from previous sessions are gathered across a plurality of cloud-based applications accessed by a group of users, wherein the group of users belongs to at least one of: the organization associated with the user, and the department associated with the user. 6. The method of claim 1, wherein the current session is a sequence of cloud-based application actions performed by the user during an uninterrupted period of activity by the user. 7. The method of claim 1, wherein comparing the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine the at least one risk factor further comprises:
detecting, based on the comparison, at least one anomalous operation performed by the user; and associating the at least one anomalous operation with the at least one risk factor. 8. The method of claim 7, wherein each of the at least one anomalous operation includes a pattern of anomalous actions performed by each user. 9. The method of claim 1, wherein the at least one risk factor includes any one of: an anomalous location of the user accessing the cloud-based application; an anomalous ISP; an anomalous user-agent installed in a client device of the user accessing the cloud-based application; anomalous actions performed by the user; simultaneous access by the user to the cloud-based application from different locations; a request to perform an administrative action; access to the cloud-based application by the user as an administrator; a current time of the request being over a predefined time period since a last login by the user; a specific number of anomalous actions performed during a predefined time interval; and a use of an anomalous proxy or internet protocol (IP) address to access the cloud-based application. 10. The method of claim 1, wherein computing the risk score further comprises:
assigning a value to each of the at least one determined risk factor, wherein each assigned value is based on the severity of the respective determined risk factor; and computing the risk score as a function of the at least one assigned value. 11. The method of claim 1, further comprising:
comparing the computed risk score to at least one predefined threshold; selecting a mitigation action based on the value of the risk score, when the computed risk score is above any of the at least one predefined threshold; and performing the mitigation action to mitigate the potential cyber threat. 12. The method of claim 1, wherein the potential cyber threat includes at least one of: accessing the cloud-based application using stolen user credentials; a rogue insider leaking data from the cloud-based application; and access to the cloud-based application using common credentials. 13. A computer readable medium having stored thereon instructions for causing one or more processing units to execute the method according to claim 1. 14. A proxy device for detecting cyber threats against a cloud-based application, comprising:
a processing system; and a memory, the memory containing instructions that, when executed by the processing system, configure the proxy device to: receive a request from a client device to a cloud-based application computing platform, wherein the client device is associated with a user attempting to access the cloud-based application; determine whether the received request belongs to a current session of the client device accessing the cloud-based application; extract, from the received request, at least one application-layer parameter of the current session; compare the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine at least one risk factor; and compute a risk score based on the determined at least one risk factor, wherein the risk score is indicative of a potential cyber threat. 15. The proxy device of claim 14, wherein the system is further configured to:
identify, based on the at least one extracted application-layer parameter, an identity of the user attempting to access the cloud-based application. 16. The proxy device of claim 15, wherein the application-layer parameters gathered from previous sessions are gathered across a plurality of cloud-based applications accessed by the user. 17. The proxy device of claim 16, wherein the application-layer parameters gathered from previous sessions are gathered across a plurality of cloud-based applications accessed by a group of users, wherein the group of users belongs to at least one of: the organization associated with the user, and the department associated with the user. 18. The proxy device of claim 14, wherein the current session is a sequence of cloud-based application actions performed by the user during an uninterrupted period of activity by the user. 19. The proxy device of claim 14, wherein comparing the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine the at least one risk factor further comprises:
detecting, based on the comparison, at least one anomalous operation performed by the user; and associating the at least one anomalous operation with the at least one risk factor. 20. The proxy device of claim 19, wherein each of the at least one anomalous operation includes a pattern of anomalous actions performed by each user. 21. The proxy device of claim 14, wherein the at least one risk factor includes any one of: an anomalous location of the user accessing the cloud-based application; an anomalous ISP; an anomalous user-agent installed in a client device of the user accessing the cloud-based application; anomalous actions performed by the user; simultaneous access by the user to the cloud-based application from different locations; a request to perform an administrative action; access to the cloud-based application by the user as an administrator; a current time of the request being over a predefined time period since a last login by the user; a specific number of anomalous actions performed during a predefined time interval; and a use of an anomalous proxy or internet protocol (IP) address to access the cloud-based application. 22. The proxy device of claim 14, wherein the system is further configured to:
assign a value to each of the at least one determined risk factor, wherein each assigned value is based on the severity of the respective determined risk factor; and compute the risk score as a function of the at least one assigned value. 23. The proxy device of claim 14, wherein the system is further configured to:
compare the computed risk score to at least one predefined threshold; and select a mitigation action based on the value of the risk score, when the computed risk score is above any of the at least one predefined threshold; and perform the mitigation action to mitigate the potential cyber threat. 24. The proxy device of claim 14, wherein the potential cyber threat includes at least one of: accessing the cloud-based application using stolen user credentials; a rogue insider leaking data from the cloud-based application; and access to the cloud-based application using common credentials. 25. A cloud computing platform, comprising:
at least one server configured to host at least one cloud-based application; and a device communicatively connected to the at least one server, wherein the device includes a processor; and a memory, the memory containing instructions that, when executed by the processor, configure the device to detect cyber threats against a cloud-based application, wherein the device is further configured to:
receive a request from a client device to a cloud-based application computing platform, wherein the client device is associated with a user attempting to access the cloud-based application;
determine whether the received request belongs to a current session of the client device accessing the cloud-based application;
extract, from the received request, at least one application-layer parameter of the current session;
compare the at least one extracted application-layer parameter to application-layer parameters extracted from previous sessions to determine at least one risk factor; and
compute a risk score based on the determined at least one risk factor, wherein the risk score is indicative of a potential cyber threat. | 2,400 |
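The risk-scoring steps recited in the claims above (assign severity-based values to detected risk factors, compute the score as a function of those values, and select a mitigation action when the score crosses a predefined threshold) can be sketched in Python. All factor names, severity values, and thresholds below are illustrative assumptions for the example, not values taken from the patent.

```python
# Hypothetical severity values for a few of the risk factors the claims list.
SEVERITY = {
    "anomalous_location": 3,
    "anomalous_isp": 2,
    "stolen_credentials_pattern": 5,
}

# (threshold, mitigation action) pairs, checked from highest to lowest,
# mirroring "comparing the computed risk score to at least one predefined
# threshold" and "selecting a mitigation action based on the value".
THRESHOLDS = [
    (8, "block_session"),
    (4, "require_mfa"),
]

def compute_risk_score(risk_factors):
    """Risk score as a function of the assigned severity values."""
    return sum(SEVERITY.get(f, 1) for f in risk_factors)

def select_mitigation(score):
    """Return a mitigation action when the score exceeds any threshold."""
    for threshold, action in THRESHOLDS:
        if score > threshold:
            return action
    return None  # score below every threshold: no mitigation needed

score = compute_risk_score(["anomalous_location", "stolen_credentials_pattern"])
action = select_mitigation(score)
```

Here a weighted sum stands in for the unspecified "function of the at least one assigned value"; any monotone combination of the severity values would fit the claim language equally well.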
7,585 | 7,585 | 14,741,150 | 2,461 | Virtual machine environments are provided in the switches that form a network, with the virtual machines executing network services previously performed by dedicated appliances. The virtual machines can be executed on a single multi-core processor in combination with normal switch functions or on dedicated services processor boards. Packet processors analyze incoming packets and add a services tag containing services entries to any packets. Each switch reviews the services tag and performs any network services resident on that switch. This allows services to be deployed at the optimal locations in the network. The network services may be deployed by use of drag and drop operations. A topology view is presented, along with network services that may be deployed. Services may be selected and dragged to a single switch or multiple switches. The management tool deploys the network services software, with virtual machines being instantiated on the switches as needed. | 1. A network device comprising:
at least one processor core and associated memory; a memory coupled to said at least one processor core and storing a table containing the service software instance deployment to network devices which execute the service software instances; and management tool software executing on said at least one processor core and stored in said associated memory and coupled to said memory storing the table containing the service software instance deployment to network devices which execute the service software instances, said management tool software causing said at least one processor core to provide information to network devices that add a services tag to packets and to network devices that route packets to service software instances based on a services tag to store network services tables to allow such services tag addition and packet routing. 2. The network device of claim 1,
wherein said management tool software receives an indication that an additional service software instance has been deployed to a network device, wherein said management tool software causes said at least one processor core to update said table containing the service software instance deployment to network devices which execute the service software instances to include the additional service software instance, and wherein said management tool software causes said processor to provide information to network devices that add a services tag to packets and to network devices that route packets to service software instances based on a services tag to store network services tables of the addition of the service software instance to allow such service tag addition and packet routing. 3. A method of operating a network device, the method comprising:
storing in a memory coupled to at least one processor core a table containing the service software instance deployment to network devices which execute the service software instances; and executing management tool software on said at least one processor core to provide information to network devices that add a services tag to packets and to network devices that route packets to service software instances based on a services tag to store network services tables to allow such services tag addition and packet routing. 4. The method of claim 3,
wherein said management tool software receives an indication that an additional service software instance has been deployed to a network device, wherein said management tool software causes said at least one processor core to update said table containing the service software instance deployment to network devices which execute the service software instances to include the additional service software instance, and wherein said management tool software causes said processor to provide information to network devices that add a services tag to packets and to network devices that route packets to service software instances based on a services tag to store network services tables of the addition of the service software instance to allow such service tag addition and packet routing. | Virtual machine environments are provided in the switches that form a network, with the virtual machines executing network services previously performed by dedicated appliances. The virtual machines can be executed on a single multi-core processor in combination with normal switch functions or on dedicated services processor boards. Packet processors analyze incoming packets and add a services tag containing services entries to any packets. Each switch reviews the services tag and performs any network services resident on that switch. This allows services to be deployed at the optimal locations in the network. The network services may be deployed by use of drag and drop operations. A topology view is presented, along with network services that may be deployed. Services may be selected and dragged to a single switch or multiple switches. The management tool deploys the network services software, with virtual machines being instantiated on the switches as needed.1. A network device comprising:
at least one processor core and associated memory; a memory coupled to said at least one processor core and storing a table containing the service software instance deployment to network devices which execute the service software instances; and management tool software executing on said at least one processor core and stored in said associated memory and coupled to said memory storing the table containing the service software instance deployment to network devices which execute the service software instances, said management tool software causing said at least one processor core to provide information to network devices that add a services tag to packets and to network devices that route packets to service software instances based on a services tag to store network services tables to allow such services tag addition and packet routing. 2. The network device of claim 1,
wherein said management tool software receives an indication that an additional service software instance has been deployed to a network device, wherein said management tool software causes said at least one processor core to update said table containing the service software instance deployment to network devices which execute the service software instances to include the additional service software instance, and wherein said management tool software causes said processor to provide information to network devices that add a services tag to packets and to network devices that route packets to service software instances based on a services tag to store network services tables of the addition of the service software instance to allow such service tag addition and packet routing. 3. A method of operating a network device, the method comprising:
storing in a memory coupled to at least one processor core a table containing the service software instance deployment to network devices which execute the service software instances; and executing management tool software on said at least one processor core to provide information to network devices that add a services tag to packets and to network devices that route packets to service software instances based on a services tag to store network services tables to allow such services tag addition and packet routing. 4. The method of claim 3,
wherein said management tool software receives an indication that an additional service software instance has been deployed to a network device, wherein said management tool software causes said at least one processor core to update said table containing the service software instance deployment to network devices which execute the service software instances to include the additional service software instance, and wherein said management tool software causes said processor to provide information to network devices that add a services tag to packets and to network devices that route packets to service software instances based on a services tag to store network services tables of the addition of the service software instance to allow such service tag addition and packet routing. | 2,400 |
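The deployment table and services-tag flow these claims describe can be sketched as follows: a management tool records which network device hosts each service software instance and publishes that table, edge devices attach a services tag to packets, and intermediate devices route on the tag. The class, device names, and packet shape are assumptions for illustration, not the patent's implementation.

```python
class ManagementTool:
    def __init__(self):
        # Table containing the service software instance deployment:
        # service name -> network device executing that instance.
        self.deployment = {}

    def deploy(self, service, device):
        """Record an additional deployed service instance (cf. claims 2 and 4)."""
        self.deployment[service] = device

    def services_table(self):
        """Information provided to tagging and routing devices."""
        return dict(self.deployment)

def tag_packet(packet, required_services):
    """Edge device: add a services tag containing the required service entries."""
    packet["services_tag"] = list(required_services)
    return packet

def route_packet(packet, table):
    """Routing device: forward toward the host of the first tagged service."""
    for service in packet.get("services_tag", []):
        if service in table:
            return table[service]
    return None  # no tagged service deployed: normal forwarding applies

mgmt = ManagementTool()
mgmt.deploy("firewall", "switch-1")
mgmt.deploy("ids", "switch-2")
pkt = tag_packet({"dst": "10.0.0.5"}, ["firewall", "ids"])
next_hop = route_packet(pkt, mgmt.services_table())
```

A real switch would consume tag entries as each service runs; this sketch only shows the table lookup that the claimed "network services tables" enable.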
7,586 | 7,586 | 14,133,230 | 2,486 | In some examples, a method for nondestructive testing of a component may include flashing the component using a flash lamp configured for flash thermography, collecting first image data regarding the component using an infrared camera, flowing a fluid through the component, and collecting second image data regarding the component using the infrared camera. A system for nondestructive testing of a component may include a single inspection station and a flash lamp configured for flash thermography, means for supplying a fluid to the component, and an infrared camera disposed at the inspection station. | 1. A method for nondestructive testing of a component, comprising:
flashing the component using a flash lamp configured for flash thermography; collecting first image data regarding the component using an infrared camera; flowing a fluid through the component; and collecting second image data regarding the component using the infrared camera. 2. The method of claim 1, further comprising:
rotating the component to a rotated position; flashing the component using the flash lamp configured for flash thermography; and collecting first image data regarding the component at the rotated position. 3. The method of claim 1, further comprising:
rotating the component to a rotated position; flowing the fluid through the component; and collecting second image data regarding the component at the rotated position. 4. The method of claim 1, wherein flowing the fluid through the component comprises pulsing the fluid through the component. 5. The method of claim 1, wherein the fluid comprises a cooled fluid. 6. The method of claim 1, further comprising:
rotating the component a first plurality of times to a first plurality of rotated positions; at each respective rotated position of the first plurality of rotated positions, flashing the component using the flash lamp configured for flash thermography; at each respective rotated position of the first plurality of rotated positions, collecting first image data regarding the component at the respective rotated position using the infrared camera; rotating the component a second plurality of times to a second plurality of rotated positions; at each respective rotated position of the second plurality of rotated positions, flowing the fluid through the component; and at each respective rotated position of the second plurality of rotated positions, collecting second image data regarding the component at the respective rotated position using the infrared camera. 7. The method of claim 6, wherein at least one of rotating the component the first plurality of times to a first plurality of rotated positions or rotating the component a second plurality of times to the second plurality of rotated positions comprises using a three-axis stage to rotate the component. 8. The method of claim 1, further comprising:
providing relative rotation between the infrared camera and the component; collecting first image data from each perspective of a plurality of perspectives of the component, wherein each perspective is at a different degree of at least one of relative rotation or translation between the infrared camera and the component; and generating a composite image of the component based on the first image data collected at each perspective of the plurality of perspectives. 9. The method of claim 8, wherein the plurality of perspectives comprises a first plurality of perspectives, further comprising:
collecting second image data from each perspective of a second plurality of perspectives of the component, wherein each perspective of the second plurality of perspectives is at a different degree of at least one of relative rotation or translation between the infrared camera and the component; and generating the composite image of the component based on the first image data collected at the first plurality of perspectives and based on the second image data collected at the second plurality of perspectives. 10. The method of claim 9, wherein the first plurality of perspectives and the second plurality of perspectives include the same perspectives. 11. The method of claim 9, wherein the composite image comprises a two dimensional image spanning 360° of the component in a desired plane. 12. The method of claim 1, wherein the flash lamp is not flashed while collecting second image data regarding the component. 13. A method for nondestructive testing of a component, comprising:
providing a heat pulse to the exterior of the component; collecting first image data regarding the component using an infrared camera; flowing a fluid through internal passages of the component; and collecting second image data regarding the component using the infrared camera. 14. The method of claim 13, wherein providing the heat pulse comprises flashing a flash lamp configured for flash thermography. 15. The method of claim 13, further comprising:
rotating the component to a rotated position; providing the heat pulse to the exterior of the component; and collecting first image data regarding the component at the rotated position using the infrared camera. 16. The method of claim 13, further comprising:
rotating the component to a rotated position; flowing the fluid through internal passages of the component; and collecting second image data regarding the component at the rotated position using the infrared camera. 17. The method of claim 13, wherein flowing the fluid through internal passages of the component comprises pulsing the fluid through internal passages of the component. 18. The method of claim 13, further comprising:
providing relative rotation between the infrared camera and the component; collecting first image data from each perspective of a plurality of perspectives of the component, wherein each perspective is at a different degree of at least one of relative rotation or translation between the infrared camera and the component; and generating a composite image of the component based on the first image data collected at each perspective of the plurality of perspectives. 19. The method of claim 18, wherein the plurality of perspectives comprises a first plurality of perspectives, further comprising:
collecting second image data from each perspective of a second plurality of perspectives of the component, wherein each perspective of the second plurality of perspectives is at a different degree of at least one of relative rotation or translation between the infrared camera and the component; and generating the composite image of the component based on the first image data collected at the first plurality of perspectives and based on the second image data collected at the second plurality of perspectives. 20. A system for nondestructive testing of a component, comprising:
a single inspection station for inspecting the component; means for at least one of translating or rotating the component, wherein the means for at least one of translating or rotating is disposed at the inspection station; a flash lamp configured for flash thermography disposed at the inspection station; means for supplying a fluid to the component at the inspection station for flow thermography; and an infrared camera disposed at the inspection station and configured to capture first image data from the flash thermography and second image data from the flow thermography.
flashing the component using a flash lamp configured for flash thermography; collecting first image data regarding the component using an infrared camera; flowing a fluid through the component; and collecting second image data regarding the component using the infrared camera. 2. The method of claim 1, further comprising:
rotating the component to a rotated position; flashing the component using the flash lamp configured for flash thermography; and collecting first image data regarding the component at the rotated position. 3. The method of claim 1, further comprising:
rotating the component to a rotated position; flowing the fluid through the component; and collecting second image data regarding the component at the rotated position. 4. The method of claim 1, wherein flowing the fluid through the component comprises pulsing the fluid through the component. 5. The method of claim 1, wherein the fluid comprises a cooled fluid. 6. The method of claim 1, further comprising:
rotating the component a first plurality of times to a first plurality of rotated positions; at each respective rotated position of the first plurality of rotated positions, flashing the component using the flash lamp configured for flash thermography; at each respective rotated position of the first plurality of rotated positions, collecting first image data regarding the component at the respective rotated position using the infrared camera; rotating the component a second plurality of times to a second plurality of rotated positions; at each respective rotated position of the second plurality of rotated positions, flowing the fluid through the component; and at each respective rotated position of the second plurality of rotated positions, collecting second image data regarding the component at the respective rotated position using the infrared camera. 7. The method of claim 6, wherein at least one of rotating the component the first plurality of times to a first plurality of rotated positions or rotating the component a second plurality of times to the second plurality of rotated positions comprises using a three-axis stage to rotate the component. 8. The method of claim 1, further comprising:
providing relative rotation between the infrared camera and the component; collecting first image data from each perspective of a plurality of perspectives of the component, wherein each perspective is at a different degree of at least one of relative rotation or translation between the infrared camera and the component; and generating a composite image of the component based on the first image data collected at each perspective of the plurality of perspectives. 9. The method of claim 8, wherein the plurality of perspectives comprises a first plurality of perspectives, further comprising:
collecting second image data from each perspective of a second plurality of perspectives of the component, wherein each perspective of the second plurality of perspectives is at a different degree of at least one of relative rotation or translation between the infrared camera and the component; and generating the composite image of the component based on the first image data collected at the first plurality of perspectives and based on the second image data collected at the second plurality of perspectives. 10. The method of claim 9, wherein the first plurality of perspectives and the second plurality of perspectives include the same perspectives. 11. The method of claim 9, wherein the composite image comprises a two dimensional image spanning 360° of the component in a desired plane. 12. The method of claim 1, wherein the flash lamp is not flashed while collecting second image data regarding the component. 13. A method for nondestructive testing of a component, comprising:
providing a heat pulse to the exterior of the component; collecting first image data regarding the component using an infrared camera; flowing a fluid through internal passages of the component; and collecting second image data regarding the component using the infrared camera. 14. The method of claim 13, wherein providing the heat pulse comprises flashing a flash lamp configured for flash thermography. 15. The method of claim 13, further comprising:
rotating the component to a rotated position; providing the heat pulse to the exterior of the component; and collecting first image data regarding the component at the rotated position using the infrared camera. 16. The method of claim 13, further comprising:
rotating the component to a rotated position; flowing the fluid through internal passages of the component; and collecting second image data regarding the component at the rotated position using the infrared camera. 17. The method of claim 13, wherein flowing the fluid through internal passages of the component comprises pulsing the fluid through internal passages of the component. 18. The method of claim 13, further comprising:
providing relative rotation between the infrared camera and the component; collecting first image data from each perspective of a plurality of perspectives of the component, wherein each perspective is at a different degree of at least one of relative rotation or translation between the infrared camera and the component; and generating a composite image of the component based on the first image data collected at each perspective of the plurality of perspectives. 19. The method of claim 18, wherein the plurality of perspectives comprises a first plurality of perspectives, further comprising:
collecting second image data from each perspective of a second plurality of perspectives of the component, wherein each perspective of the second plurality of perspectives is at a different degree of at least one of relative rotation or translation between the infrared camera and the component; and generating the composite image of the component based on the first image data collected at the first plurality of perspectives and based on the second image data collected at the second plurality of perspectives. 20. A system for nondestructive testing of a component, comprising:
a single inspection station for inspecting the component; means for at least one of translating or rotating the component, wherein the means for at least one of translating or rotating is disposed at the inspection station; a flash lamp configured for flash thermography disposed at the inspection station; means for supplying a fluid to the component at the inspection station for flow thermography; and an infrared camera disposed at the inspection station and configured to capture first image data from the flash thermography and second image data from the flow thermography.
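The inspection sequence these claims recite — rotate to each position, flash and capture first (flash thermography) image data, then flow fluid and capture second (flow thermography) image data, and build a composite from both sets — can be sketched procedurally. The callbacks stand in for real stage, lamp, pump, and camera APIs, which are assumptions here.

```python
def inspect(positions, flash, flow_fluid, capture):
    """Collect flash- and flow-thermography image data at each rotated position."""
    first_data, second_data = [], []
    for angle in positions:                 # rotate component to each position
        flash(angle)                        # flash lamp pulse (flash thermography)
        first_data.append(capture(angle, mode="flash"))
    for angle in positions:
        flow_fluid(angle)                   # pulse fluid through internal passages
        second_data.append(capture(angle, mode="flow"))
    # Composite built from both data sets across all perspectives (cf. claims 8-9).
    return {"composite": first_data + second_data}

log = []
result = inspect(
    positions=[0, 90, 180, 270],
    flash=lambda a: log.append(("flash", a)),
    flow_fluid=lambda a: log.append(("flow", a)),
    capture=lambda a, mode: (mode, a),      # stand-in for an IR camera frame
)
```

Note the sequence keeps the flash lamp idle during the flow passes, matching the claim that the lamp "is not flashed while collecting second image data."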
7,587 | 7,587 | 13,690,174 | 2,432 | Systems and methods for processing protected electronic communications are disclosed. According to one embodiment, a method for processing inbound messages may include (1) receiving a message containing protected content at an electronic device comprising at least one computer processor; (2) using the at least one computer processor, determining a manner in which unprotected content corresponding to the protected content is accessed by a user; and (3) using the at least one computer processor, automatically sending the unprotected content to a first storage location. | 1. A method for processing inbound messages, comprising:
receiving a message containing protected content at an electronic device comprising at least one computer processor; using the at least one computer processor, determining a manner in which unprotected content corresponding to the protected content is accessed by a user; and using the at least one computer processor, automatically sending the unprotected content to a first storage location. 2. The method of claim 1, further comprising:
using the at least one computer processor, automatically sending the manner used to access the unprotected content to a second storage location. 3. The method of claim 2, wherein the first storage location and the second storage location are the same storage location. 4. The method of claim 1, further comprising:
using the at least one computer processor, automatically sending the message containing protected content to the first storage location. 5. The method of claim 1, wherein the manner comprises an application of a key. 6. The method of claim 1, wherein the manner comprises an entry of a password. 7. A method for processing inbound messages, comprising:
receiving an inbound message for a recipient; determining, using at least one computer processor, whether the inbound message contains protected content; using the at least one computer processor, marking the inbound message with an indication that the inbound message contains protected content; and sending the marked received message to an electronic device associated with the recipient. 8. The method of claim 7, wherein the step of marking the inbound message comprises:
using the at least one computer processor, rewriting envelope information for the inbound message. 9. The method of claim 7, wherein the received message comprises at least one of an email, a file attachment, audio, picture, image, video, text, chat, and SMS. 10. The method of claim 7, further comprising:
receiving, from the electronic device associated with the intended recipient, a message containing non-protected content corresponding to the protected content; and providing the message containing non-protected content corresponding to the protected content to a first storage location. 11. The method of claim 7, further comprising:
providing the inbound message containing protected content to a second storage location. 12. The method of claim 7, further comprising:
receiving, from the electronic device associated with the intended recipient, a tool used to access the non-protected content from the protected content; and providing the tool to a third storage location. 13. The method of claim 12, wherein the access tool is one of a password and a key. 14. A method for processing inbound messages, comprising:
receiving an inbound message including protected content; using at least one computer processor, applying a tool to the inbound message to access non-protected content corresponding to the protected content; and providing the non-protected content to a first storage location. 15. The method of claim 14, further comprising:
providing the non-protected content to the intended recipient. 16. The method of claim 14, further comprising:
providing the inbound message to the intended recipient. 17. The method of claim 14, further comprising:
providing the tool to a second storage location. 18. The method of claim 14, further comprising:
providing the received message containing protected content to a third storage location. 19. The method of claim 14, wherein the access tool is one of a password and a key. 20. The method of claim 14, wherein the tool is retrieved from a database comprising a plurality of tools. | Systems and methods for processing protected electronic communications are disclosed. According to one embodiment, a method for processing inbound messages may include (1) receiving a message containing protected content at an electronic device comprising at least one computer processor; (2) using the at least one computer processor, determining a manner in which unprotected content corresponding to the protected content is accessed by a user; and (3) using the at least one computer processor, automatically sending the unprotected content to a first storage location.1. A method for processing inbound messages, comprising:
receiving a message containing protected content at an electronic device comprising at least one computer processor; using the at least one computer processor, determining a manner in which unprotected content corresponding to the protected content is accessed by a user; and using the at least one computer processor, automatically sending the unprotected content to a first storage location. 2. The method of claim 1, further comprising:
using the at least one computer processor, automatically sending the manner used to access the unprotected content to a second storage location. 3. The method of claim 2, wherein the first storage location and the second storage location are the same storage location. 4. The method of claim 1, further comprising:
using the at least one computer processor, automatically sending the message containing protected content to the first storage location. 5. The method of claim 1, wherein the manner comprises an application of a key. 6. The method of claim 1, wherein the manner comprises an entry of a password. 7. A method for processing inbound messages, comprising:
receiving an inbound message for a recipient; determining, using at least one computer processor, whether the inbound message contains protected content; using the at least one computer processor, marking the inbound message with an indication that the inbound message contains protected content; and sending the marked received message to an electronic device associated with the recipient. 8. The method of claim 7, wherein the step of marking the inbound message comprises:
using the at least one computer processor, rewriting envelope information for the inbound message. 9. The method of claim 7, wherein the received message comprises at least one of an email, a file attachment, audio, picture, image, video, text, chat, and SMS. 10. The method of claim 7, further comprising:
receiving, from the electronic device associated with the intended recipient, a message containing non-protected content corresponding to the protected content; and providing the message containing non-protected content corresponding to the protected content to a first storage location. 11. The method of claim 7, further comprising:
providing the inbound message containing protected content to a second storage location. 12. The method of claim 7, further comprising:
receiving, from the electronic device associated with the intended recipient, a tool used to access the non-protected content from the protected content; and providing the tool to a third storage location. 13. The method of claim 12, wherein the access tool is one of a password and a key. 14. A method for processing inbound messages, comprising:
receiving an inbound message including protected content; using at least one computer processor, applying a tool to the inbound message to access non-protected content corresponding to the protected content; and providing the non-protected content to a first storage location. 15. The method of claim 14, further comprising:
providing the non-protected content to the intended recipient. 16. The method of claim 14, further comprising:
providing the inbound message to the intended recipient. 17. The method of claim 14, further comprising:
providing the tool to a second storage location. 18. The method of claim 14, further comprising:
providing the received message containing protected content to a third storage location. 19. The method of claim 14, wherein the access tool is one of a password and a key. 20. The method of claim 14, wherein the tool is retrieved from a database comprising a plurality of tools. | 2,400 |
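The record above claims a pipeline for inbound messages with protected content: detect the protected content, mark the message envelope (claim 8), apply an access tool such as a key or password (claims 13 and 19) to recover the non-protected content, and archive the original message, the recovered content, and the tool in separate storage locations (claims 11, 12, 14, 17). The following is a minimal Python sketch of that flow under stated assumptions; every name here (`process_inbound`, `apply_tool`, the dict-based message and storage shapes) is a hypothetical illustration, not the patented implementation.

```python
# Hypothetical sketch of the claimed inbound-message flow: detect protected
# content, mark the envelope, apply an access tool, archive each artifact.

PROTECTED_MARKER = "X-Contains-Protected-Content"

def is_protected(message: dict) -> bool:
    # Placeholder detection; a real system would inspect MIME structure,
    # encryption headers, attachment types, and so on.
    return message.get("encrypted", False)

def mark_envelope(message: dict) -> dict:
    # Claim 8: "rewriting envelope information for the inbound message".
    marked = dict(message)
    marked.setdefault("headers", {})[PROTECTED_MARKER] = "true"
    return marked

def apply_tool(message: dict, tool: str) -> str:
    # Stand-in for decryption: claim 14 only requires that applying the tool
    # yields the non-protected content corresponding to the protected content.
    if tool != message.get("required_tool"):
        raise PermissionError("tool does not unlock this message")
    return message["protected_body"]

def process_inbound(message: dict, tool: str, storage: dict) -> str:
    if is_protected(message):
        message = mark_envelope(message)
        storage["second"].append(message)   # original protected message
        content = apply_tool(message, tool)
        storage["first"].append(content)    # non-protected content
        storage["third"].append(tool)       # the access tool itself
        return content
    return message["protected_body"]
```

For example, processing `{"encrypted": True, "required_tool": "k3y", "protected_body": "hello"}` with tool `"k3y"` returns the non-protected body and leaves one entry in each storage location.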
7,588 | 7,588 | 14,024,058 | 2,487 | This disclosure describes techniques for improving coding efficiency of motion prediction in multiview and 3D video coding. In one example, a method of decoding video data comprises deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block, converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates, adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode, and decoding the current block using the candidate list. | 1. A method of decoding multi-view video data, the method comprising:
deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates; adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and decoding the current block using the candidate list. 2. The method of claim 1, wherein decoding the current block comprises one of decoding the current block using inter-view motion prediction and decoding the current block using inter-view residual prediction. 3. The method of claim 1, wherein the motion vector prediction mode is one of a skip mode, a merge mode, and an advanced motion vector prediction (AMVP) mode. 4. The method of claim 1, further comprising:
pruning the candidate list based on a comparison of the added one or more of the inter-view predicted motion vector and inter-view disparity motion vector to more than one selected spatial merging candidates. 5. A method of decoding multi-view video data, the method comprising:
deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; using one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values; adding motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates; adding the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and decoding the current block using the candidate list. 6. The method of claim 5, further comprising shifting the one or more disparity vectors by a value from −4 to 4 horizontally, such that the shifted disparity vectors are fixed within a slice. 7. The method of claim 5, further comprising shifting the one or more disparity vectors by a value based on a width of a prediction unit (PU) containing a reference block. 8. The method of claim 5, further comprising shifting the one or more disparity vectors by a value based on a width of the current block. 9. The method of claim 5, wherein decoding the current block comprises one of decoding the current block using inter-view motion prediction and decoding the current block using inter-view residual prediction. 10. The method of claim 5, further comprising:
pruning the candidate list based on a comparison of the one or more added inter-view motion vector candidates to spatial merging candidates. 11. The method of claim 5, further comprising:
pruning the candidate list based on a comparison of the one or more added inter-view motion vector candidates without shifting to inter-view motion vector candidates based on a shifted disparity vector. 12. An apparatus configured to decode multi-view video data, the apparatus comprising:
a video decoder configured to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block;
convert a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates;
add the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and
decode the current block using the candidate list. 13. The apparatus of claim 12, wherein the video decoder decodes the current block by performing one of decoding the current block using inter-view motion prediction and decoding the current block using inter-view residual prediction. 14. The apparatus of claim 12, wherein the motion vector prediction mode is one of a skip mode, a merge mode, and an advanced motion vector prediction (AMVP) mode. 15. The apparatus of claim 12, wherein the video decoder is further configured to:
prune the candidate list based on a comparison of the added one or more of the inter-view predicted motion vector and inter-view disparity motion vector to more than one selected spatial merging candidates. 16. An apparatus configured to decode multi-view video data, the apparatus comprising:
a video decoder configured to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block;
use one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values;
add motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates;
add the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and
decode the current block using the candidate list. 17. The apparatus of claim 16, wherein the video decoder is further configured to shift the one or more disparity vectors by a value from −4 to 4 horizontally, such that the shifted disparity vectors are fixed within a slice. 18. The apparatus of claim 16, wherein the video decoder is further configured to shift the one or more disparity vectors by a value based on a width of a prediction unit (PU) containing a reference block. 19. The apparatus of claim 16, wherein the video decoder is further configured to shift the one or more disparity vectors by a value based on a width of the current block. 20. The apparatus of claim 16, wherein the video decoder decodes the current block by performing one of decoding the current block using inter-view motion prediction and decoding the current block using inter-view residual prediction. 21. The apparatus of claim 16, wherein the video decoder is further configured to:
prune the candidate list based on a comparison of the one or more added inter-view motion vector candidates to spatial merging candidates. 22. The apparatus of claim 16, wherein the video decoder is further configured to:
prune the candidate list based on a comparison of the one or more added inter-view motion vector candidates without shifting to inter-view motion vector candidates based on a shifted disparity vector. 23. An apparatus configured to decode multi-view video data, the apparatus comprising:
means for deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; means for converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates; means for adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and means for decoding the current block using the candidate list. 24. An apparatus configured to decode multi-view video data, the apparatus comprising:
means for deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; means for using one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values; means for adding motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates; means for adding the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and means for decoding the current block using the candidate list. 25. A computer-readable storage medium storing instructions that, when executed, cause one or more processors of a device configured to decode video data to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; convert a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates; add the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and decode the current block using the candidate list. 26. A computer-readable storage medium storing instructions that, when executed, cause one or more processors of a device configured to decode video data to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; use one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values; add motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates; add the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and decode the current block using the candidate list. 27. A method of encoding multi-view video data, the method comprising:
deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates; adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and encoding the current block using the candidate list. 28. The method of claim 27, wherein encoding the current block comprises one of encoding the current block using inter-view motion prediction and encoding the current block using inter-view residual prediction. 29. The method of claim 27, wherein the motion vector prediction mode is one of a skip mode, a merge mode, and an advanced motion vector prediction (AMVP) mode. 30. The method of claim 27, further comprising:
pruning the candidate list based on a comparison of the added one or more of the inter-view predicted motion vector and inter-view disparity motion vector to more than one selected spatial merging candidates. 31. A method of encoding multi-view video data, the method comprising:
deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; using one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values; adding motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates; adding the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and encoding the current block using the candidate list. 32. The method of claim 31, further comprising shifting the one or more disparity vectors by a value from −4 to 4 horizontally, such that the shifted disparity vectors are fixed within a slice. 33. The method of claim 31, further comprising shifting the one or more disparity vectors by a value based on a width of a prediction unit (PU) containing a reference block. 34. The method of claim 31, further comprising shifting the one or more disparity vectors by a value based on a width of the current block. 35. The method of claim 31, wherein encoding the current block comprises one of encoding the current block using inter-view motion prediction and encoding the current block using inter-view residual prediction. 36. The method of claim 31, further comprising:
pruning the candidate list based on a comparison of the one or more added inter-view motion vector candidates to spatial merging candidates. 37. The method of claim 31, further comprising:
pruning the candidate list based on a comparison of the one or more added inter-view motion vector candidates without shifting to inter-view motion vector candidates based on a shifted disparity vector. 38. An apparatus configured to encode multi-view video data, the apparatus comprising:
a video encoder configured to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block;
convert a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates;
add the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and
encode the current block using the candidate list. 39. The apparatus of claim 38, wherein the video encoder encodes the current block by performing one of encoding the current block using inter-view motion prediction and encoding the current block using inter-view residual prediction. 40. The apparatus of claim 38, wherein the motion vector prediction mode is one of a skip mode, a merge mode, and an advanced motion vector prediction (AMVP) mode. 41. The apparatus of claim 38, wherein the video encoder is further configured to:
prune the candidate list based on a comparison of the added one or more of the inter-view predicted motion vector and inter-view disparity motion vector to more than one selected spatial merging candidates. 42. An apparatus configured to encode multi-view video data, the apparatus comprising:
a video encoder configured to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block;
use one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values;
add motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates;
add the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and
encode the current block using the candidate list. 43. The apparatus of claim 42, wherein the video encoder is further configured to shift the one or more disparity vectors by a value from −4 to 4 horizontally, such that the shifted disparity vectors are fixed within a slice. 44. The apparatus of claim 42, wherein the video encoder is further configured to shift the one or more disparity vectors by a value based on a width of a prediction unit (PU) containing a reference block. 45. The apparatus of claim 42, wherein the video encoder is further configured to shift the one or more disparity vectors by a value based on a width of the current block. 46. The apparatus of claim 42, wherein the video encoder encodes the current block by performing one of encoding the current block using inter-view motion prediction and encoding the current block using inter-view residual prediction. 47. The apparatus of claim 42, wherein the video encoder is further configured to:
prune the candidate list based on a comparison of the one or more added inter-view motion vector candidates to spatial merging candidates. 48. The apparatus of claim 42, wherein the video encoder is further configured to:
prune the candidate list based on a comparison of the one or more added inter-view motion vector candidates without shifting to inter-view motion vector candidates based on a shifted disparity vector. | This disclosure describes techniques for improving coding efficiency of motion prediction in multiview and 3D video coding. In one example, a method of decoding video data comprises deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block, converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates, adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode, and decoding the current block using the candidate list. 1. A method of decoding multi-view video data, the method comprising:
deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates; adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and decoding the current block using the candidate list. 2. The method of claim 1, wherein decoding the current block comprises one of decoding the current block using inter-view motion prediction and decoding the current block using inter-view residual prediction. 3. The method of claim 1, wherein the motion vector prediction mode is one of a skip mode, a merge mode, and an advanced motion vector prediction (AMVP) mode. 4. The method of claim 1, further comprising:
pruning the candidate list based on a comparison of the added one or more of the inter-view predicted motion vector and inter-view disparity motion vector to more than one selected spatial merging candidates. 5. A method of decoding multi-view video data, the method comprising:
deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; using one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values; adding motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates; adding the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and decoding the current block using the candidate list. 6. The method of claim 5, further comprising shifting the one or more disparity vectors by a value from −4 to 4 horizontally, such that the shifted disparity vectors are fixed within a slice. 7. The method of claim 5, further comprising shifting the one or more disparity vectors by a value based on a width of a prediction unit (PU) containing a reference block. 8. The method of claim 5, further comprising shifting the one or more disparity vectors by a value based on a width of the current block. 9. The method of claim 5, wherein decoding the current block comprises one of decoding the current block using inter-view motion prediction and decoding the current block using inter-view residual prediction. 10. The method of claim 5, further comprising:
pruning the candidate list based on a comparison of the one or more added inter-view motion vector candidates to spatial merging candidates. 11. The method of claim 5, further comprising:
pruning the candidate list based on a comparison of the one or more added inter-view motion vector candidates without shifting to inter-view motion vector candidates based on a shifted disparity vector. 12. An apparatus configured to decode multi-view video data, the apparatus comprising:
a video decoder configured to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block;
convert a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates;
add the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and
decode the current block using the candidate list. 13. The apparatus of claim 12, wherein the video decoder decodes the current block by performing one of decoding the current block using inter-view motion prediction and decoding the current block using inter-view residual prediction. 14. The apparatus of claim 12, wherein the motion vector prediction mode is one of a skip mode, a merge mode, and an advanced motion vector prediction (AMVP) mode. 15. The apparatus of claim 12, wherein the video decoder is further configured to:
prune the candidate list based on a comparison of the added one or more of the inter-view predicted motion vector and inter-view disparity motion vector to more than one selected spatial merging candidates. 16. An apparatus configured to decode multi-view video data, the apparatus comprising:
a video decoder configured to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block;
use one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values;
add motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates;
add the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and
decode the current block using the candidate list. 17. The apparatus of claim 16, wherein the video decoder is further configured to shift the one or more disparity vectors by a value from −4 to 4 horizontally, such that the shifted disparity vectors are fixed within a slice. 18. The apparatus of claim 16, wherein the video decoder is further configured to shift the one or more disparity vectors by a value based on a width of a prediction unit (PU) containing a reference block. 19. The apparatus of claim 16, wherein the video decoder is further configured to shift the one or more disparity vectors by a value based on a width of the current block. 20. The apparatus of claim 16, wherein the video decoder decodes the current block by performing one of decoding the current block using inter-view motion prediction and decoding the current block using inter-view residual prediction. 21. The apparatus of claim 16, wherein the video decoder is further configured to:
prune the candidate list based on a comparison of the one or more added inter-view motion vector candidates to spatial merging candidates. 22. The apparatus of claim 16, wherein the video decoder is further configured to:
prune the candidate list based on a comparison of the one or more added inter-view motion vector candidates without shifting to inter-view motion vector candidates based on a shifted disparity vector. 23. An apparatus configured to decode multi-view video data, the apparatus comprising:
means for deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; means for converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates; means for adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and means for decoding the current block using the candidate list. 24. An apparatus configured to decode multi-view video data, the apparatus comprising:
means for deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; means for using one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values; means for adding motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates; means for adding the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and means for decoding the current block using the candidate list. 25. A computer-readable storage medium storing instructions that, when executed, cause one or more processors of a device configured to decode video data to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; convert a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates; add the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and decode the current block using the candidate list. 26. A computer-readable storage medium storing instructions that, when executed, cause one or more processors of a device configured to decode video data to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; use one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values; add motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates; add the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and decode the current block using the candidate list. 27. A method of encoding multi-view video data, the method comprising:
deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates; adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and encoding the current block using the candidate list. 28. The method of claim 27, wherein encoding the current block comprises one of encoding the current block using inter-view motion prediction and encoding the current block using inter-view residual prediction. 29. The method of claim 27, wherein the motion vector prediction mode is one of a skip mode, a merge mode, and an advanced motion vector prediction (AMVP) mode. 30. The method of claim 27, further comprising:
pruning the candidate list based on a comparison of the added one or more of the inter-view predicted motion vector and inter-view disparity motion vector to more than one selected spatial merging candidates. 31. A method of encoding multi-view video data, the method comprising:
deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block; using one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values; adding motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates; adding the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and encoding the current block using the candidate list. 32. The method of claim 31, further comprising shifting the one or more disparity vectors by a value from −4 to 4 horizontally, such that the shifted disparity vectors are fixed within a slice. 33. The method of claim 31, further comprising shifting the one or more disparity vectors by a value based on a width of a prediction unit (PU) containing a reference block. 34. The method of claim 31, further comprising shifting the one or more disparity vectors by a value based on a width of the current block. 35. The method of claim 31, wherein encoding the current block comprises one of encoding the current block using inter-view motion prediction and encoding the current block using inter-view residual prediction. 36. The method of claim 31, further comprising:
pruning the candidate list based on a comparison of the one or more added inter-view motion vector candidates to spatial merging candidates. 37. The method of claim 31, further comprising:
pruning the candidate list based on a comparison of the one or more added inter-view motion vector candidates without shifting to inter-view motion vector candidates based on a shifted disparity vector. 38. An apparatus configured to encode multi-view video data, the apparatus comprising:
a video encoder configured to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block;
convert a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates;
add the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode; and
encode the current block using the candidate list. 39. The apparatus of claim 38, wherein the video encoder encodes the current block by performing one of encoding the current block using inter-view motion prediction and encoding the current block using inter-view residual prediction. 40. The apparatus of claim 38, wherein the motion vector prediction mode is one of a skip mode, a merge mode, and an advanced motion vector prediction (AMVP) mode. 41. The apparatus of claim 38, wherein the video encoder is further configured to:
prune the candidate list based on a comparison of the added one or more of the inter-view predicted motion vector and inter-view disparity motion vector to more than one selected spatial merging candidates. 42. An apparatus configured to encode multi-view video data, the apparatus comprising:
a video encoder configured to:
derive one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block;
use one disparity vector to locate one or more reference blocks in a reference view, wherein the one or more reference blocks are located based on shifting a disparity vector by one or more values;
add motion information of a plurality of the reference blocks to a candidate list for a motion vector prediction mode, the added motion information being one or more inter-view motion vector candidates;
add the one or more inter-view disparity motion vector candidates to the candidate list by shifting a disparity vector by one or more values; and
encode the current block using the candidate list. 43. The apparatus of claim 42, wherein the video encoder is further configured to shift the one or more disparity vectors by a value from −4 to 4 horizontally, such that the shifted disparity vectors are fixed within a slice. 44. The apparatus of claim 42, wherein the video encoder is further configured to shift the one or more disparity vectors by a value based on a width of a prediction unit (PU) containing a reference block. 45. The apparatus of claim 42, wherein the video encoder is further configured to shift the one or more disparity vectors by a value based on a width of the current block. 46. The apparatus of claim 42, wherein the video encoder encodes the current block by performing one of encoding the current block using inter-view motion prediction and encoding the current block using inter-view residual prediction. 47. The apparatus of claim 42, wherein the video encoder is further configured to:
prune the candidate list based on a comparison of the one or more added inter-view motion vector candidates to spatial merging candidates. 48. The apparatus of claim 42, wherein the video encoder is further configured to:
prune the candidate list based on a comparison of the one or more added inter-view motion vector candidates without shifting to inter-view motion vector candidates based on a shifted disparity vector. | 2,400 |
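The shifted-disparity-vector candidate construction recited in claims 31-37 above can be sketched in miniature. This is an illustrative model only: the function name, the (x, y) tuple representation of vectors, and the default shift pair are assumptions made for the sketch (claim 32 names a horizontal shift value from -4 to 4, fixed within a slice), not code from any actual codec.

```python
# Hypothetical sketch of the candidate-list construction in claims 31-37.
# Vector representation and names are invented for illustration.

def build_candidate_list(disparity_vector, spatial_candidates, shifts=(-4, 4)):
    """Append candidates derived from horizontally shifted disparity
    vectors, pruning any that duplicate an existing candidate."""
    candidates = list(spatial_candidates)
    for shift in shifts:
        # Claim 32: shift by a value from -4 to 4 horizontally,
        # fixed within a slice; only the x component changes.
        shifted = (disparity_vector[0] + shift, disparity_vector[1])
        # Claims 36-37: prune by comparing the new candidate against
        # candidates already in the list.
        if shifted not in candidates:
            candidates.append(shifted)
    return candidates
```

The pruning step mirrors the comparisons of claims 36 and 37: a shifted candidate that matches an existing spatial merging candidate is simply not added.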
7,589 | 7,589 | 13,216,375 | 2,482 | A broadcasting system includes: a transmitting apparatus transmitting contents, and a receiving apparatus receiving the contents transmitted thereto. The transmitting apparatus includes a trigger information generating section, an encoding section, a multiplexing section, and a sending section. The receiving apparatus includes a receiving section, a multiply separating section, a decoding section, and a control section. | 1. A transmitting apparatus transmitting contents, comprising:
a trigger information generating section configured to generate trigger information on control for an application program which is executed in conjunction with the contents in a receiving apparatus; an encoding section configured to encode the contents to generate an encoded stream; a multiplexing section configured to multiplex the encoded stream to generate a multiplexed stream; and a sending section configured to send the multiplexed stream, wherein the trigger information is sent by carrying out at least one of the encoding with the contents by the encoding section or the multiplexing with the encoded stream by the multiplexing section. 2. The transmitting apparatus according to claim 1, wherein the encoding section encodes the contents to generate an MPEG2 video stream; and
the trigger information is described in use_data in a picture layer within video_sequence of the MPEG2 video stream to be encoded. 3. The transmitting apparatus according to claim 1, wherein the encoding section encodes the contents to generate an H.264 video stream; and
the trigger information is described in Supplementary Enhancement Information (SEI) of the H.264 video stream to be encoded. 4. The transmitting apparatus according to claim 1, wherein the multiplexing section multiplexes the encoded stream to generate a Transport Stream (TS); and
the trigger information is defined in either a program map table (PMT) or a selection information table (SIT) of the TS to be multiplexed into the TS. 5. The transmitting apparatus according to claim 1, wherein the multiplexing section multiplexes the encoded stream to generate an MP4 file in accordance with an ISO base media file format; and
the trigger information is disposed in a Box defined in File, Movie, Trak, Movie Fragment, or Track Fragment within the MP4 file. 6. A transmitting method for use in a transmitting apparatus transmitting contents, comprising:
generating trigger information on control for an application program which is executed in conjunction with the contents in a receiving apparatus by the transmitting apparatus; encoding the contents to generate an encoded stream by the transmitting apparatus; multiplexing the encoded stream to generate a multiplexed stream by the transmitting apparatus; and sending the multiplexed stream by the transmitting apparatus, wherein the trigger information is sent by carrying out at least one of the encoding with the contents in the encoding processing, or the multiplexing with the encoded stream in the multiplexing processing. 7. A program controlling a transmitting apparatus transmitting contents, the program causing a computer of the transmitting apparatus to execute processing, comprising:
generating trigger information on control for an application program which is executed in conjunction with the contents in a receiving apparatus; encoding the contents to generate an encoded stream; multiplexing the encoded stream to generate a multiplexed stream; and sending the multiplexed stream, wherein the trigger information is sent by carrying out at least one of the encoding with the contents in the encoding processing, or the multiplexing with the encoded stream in the multiplexing processing. 8. A receiving apparatus receiving contents transmitted thereto, comprising:
a receiving section configured to receive a multiplexed stream into which the contents are encoded to be multiplexed; a multiply separating section configured to multiply separate the multiplexed stream; a decoding section configured to decode an encoded stream multiply separated from the multiplexed stream to reproduce the contents; and a control section configured to control processing about an application program which is executed in conjunction with the contents in accordance with trigger information acquired, wherein the trigger information is acquired by carrying out at least one of the multiple separation from the multiplexed stream by the multiply separating section, or the decoding from the encoded stream by the decoding section. 9. The receiving apparatus according to claim 8, wherein the decoding section acquires the trigger information from use_data in a picture layer within video_sequence of an MPEG2 video stream multiply separated from the multiplexed stream. 10. The receiving apparatus according to claim 8, wherein the decoding section acquires the trigger information from Supplementary Enhancement Information (SEI) of an H.264 video stream multiply separated from the multiplexed stream. 11. The receiving apparatus according to claim 8, wherein the multiply separating section multiply separates the trigger information from a Transport Stream (TS) in accordance with a definition of either a program map table (PMT) or a selection information table (SIT) of the TS as the multiplexed stream. 12. The receiving apparatus according to claim 8, wherein the multiply separating section multiply separates the trigger information from a Box defined in File, Movie, Trak, Movie Fragment, or Track Fragment of an MP4 file as the multiplexed stream. 13. A receiving method for use in a receiving apparatus receiving contents transmitted thereto, comprising:
receiving a multiplexed stream into which the contents are encoded to be multiplexed by the receiving apparatus; multiply separating the multiplexed stream by the receiving apparatus; decoding an encoded stream multiply separated from the multiplexed stream to reproduce the contents by the receiving apparatus; and controlling processing about an application program which is executed in conjunction with the contents in accordance with trigger information acquired by the receiving apparatus, wherein the trigger information is acquired by carrying out at least one of the multiple separation from the multiplexed stream in the multiply separating processing, or the decoding from the encoded stream in the decoding processing. 14. A program controlling a receiving apparatus receiving contents transmitted thereto, the program causing a computer of the receiving apparatus to execute processing, comprising:
receiving a multiplexed stream into which the contents are encoded to be multiplexed; multiply separating the multiplexed stream; decoding an encoded stream multiply separated from the multiplexed stream to reproduce the contents; and controlling processing about an application program which is executed in conjunction with the contents in accordance with trigger information acquired, wherein the trigger information is acquired by carrying out at least one of the multiple separation from the multiplexed stream in the multiply separating processing, or the decoding from the encoded stream in the decoding processing. 15. A broadcasting system comprising:
a transmitting apparatus transmitting contents; and a receiving apparatus receiving the contents transmitted thereto, wherein the transmitting apparatus includes
a trigger information generating section configured to generate trigger information on control for an application program which is executed in conjunction with the contents in a receiving apparatus,
an encoding section configured to encode the contents to generate an encoded stream,
a multiplexing section configured to multiplex the encoded stream to generate a multiplexed stream, and
a sending section configured to send the multiplexed stream,
in which the trigger information is sent by carrying out at least one of the encoding with the contents by the encoding section or the multiplexing with the encoded stream by the multiplexing section, and
the receiving apparatus includes
a receiving section configured to receive a multiplexed stream,
a multiply separating section configured to multiply separate the multiplexed stream,
a decoding section configured to decode an encoded stream multiply separated from the multiplexed stream to reproduce the contents, and
a control section configured to control processing about an application program which is executed in conjunction with the contents in accordance with trigger information acquired,
in which the trigger information is acquired by carrying out at least one of the multiple separation from the multiplexed stream by the multiply separating section, or the decoding from the encoded stream by the decoding section. | A broadcasting system includes: a transmitting apparatus transmitting contents, and a receiving apparatus receiving the contents transmitted thereto. The transmitting apparatus includes a trigger information generating section, an encoding section, a multiplexing section, and a sending section. The receiving apparatus includes a receiving section, a multiply separating section, a decoding section, and a control section.1. A transmitting apparatus transmitting contents, comprising:
a trigger information generating section configured to generate trigger information on control for an application program which is executed in conjunction with the contents in a receiving apparatus; an encoding section configured to encode the contents to generate an encoded stream; a multiplexing section configured to multiplex the encoded stream to generate a multiplexed stream; and a sending section configured to send the multiplexed stream, wherein the trigger information is sent by carrying out at least one of the encoding with the contents by the encoding section or the multiplexing with the encoded stream by the multiplexing section. 2. The transmitting apparatus according to claim 1, wherein the encoding section encodes the contents to generate an MPEG2 video stream; and
the trigger information is described in use_data in a picture layer within video_sequence of the MPEG2 video stream to be encoded. 3. The transmitting apparatus according to claim 1, wherein the encoding section encodes the contents to generate an H.264 video stream; and
the trigger information is described in Supplementary Enhancement Information (SEI) of the H.264 video stream to be encoded. 4. The transmitting apparatus according to claim 1, wherein the multiplexing section multiplexes the encoded stream to generate a Transport Stream (TS); and
the trigger information is defined in either a program map table (PMT) or a selection information table (SIT) of the TS to be multiplexed into the TS. 5. The transmitting apparatus according to claim 1, wherein the multiplexing section multiplexes the encoded stream to generate an MP4 file in accordance with an ISO base media file format; and
the trigger information is disposed in a Box defined in File, Movie, Trak, Movie Fragment, or Track Fragment within the MP4 file. 6. A transmitting method for use in a transmitting apparatus transmitting contents, comprising:
generating trigger information on control for an application program which is executed in conjunction with the contents in a receiving apparatus by the transmitting apparatus; encoding the contents to generate an encoded stream by the transmitting apparatus; multiplexing the encoded stream to generate a multiplexed stream by the transmitting apparatus; and sending the multiplexed stream by the transmitting apparatus, wherein the trigger information is sent by carrying out at least one of the encoding with the contents in the encoding processing, or the multiplexing with the encoded stream in the multiplexing processing. 7. A program controlling a transmitting apparatus transmitting contents, the program causing a computer of the transmitting apparatus to execute processing, comprising:
generating trigger information on control for an application program which is executed in conjunction with the contents in a receiving apparatus; encoding the contents to generate an encoded stream; multiplexing the encoded stream to generate a multiplexed stream; and sending the multiplexed stream, wherein the trigger information is sent by carrying out at least one of the encoding with the contents in the encoding processing, or the multiplexing with the encoded stream in the multiplexing processing. 8. A receiving apparatus receiving contents transmitted thereto, comprising:
a receiving section configured to receive a multiplexed stream into which the contents are encoded to be multiplexed; a multiply separating section configured to multiply separate the multiplexed stream; a decoding section configured to decode an encoded stream multiply separated from the multiplexed stream to reproduce the contents; and a control section configured to control processing about an application program which is executed in conjunction with the contents in accordance with trigger information acquired, wherein the trigger information is acquired by carrying out at least one of the multiple separation from the multiplexed stream by the multiply separating section, or the decoding from the encoded stream by the decoding section. 9. The receiving apparatus according to claim 8, wherein the decoding section acquires the trigger information from use_data in a picture layer within video_sequence of an MPEG2 video stream multiply separated from the multiplexed stream. 10. The receiving apparatus according to claim 8, wherein the decoding section acquires the trigger information from Supplementary Enhancement Information (SEI) of an H.264 video stream multiply separated from the multiplexed stream. 11. The receiving apparatus according to claim 8, wherein the multiply separating section multiply separates the trigger information from a Transport Stream (TS) in accordance with a definition of either a program map table (PMT) or a selection information table (SIT) of the TS as the multiplexed stream. 12. The receiving apparatus according to claim 8, wherein the multiply separating section multiply separates the trigger information from a Box defined in File, Movie, Trak, Movie Fragment, or Track Fragment of an MP4 file as the multiplexed stream. 13. A receiving method for use in a receiving apparatus receiving contents transmitted thereto, comprising:
receiving a multiplexed stream into which the contents are encoded to be multiplexed by the receiving apparatus; multiply separating the multiplexed stream by the receiving apparatus; decoding an encoded stream multiply separated from the multiplexed stream to reproduce the contents by the receiving apparatus; and controlling processing about an application program which is executed in conjunction with the contents in accordance with trigger information acquired by the receiving apparatus, wherein the trigger information is acquired by carrying out at least one of the multiple separation from the multiplexed stream in the multiply separating processing, or the decoding from the encoded stream in the decoding processing. 14. A program controlling a receiving apparatus receiving contents transmitted thereto, the program causing a computer of the receiving apparatus to execute processing, comprising:
receiving a multiplexed stream into which the contents are encoded to be multiplexed; multiply separating the multiplexed stream; decoding an encoded stream multiply separated from the multiplexed stream to reproduce the contents; and controlling processing about an application program which is executed in conjunction with the contents in accordance with trigger information acquired, wherein the trigger information is acquired by carrying out at least one of the multiple separation from the multiplexed stream in the multiply separating processing, or the decoding from the encoded stream in the decoding processing. 15. A broadcasting system comprising:
a transmitting apparatus transmitting contents; and a receiving apparatus receiving the contents transmitted thereto, wherein the transmitting apparatus includes
a trigger information generating section configured to generate trigger information on control for an application program which is executed in conjunction with the contents in a receiving apparatus,
an encoding section configured to encode the contents to generate an encoded stream,
a multiplexing section configured to multiplex the encoded stream to generate a multiplexed stream, and
a sending section configured to send the multiplexed stream,
in which the trigger information is sent by carrying out at least one of the encoding with the contents by the encoding section or the multiplexing with the encoded stream by the multiplexing section, and
the receiving apparatus includes
a receiving section configured to receive a multiplexed stream,
a multiply separating section configured to multiply separate the multiplexed stream,
a decoding section configured to decode an encoded stream multiply separated from the multiplexed stream to reproduce the contents, and
a control section configured to control processing about an application program which is executed in conjunction with the contents in accordance with trigger information acquired,
in which the trigger information is acquired by carrying out at least one of the multiple separation from the multiplexed stream by the multiply separating section, or the decoding from the encoded stream by the decoding section. | 2,400 |
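The trigger-information path described in claims 1-15 above (trigger data carried inside the multiplexed stream by the transmitter and recovered by the receiver during demultiplexing) can be modeled in a few lines. The ("T"/"M", payload) record layout here is invented for the sketch; the claims place the real data in MPEG2 use_data, H.264 SEI, or a PMT/SIT definition of the transport stream.

```python
# Toy model of the transmit/receive trigger-information path in claims 1-15.
# Record layout is an assumption for illustration, not a broadcast format.

def multiplex(encoded_stream, trigger_info):
    """Interleave trigger records with media packets into one stream."""
    return [("T", t) for t in trigger_info] + [("M", p) for p in encoded_stream]

def demultiplex(muxed):
    """Multiply separate the stream back into media packets and triggers."""
    media = [p for kind, p in muxed if kind == "M"]
    triggers = [t for kind, t in muxed if kind == "T"]
    return media, triggers
```

The receiving side's control section would then act on each recovered trigger (for example, launching the application program that runs in conjunction with the contents).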
7,590 | 7,590 | 15,492,362 | 2,422 | A UHD display presents HD video in the native resolution of HD, leaving some portions of the UHD display unused for presenting the HD video. Ancillary information received, for example, in real time with the HD video or in parallel with the HD video over the Internet is presented in the unused portions of the UHD display along with the HD video. | 1. Assembly comprising:
ultra high definition (UHD) display configured for presenting video in 2160 pixel lines or 4320 pixel lines; processor configured for controlling the UHD display; a browser that is executed by the processor; and computer readable storage medium bearing instructions executable by the processor to: present high definition (HD) video on the UHD display using at least 1440 of the pixel lines, wherein portions of the display do not present HD video when HD video is being presented elsewhere on the display; and present ancillary information in the portions of the display that do not present HD video, the ancillary information being receivable from a source of TV signals or from the Internet in real time with the HD video. 2. The assembly of claim 1, wherein the processor when executing the instructions presents the HD video using at least 1920 lines of the UHD display. 3. The assembly of claim 1, wherein the ancillary information is received from the source of TV signals along with the HD video in a common channel with the HD video. 4. The assembly of claim 1, wherein the ancillary information is received from the Internet. 5. The assembly of claim 1, comprising a user input device configured for communicating with the processor to input first and second user commands, the first user command being to present the HD video on the entire UHD display by upscaling the HD video, the second user command being to present the HD video on a portion of the UHD display and to present on the UHD display the ancillary information along with the HD video. 6. The assembly of claim 1, wherein the ancillary information is configured for being ignored by non-UHD assemblies. 7. Method comprising:
receiving, at an ultra high definition (UHD) display characterized by a first resolution, high definition (HD) video characterized by a second resolution less than the first resolution; presenting the HD video on the UHD display without upscaling the HD video to fill the entire UHD display to thereby render portions of the UHD display that do not present the HD video; and presenting ancillary information in the portions of the UHD display that do not present the HD video. 8. The method of claim 7, wherein the UHD display presents video in 2160 pixel lines or 4320 pixel lines and the HD video uses at least 1440 of the pixel lines on the UHD display. 9. The method of claim 7, wherein the HD video uses at least 1920 lines of the UHD display. 10. The method of claim 7, comprising receiving the ancillary information from a source of TV signals along with the HD video in a common channel with the HD video. 11. The method of claim 7, comprising receiving the ancillary information from the Internet. 12. The method of claim 7, comprising receiving from a user input device first and second user commands, the first user command being to present the HD video on the entire UHD display by upscaling the HD video, the second user command being to present the HD video on a portion of the UHD display and to present on the UHD display the ancillary information along with the HD video. 13. The method of claim 7, wherein the ancillary information is configured for being ignored by non-UHD assemblies. 14. An ultra high definition (UHD) display device comprising:
a UHD display configured for presenting non-UHD video in a native resolution of the non-UHD video, leaving some portions of the UHD display unused for presenting non-UHD video to establish unused display portions; a processor configured for controlling the UHD display to present demanded images, the processor configured for causing ancillary information received in real time with the non-UHD video or in parallel with the non-UHD video over the Internet to be presented in the unused display portions of the UHD display along with the non-UHD video; and a browser that is executed by the processor. 15. The device of claim 14, wherein the ancillary information is received from a source of TV signals along with the non-UHD video in a common channel with the non-UHD video. 16. The device of claim 14, wherein the ancillary information is received from the Internet. 17. The device of claim 14, comprising a user input device configured for communicating with the processor to input first and second user commands, the first user command being to present the non-UHD video on the entire UHD display by upscaling the non-UHD video, the second user command being to present the non-UHD video on a portion of the UHD display and to present on the UHD display the ancillary information along with the non-UHD video. 18. The device of claim 14, wherein the ancillary information is configured for being ignored by non-UHD assemblies. | A UHD display presents HD video in the native resolution of HD, leaving some portions of the UHD display unused for presenting the HD video. Ancillary information received, for example, in real time with the HD video or in parallel with the HD video over the Internet is presented in the unused portions of the UHD display along with the HD video.1. Assembly comprising:
ultra high definition (UHD) display configured for presenting video in 2160 pixel lines or 4320 pixel lines; processor configured for controlling the UHD display; a browser that is executed by the processor; and computer readable storage medium bearing instructions executable by the processor to: present high definition (HD) video on the UHD display using at least 1440 of the pixel lines, wherein portions of the display do not present HD video when HD video is being presented elsewhere on the display; and present ancillary information in the portions of the display that do not present HD video, the ancillary information being receivable from a source of TV signals or from the Internet in real time with the HD video. 2. The assembly of claim 1, wherein the processor when executing the instructions presents the HD video using at least 1920 lines of the UHD display. 3. The assembly of claim 1, wherein the ancillary information is received from the source of TV signals along with the HD video in a common channel with the HD video. 4. The assembly of claim 1, wherein the ancillary information is received from the Internet. 5. The assembly of claim 1, comprising a user input device configured for communicating with the processor to input first and second user commands, the first user command being to present the HD video on the entire UHD display by upscaling the HD video, the second user command being to present the HD video on a portion of the UHD display and to present on the UHD display the ancillary information along with the HD video. 6. The assembly of claim 1, wherein the ancillary information is configured for being ignored by non-UHD assemblies. 7. Method comprising:
receiving, at an ultra high definition (UHD) display characterized by a first resolution, high definition (HD) video characterized by a second resolution less than the first resolution; presenting the HD video on the UHD display without upscaling the HD video to fill the entire UHD display to thereby render portions of the UHD display that do not present the HD video; and presenting ancillary information in the portions of the UHD display that do not present the HD video. 8. The method of claim 7, wherein the UHD display presents video in 2160 pixel lines or 4320 pixel lines and the HD video uses at least 1440 of the pixel lines on the UHD display. 9. The method of claim 7, wherein the HD video uses at least 1920 lines of the UHD display. 10. The method of claim 7, comprising receiving the ancillary information from a source of TV signals along with the HD video in a common channel with the HD video. 11. The method of claim 7, comprising receiving the ancillary information from the Internet. 12. The method of claim 7, comprising receiving from a user input device first and second user commands, the first user command being to present the HD video on the entire UHD display by upscaling the HD video, the second user command being to present the HD video on a portion of the UHD display and to present on the UHD display the ancillary information along with the HD video. 13. The method of claim 7, wherein the ancillary information is configured for being ignored by non-UHD assemblies. 14. An ultra high definition (UHD) display device comprising:
a UHD display configured for presenting non-UHD video in a native resolution of the non-UHD video, leaving some portions of the UHD display unused for presenting non-UHD video to establish unused display portions; a processor configured for controlling the UHD display to present demanded images, the processor configured for causing ancillary information received in real time with the non-UHD video or in parallel with the non-UHD video over the Internet to be presented in the unused display portions of the UHD display along with the non-UHD video; and a browser that is executed by the processor. 15. The device of claim 14, wherein the ancillary information is received from a source of TV signals along with the non-UHD video in a common channel with the non-UHD video. 16. The device of claim 14, wherein the ancillary information is received from the Internet. 17. The device of claim 14, comprising a user input device configured for communicating with the processor to input first and second user commands, the first user command being to present the non-UHD video on the entire UHD display by upscaling the non-UHD video, the second user command being to present the non-UHD video on a portion of the UHD display and to present on the UHD display the ancillary information along with the non-UHD video. 18. The device of claim 14, wherein the ancillary information is configured for being ignored by non-UHD assemblies. | 2,400
7,591 | 7,591 | 15,656,691 | 2,422 | A multichromic filtering coating is applied to a projector screen to pass to the projector screen substrate only those wavelengths produced by the projector, to accentuate selective wavelengths of light to be reflected by the screen. The screen can be a passive black substrate or an active grayscale screen such as e-ink paper, and un-reflected light reaches the screen which selectively tunes its grayscale to accentuate the brightness or darkness of the color video image being projected onto it. | 1. An assembly comprising:
at least one substrate against which color video can be projected by a projector, the substrate comprising pixels actuatable to establish grayscale values on the substrate; and at least one multichromic filtering coating disposed on the substrate, the multichromic filtering coating comprising molecules passing to the substrate only those wavelengths produced by the projector. 2. The assembly of claim 1, comprising a projector configured to project color video onto the substrate. 3. The assembly of claim 1, wherein the substrate comprises e-ink. 4. The assembly of claim 2, wherein the projector includes an ultra-short throw (UST) projector. 5. The assembly of claim 1, wherein the multichromic filtering coating comprises multichromic filtering particles (MFP), the MFP in combination passing frequencies of projector light to the substrate and not passing to the substrate at least one filtered frequency of ambient light. 6. The assembly of claim 1, wherein the multichromic filtering coating comprises a layer containing multichromic filtering particles (MFP) that are linearly disposed in the coating along parallel lines of molecules. 7. A method comprising:
identifying at least multiple visible light frequencies characteristic of a color projector, at least a non-characteristic visible light frequency not being characteristic of the color projector; and coating a projector screen substrate with multichromic material that passes the multiple visible light frequencies to the substrate and that does not pass to the substrate the non-characteristic visible light frequency, the multichromic material comprising: a mixture of multichromic particles that pass light from the color projector and that do not pass light at wavelengths other than light produced by the projector. 8. The method of claim 7, comprising:
identifying at least first, second, and third visible light frequencies characteristic of the color projector, the multichromic material passing the first, second, and third visible light frequencies. 9. The method of claim 7, comprising activating elements of the projector screen substrate to establish plural grayscale values thereon. 10. The method of claim 9, comprising projecting color light onto the projector screen substrate. 11. The method of claim 10, comprising synchronizing video represented by the color light with the grayscale values of the projector screen substrate. 12. An assembly comprising:
at least one substrate against which color video pixels can be projected by at least one projector to impinge against the substrate in plural projector-produced wavelengths, the substrate comprising screen pixels actuatable to establish grayscale values on the substrate, the color video pixels being larger than the screen pixels; and at least one multichromic substance (MS) disposed on the substrate, the MS passing to the substrate the plural projector-produced wavelengths and no other wavelengths. 13. The assembly of claim 12, comprising a projector configured to project color video onto the substrate. 14. The assembly of claim 12, wherein the substrate comprises e-ink. 15. The assembly of claim 13, wherein the projector includes an ultra-short throw (UST) projector. 16. The assembly of claim 12, wherein the MS passes no other light other than red, green, and blue. 17. The assembly of claim 12, wherein the MS passes yellow light. 18. The assembly of claim 12, wherein the MS passes to the substrate wavelengths between 440 nm and 450 nm, 635 nm-645 nm, and 525-540 nm and no other wavelengths. 19. The assembly of claim 12, wherein the MS primarily passes to the substrate wavelengths of 445 nm, 638-639 nm, and 530 nm or 545 nm and substantially no other wavelengths. 20. The assembly of claim 12, wherein the MS passes to the substrate all wavelengths in the range 445 nm-639 nm. | A multichromic filtering coating is applied to a projector screen to pass to the projector screen substrate only those wavelengths produced by the projector, to accentuate selective wavelengths of light to be reflected by the screen. The screen can be a passive black substrate or an active grayscale screen such as e-ink paper, and un-reflected light reaches the screen which selectively tunes its grayscale to accentuate the brightness or darkness of the color video image being projected onto it. 1. An assembly comprising:
at least one substrate against which color video can be projected by a projector, the substrate comprising pixels actuatable to establish grayscale values on the substrate; and at least one multichromic filtering coating disposed on the substrate, the multichromic filtering coating comprising molecules passing to the substrate only those wavelengths produced by the projector. 2. The assembly of claim 1, comprising a projector configured to project color video onto the substrate. 3. The assembly of claim 1, wherein the substrate comprises e-ink. 4. The assembly of claim 2, wherein the projector includes an ultra-short throw (UST) projector. 5. The assembly of claim 1, wherein the multichromic filtering coating comprises multichromic filtering particles (MFP), the MFP in combination passing frequencies of projector light to the substrate and not passing to the substrate at least one filtered frequency of ambient light. 6. The assembly of claim 1, wherein the multichromic filtering coating comprises a layer containing multichromic filtering particles (MFP) that are linearly disposed in the coating along parallel lines of molecules. 7. A method comprising:
identifying at least multiple visible light frequencies characteristic of a color projector, at least a non-characteristic visible light frequency not being characteristic of the color projector; and coating a projector screen substrate with multichromic material that passes the multiple visible light frequencies to the substrate and that does not pass to the substrate the non-characteristic visible light frequency, the multichromic material comprising: a mixture of multichromic particles that pass light from the color projector and that do not pass light at wavelengths other than light produced by the projector. 8. The method of claim 7, comprising:
identifying at least first, second, and third visible light frequencies characteristic of the color projector, the multichromic material passing the first, second, and third visible light frequencies. 9. The method of claim 7, comprising activating elements of the projector screen substrate to establish plural grayscale values thereon. 10. The method of claim 9, comprising projecting color light onto the projector screen substrate. 11. The method of claim 10, comprising synchronizing video represented by the color light with the grayscale values of the projector screen substrate. 12. An assembly comprising:
at least one substrate against which color video pixels can be projected by at least one projector to impinge against the substrate in plural projector-produced wavelengths, the substrate comprising screen pixels actuatable to establish grayscale values on the substrate, the color video pixels being larger than the screen pixels; and at least one multichromic substance (MS) disposed on the substrate, the MS passing to the substrate the plural projector-produced wavelengths and no other wavelengths. 13. The assembly of claim 12, comprising a projector configured to project color video onto the substrate. 14. The assembly of claim 12, wherein the substrate comprises e-ink. 15. The assembly of claim 13, wherein the projector includes an ultra-short throw (UST) projector. 16. The assembly of claim 12, wherein the MS passes no other light other than red, green, and blue. 17. The assembly of claim 12, wherein the MS passes yellow light. 18. The assembly of claim 12, wherein the MS passes to the substrate wavelengths between 440 nm and 450 nm, 635 nm-645 nm, and 525-540 nm and no other wavelengths. 19. The assembly of claim 12, wherein the MS primarily passes to the substrate wavelengths of 445 nm, 638-639 nm, and 530 nm or 545 nm and substantially no other wavelengths. 20. The assembly of claim 12, wherein the MS passes to the substrate all wavelengths in the range 445 nm-639 nm. | 2,400
7,592 | 7,592 | 15,656,495 | 2,422 | A multichromic reflective coating is applied to a projector screen to reflect only those wavelengths produced by the projector, to accentuate selective wavelengths of light to be reflected. The screen can be a passive black substrate or an active grayscale screen such as e-ink paper, and un-reflected light reaches the screen which selectively tunes its grayscale to accentuate the brightness or darkness of the color video image being projected onto it and reflected by the multichromic reflective coating. | 1. An assembly comprising:
at least one substrate against which color video can be projected by at least one projector, the substrate comprising pixels actuatable to establish grayscale values on the substrate; and at least one multichromic reflective coating disposed on the substrate, the multichromic reflective coating reflecting only wavelengths of light produced by the projector. 2. The assembly of claim 1, comprising a projector configured to project color video onto the substrate. 3. The assembly of claim 1, wherein the substrate comprises e-ink and the grayscale values are derived from full color video. 4. The assembly of claim 2, wherein the projector includes an ultra-short throw (UST) projector. 5. The assembly of claim 2, wherein the multichromic reflective coating comprises a single layer of multichromic reflective particles (MRP) mixed together, each MRP reflecting red, green, or blue light such that the single layer reflects red, green, and blue light. 6. The assembly of claim 1, wherein the multichromic reflective coating comprises at least first and second sublayers, the first sublayer including at least first multichromic reflective particles (MRP) reflecting at least a first frequency of visible light, the second sublayer including MRP reflecting at least a second frequency of visible light different from the first frequency of visible light. 7. A method comprising:
identifying at least multiple visible light frequencies characteristic of a color projector, at least a non-characteristic visible light frequency not being characteristic of the color projector; and coating a projector substrate with multichromic material that reflects the multiple visible light frequencies and that does not reflect the non-characteristic visible light frequency, the multichromic material comprising: a mixture of multichromic reflective particles that reflect light from the color projector and that do not reflect light at wavelengths other than light produced by the projector, the reflective particles being mixed into a layer of plastic disposable onto the projector substrate. 8. The method of claim 7, comprising:
identifying at least first, second, and third visible light frequencies characteristic of the color projector, the multichromic material reflecting the first, second, and third visible light frequencies. 9. The method of claim 7, comprising activating elements of the projector substrate to establish plural grayscale values thereon. 10. The method of claim 9, comprising projecting color light onto the projector substrate. 11. The method of claim 10, comprising synchronizing video represented by the color light with the grayscale values of the projector substrate. 12. An assembly comprising:
at least one substrate against which color video can be projected by at least one projector to impinge against the substrate in plural projector-produced wavelengths, the substrate comprising pixels actuatable to establish grayscale values on the substrate; and at least one multichromic substance (MS) disposed on the substrate, the MS reflecting the plural projector-produced wavelengths and no other wavelengths. 13. The assembly of claim 12, comprising a projector configured to project color video onto the substrate. 14. The assembly of claim 12, wherein the substrate comprises e-ink. 15. The assembly of claim 13, wherein the projector includes an ultra-short throw (UST) projector. 16. The assembly of claim 12, wherein the MS reflects no other light other than red, green, and blue. 17. The assembly of claim 12, wherein the MS reflects yellow light. 18. The assembly of claim 12, wherein the MS reflects wavelengths between 440 nm and 450 nm, 635 nm-645 nm, and 525-540 nm and no other wavelengths. 19. The assembly of claim 12, wherein the MS primarily reflects wavelengths of 445 nm, 638-639 nm, and 530 nm or 545 nm and substantially no other wavelengths. 20. The assembly of claim 12, wherein the MS reflects all wavelengths in the range 445 nm-639 nm. | A multichromic reflective coating is applied to a projector screen to reflect only those wavelengths produced by the projector, to accentuate selective wavelengths of light to be reflected. The screen can be a passive black substrate or an active grayscale screen such as e-ink paper, and un-reflected light reaches the screen which selectively tunes its grayscale to accentuate the brightness or darkness of the color video image being projected onto it and reflected by the multichromic reflective coating. 1. An assembly comprising:
at least one substrate against which color video can be projected by at least one projector, the substrate comprising pixels actuatable to establish grayscale values on the substrate; and at least one multichromic reflective coating disposed on the substrate, the multichromic reflective coating reflecting only wavelengths of light produced by the projector. 2. The assembly of claim 1, comprising a projector configured to project color video onto the substrate. 3. The assembly of claim 1, wherein the substrate comprises e-ink and the grayscale values are derived from full color video. 4. The assembly of claim 2, wherein the projector includes an ultra-short throw (UST) projector. 5. The assembly of claim 2, wherein the multichromic reflective coating comprises a single layer of multichromic reflective particles (MRP) mixed together, each MRP reflecting red, green, or blue light such that the single layer reflects red, green, and blue light. 6. The assembly of claim 1, wherein the multichromic reflective coating comprises at least first and second sublayers, the first sublayer including at least first multichromic reflective particles (MRP) reflecting at least a first frequency of visible light, the second sublayer including MRP reflecting at least a second frequency of visible light different from the first frequency of visible light. 7. A method comprising:
identifying at least multiple visible light frequencies characteristic of a color projector, at least a non-characteristic visible light frequency not being characteristic of the color projector; and coating a projector substrate with multichromic material that reflects the multiple visible light frequencies and that does not reflect the non-characteristic visible light frequency, the multichromic material comprising: a mixture of multichromic reflective particles that reflect light from the color projector and that do not reflect light at wavelengths other than light produced by the projector, the reflective particles being mixed into a layer of plastic disposable onto the projector substrate. 8. The method of claim 7, comprising:
identifying at least first, second, and third visible light frequencies characteristic of the color projector, the multichromic material reflecting the first, second, and third visible light frequencies. 9. The method of claim 7, comprising activating elements of the projector substrate to establish plural grayscale values thereon. 10. The method of claim 9, comprising projecting color light onto the projector substrate. 11. The method of claim 10, comprising synchronizing video represented by the color light with the grayscale values of the projector substrate. 12. An assembly comprising:
at least one substrate against which color video can be projected by at least one projector to impinge against the substrate in plural projector-produced wavelengths, the substrate comprising pixels actuatable to establish grayscale values on the substrate; and at least one multichromic substance (MS) disposed on the substrate, the MS reflecting the plural projector-produced wavelengths and no other wavelengths. 13. The assembly of claim 12, comprising a projector configured to project color video onto the substrate. 14. The assembly of claim 12, wherein the substrate comprises e-ink. 15. The assembly of claim 13, wherein the projector includes an ultra-short throw (UST) projector. 16. The assembly of claim 12, wherein the MS reflects no other light other than red, green, and blue. 17. The assembly of claim 12, wherein the MS reflects yellow light. 18. The assembly of claim 12, wherein the MS reflects wavelengths between 440 nm and 450 nm, 635 nm-645 nm, and 525-540 nm and no other wavelengths. 19. The assembly of claim 12, wherein the MS primarily reflects wavelengths of 445 nm, 638-639 nm, and 530 nm or 545 nm and substantially no other wavelengths. 20. The assembly of claim 12, wherein the MS reflects all wavelengths in the range 445 nm-639 nm. | 2,400
7,593 | 7,593 | 14,785,653 | 2,433 | A mobile provisioning system, method, and apparatus are provided. The mobile provisioning method is disclosed to enable a first mobile device to provision or write one or more guest identification objects to a second mobile device. The guest identification objects may be written only if the first mobile device has the appropriate permissions and may further be limited in their use as compared to non-guest identification objects. | 1. A method, comprising:
establishing a device-to-device connection between a trusted mobile device and a visitor mobile device and during the device-to-device connection, receiving, at the trusted mobile device, at least some information describing the visitor mobile device; generating at the trusted mobile device a request for a guest credential to be issued to the visitor mobile device, the request containing the at least some information describing the visitor mobile device and sending said request to a credential issuer; analyzing the request by the credential issuer to determine that the trusted mobile device is allowed to provision the visitor mobile device with the guest credential; and based on the analysis of the request, determining whether or not to generate the guest credential. 2. The method of claim 1, further comprising:
determining that the trusted mobile device is allowed to provision the visitor mobile device with the guest credential; and generating the guest credential. 3. The method of claim 2, further comprising:
transmitting the guest credential to the visitor mobile device. 4. The method of claim 3, wherein the guest credential is transmitted to the visitor mobile device via the trusted mobile device. 5. The method of claim 4, wherein the trusted mobile device writes the guest credential to the visitor mobile device using Near Field Communications. 6. The method of claim 4, wherein the trusted mobile device writes the guest credential to the visitor mobile device using Bluetooth. 7. The method of claim 2, further comprising:
determining that one or more limitations of use are to be placed on the guest credential; and incorporating the one or more limitations of use in the generated guest credential. 8. The method of claim 7, wherein the one or more limitations include at least one of: (i) an escort restriction; (ii) a time of use restriction; and (iii) a locational restriction. 9. The method of claim 2, wherein determining that the trusted mobile device is allowed to provision the visitor mobile device with the guest credential comprises analyzing information about at least one of the trusted mobile device and a user of the trusted mobile device. 10. The method of claim 9, wherein a location of the trusted mobile device is used to determine whether the trusted mobile device is allowed to provision the visitor mobile device with the guest credential. 11. The method of claim 9, wherein a credential provided by the trusted mobile device is analyzed to determine whether the trusted mobile device is allowed to provision the visitor mobile device with the guest credential. 12. A mobile device, comprising:
a mobile device interface enabling the mobile device to establish a device-to-device connection with a visitor mobile device and receive at least some information describing the visitor mobile device; and a credential request unit configured to generate and send a request for the guest credential to a credential issuer on behalf of the visitor mobile device, the request containing the at least some information describing the visitor mobile device. 13. The mobile device of claim 12, wherein the credential request unit is further configured to receive the guest credential from the credential issuer and provide the guest credential to the mobile device interface for writing to the visitor mobile device. 14. The mobile device of claim 12, wherein the mobile device interface comprises a Near Field Communications interface. 15. The mobile device of claim 14, wherein the Near Field Communications interface is configured to write the guest credential to the visitor mobile device in a transparent writing mode. | A mobile provisioning system, method, and apparatus are provided. The mobile provisioning method is disclosed to enable a first mobile device to provision or write one or more guest identification objects to a second mobile device. The guest identification objects may be written only if the first mobile device has the appropriate permissions and may further be limited in their use as compared to non-guest identification objects. 1. A method, comprising:
establishing a device-to-device connection between a trusted mobile device and a visitor mobile device and during the device-to-device connection, receiving, at the trusted mobile device, at least some information describing the visitor mobile device; generating at the trusted mobile device a request for a guest credential to be issued to the visitor mobile device, the request containing the at least some information describing the visitor mobile device and sending said request to a credential issuer; analyzing the request by the credential issuer to determine that the trusted mobile device is allowed to provision the visitor mobile device with the guest credential; and based on the analysis of the request, determining whether or not to generate the guest credential. 2. The method of claim 1, further comprising:
determining that the trusted mobile device is allowed to provision the visitor mobile device with the guest credential; and generating the guest credential. 3. The method of claim 2, further comprising:
transmitting the guest credential to the visitor mobile device. 4. The method of claim 3, wherein the guest credential is transmitted to the visitor mobile device via the trusted mobile device. 5. The method of claim 4, wherein the trusted mobile device writes the guest credential to the visitor mobile device using Near Field Communications. 6. The method of claim 4, wherein the trusted mobile device writes the guest credential to the visitor mobile device using Bluetooth. 7. The method of claim 2, further comprising:
determining that one or more limitations of use are to be placed on the guest credential; and incorporating the one or more limitations of use in the generated guest credential. 8. The method of claim 7, wherein the one or more limitations include at least one of: (i) an escort restriction; (ii) a time of use restriction; and (iii) a locational restriction. 9. The method of claim 2, wherein determining that the trusted mobile device is allowed to provision the visitor mobile device with the guest credential comprises analyzing information about at least one of the trusted mobile device and a user of the trusted mobile device. 10. The method of claim 9, wherein a location of the trusted mobile device is used to determine whether the trusted mobile device is allowed to provision the visitor mobile device with the guest credential. 11. The method of claim 9, wherein a credential provided by the trusted mobile device is analyzed to determine whether the trusted mobile device is allowed to provision the visitor mobile device with the guest credential. 12. A mobile device, comprising:
a mobile device interface enabling the mobile device to establish a device-to-device connection with a visitor mobile device and receive at least some information describing the visitor mobile device; and a credential request unit configured to generate and send a request for the guest credential to a credential issuer on behalf of the visitor mobile device, the request containing the at least some information describing the visitor mobile device. 13. The mobile device of claim 12, wherein the credential request unit is further configured to receive the guest credential from the credential issuer and provide the guest credential to the mobile device interface for writing to the visitor mobile device. 14. The mobile device of claim 12, wherein the mobile device interface comprises a Near Field Communications interface. 15. The mobile device of claim 14, wherein the Near Field Communications interface is configured to write the guest credential to the visitor mobile device in a transparent writing mode. | 2,400 |
7,594 | 7,594 | 13,891,983 | 2,457 | In some implementations, a website can be certified by a push notification service operator to send push notifications to user devices. A web browser on the user's device can communicate with the website to advertise the user device's ability to receive push notifications. The website can provide to the web browser a certificate indicating that the website is authorized to utilize the push notification service. If the certificate is valid and has not been revoked, the browser can prompt the user to allow push notifications from the website. If the user authorizes push notifications, a device token can be provided to the website that allows the website to send push notifications to the user device through the push notification service. In some implementations, the web browser can be configured to provide websites access to APIs for accessing information stored on a user device. | 1. A method comprising:
receiving, at a web browser application executing on a computing device, a certificate associated with a push notification provider; determining, by the web browser, that the certificate is valid; in response to the determination, presenting a prompt on a user interface of the web browser requesting that a user approve receiving push notifications from the push notification provider; receiving approval from the user for the website to send push notifications to the computing device; and in response to receiving approval from the user, transmitting, from the web browser to the push notification provider, a device token that identifies the computing device to a push notification service. 2. The method of claim 1, wherein the certificate indicates that the push notification provider is trusted to send push notifications to user devices. 3. The method of claim 1, further comprising:
receiving, at an operating system service of the computing device, a push notification from a push notification server; and displaying the push notification on a user interface of an operating system of the computing device. 4. The method of claim 1, further comprising:
downloading, by the web browser, a webpage of the website; obtaining a network address from the webpage; and downloading the certificate based on the network address. 5. A method comprising:
receiving, at a web browser application executing on a computing device, a certificate associated with a website; determining, by the web browser, that the certificate is valid; in response to the determination, presenting a prompt on a user interface of the web browser, the prompt requesting that a user allow the website to access a web browser API for accessing information on the computing device; receiving approval from the user for the website to access the API; and in response to receiving approval from the user, sending the website approval to access the API. 6. The method of claim 5, further comprising:
receiving, at the web browser, an invocation of the API from the website, the API allowing the website to access information stored on the computing device. 7. The method of claim 5, further comprising:
allowing, through the web browser, the website to access contacts information stored on the computing device through the API. 8. A non-transitory computer-readable medium including one or more sequences of instructions which, when executed by one or more processors, causes:
receiving, at a web browser application executing on a computing device, a certificate associated with a push notification provider; determining, by the web browser, that the certificate is valid; in response to the determination, presenting a prompt on a user interface of the web browser requesting that a user approve receiving push notifications from the push notification provider; receiving approval from the user for the website to send push notifications to the computing device; and in response to receiving approval from the user, transmitting, from the web browser to the push notification provider, a device token that identifies the computing device to a push notification service. 9. The non-transitory computer-readable medium of claim 8, wherein the certificate indicates that the push notification provider is trusted to send push notifications to user devices. 10. The non-transitory computer-readable medium of claim 8, wherein the instructions cause:
receiving, at an operating system service of the computing device, a push notification from a push notification server; and displaying the push notification on a user interface of an operating system of the computing device. 11. The non-transitory computer-readable medium of claim 8, wherein the instructions cause:
downloading, by the web browser, a webpage of the website; obtaining a network address from the webpage; and downloading the certificate based on the network address. 12. A non-transitory computer-readable medium including one or more sequences of instructions which, when executed by one or more processors, causes:
receiving, at a web browser application executing on a computing device, a certificate associated with a website; determining, by the web browser, that the certificate is valid; in response to the determination, presenting a prompt on a user interface of the web browser, the prompt requesting that a user allow the website to access a web browser API for accessing information on the computing device; receiving approval from the user for the website to access the API; and in response to receiving approval from the user, sending the website approval to access the API. 13. The non-transitory computer-readable medium of claim 12, wherein the instructions cause:
receiving, at the web browser, an invocation of the API from the website, the API allowing the website to access information stored on the computing device. 14. The non-transitory computer-readable medium of claim 12, wherein the instructions cause:
allowing, through the web browser, the website to access contacts information stored on the computing device through the API. 15. A system comprising:
one or more processors; and a computer-readable medium including one or more sequences of instructions which, when executed by the one or more processors, causes:
receiving, at a web browser application executing on a computing device, a certificate associated with a push notification provider;
determining, by the web browser, that the certificate is valid;
in response to the determination, presenting a prompt on a user interface of the web browser requesting that a user approve receiving push notifications from the push notification provider;
receiving approval from the user for the website to send push notifications to the computing device; and
in response to receiving approval from the user, transmitting, from the web browser to the push notification provider, a device token that identifies the computing device to a push notification service. 16. The system of claim 15, wherein the certificate indicates that the push notification provider is trusted to send push notifications to user devices. 17. The system of claim 15, wherein the instructions cause:
receiving, at an operating system service of the computing device, a push notification from a push notification server; and displaying the push notification on a user interface of an operating system of the computing device. 18. The system of claim 15, wherein the instructions cause:
downloading, by the web browser, a webpage of the website; obtaining a network address from the webpage; and downloading the certificate based on the network address. 19. A system comprising:
one or more processors; and a computer-readable medium including one or more sequences of instructions which, when executed by the one or more processors, causes:
receiving, at a web browser application executing on a computing device, a certificate associated with a website;
determining, by the web browser, that the certificate is valid;
in response to the determination, presenting a prompt on a user interface of the web browser, the prompt requesting that a user allow the website to access a web browser API for accessing information on the computing device;
receiving approval from the user for the website to access the API; and
in response to receiving approval from the user, sending the website approval to access the API. 20. The system of claim 19, wherein the instructions cause:
receiving, at the web browser, an invocation of the API from the website, the API allowing the website to access information stored on the computing device. 21. The system of claim 19, wherein the instructions cause:
allowing, through the web browser, the website to access contacts information stored on the computing device through the API. | In some implementations, a website can be certified by a push notification service operator to send push notifications to user devices. A web browser on the user's device can communicate with the website to advertise the user device's ability to receive push notifications. The website can provide to the web browser a certificate indicating that the website is authorized to utilize the push notification service. If the certificate is valid and has not been revoked, the browser can prompt the user to allow push notifications from the website. If the user authorizes push notifications, a device token can be provided to the website that allows the website to send push notifications to the user device through the push notification service. In some implementations, the web browser can be configured to provide websites access to APIs for accessing information stored on a user device.1. A method comprising:
receiving, at a web browser application executing on a computing device, a certificate associated with a push notification provider; determining, by the web browser, that the certificate is valid; in response to the determination, presenting a prompt on a user interface of the web browser requesting that a user approve receiving push notifications from the push notification provider; receiving approval from the user for the website to send push notifications to the computing device; and in response to receiving approval from the user, transmitting, from the web browser to the push notification provider, a device token that identifies the computing device to a push notification service. 2. The method of claim 1, wherein the certificate indicates that the push notification provider is trusted to send push notifications to user devices. 3. The method of claim 1, further comprising:
receiving, at an operating system service of the computing device, a push notification from a push notification server; and displaying the push notification on a user interface of an operating system of the computing device. 4. The method of claim 1, further comprising:
downloading, by the web browser, a webpage of the website; obtaining a network address from the webpage; and downloading the certificate based on the network address. 5. A method comprising:
receiving, at a web browser application executing on a computing device, a certificate associated with a website; determining, by the web browser, that the certificate is valid; in response to the determination, presenting a prompt on a user interface of the web browser, the prompt requesting that a user allow the website to access a web browser API for accessing information on the computing device; receiving approval from the user for the website to access the API; and in response to receiving approval from the user, sending the website approval to access the API. 6. The method of claim 5, further comprising:
receiving, at the web browser, an invocation of the API from the website, the API allowing the website to access information stored on the computing device. 7. The method of claim 5, further comprising:
allowing, through the web browser, the website to access contacts information stored on the computing device through the API. 8. A non-transitory computer-readable medium including one or more sequences of instructions which, when executed by one or more processors, causes:
receiving, at a web browser application executing on a computing device, a certificate associated with a push notification provider; determining, by the web browser, that the certificate is valid; in response to the determination, presenting a prompt on a user interface of the web browser requesting that a user approve receiving push notifications from the push notification provider; receiving approval from the user for the website to send push notifications to the computing device; and in response to receiving approval from the user, transmitting, from the web browser to the push notification provider, a device token that identifies the computing device to a push notification service. 9. The non-transitory computer-readable medium of claim 8, wherein the certificate indicates that the push notification provider is trusted to send push notifications to user devices. 10. The non-transitory computer-readable medium of claim 8, wherein the instructions cause:
receiving, at an operating system service of the computing device, a push notification from a push notification server; and displaying the push notification on a user interface of an operating system of the computing device. 11. The non-transitory computer-readable medium of claim 8, wherein the instructions cause:
downloading, by the web browser, a webpage of the website; obtaining a network address from the webpage; and downloading the certificate based on the network address. 12. A non-transitory computer-readable medium including one or more sequences of instructions which, when executed by one or more processors, causes:
receiving, at a web browser application executing on a computing device, a certificate associated with a website; determining, by the web browser, that the certificate is valid; in response to the determination, presenting a prompt on a user interface of the web browser, the prompt requesting that a user allow the website to access a web browser API for accessing information on the computing device; receiving approval from the user for the website to access the API; and in response to receiving approval from the user, sending the website approval to access the API. 13. The non-transitory computer-readable medium of claim 12, wherein the instructions cause:
receiving, at the web browser, an invocation of the API from the website, the API allowing the website to access information stored on the computing device. 14. The non-transitory computer-readable medium of claim 12, wherein the instructions cause:
allowing, through the web browser, the website to access contacts information stored on the computing device through the API. 15. A system comprising:
one or more processors; and a computer-readable medium including one or more sequences of instructions which, when executed by the one or more processors, causes:
receiving, at a web browser application executing on a computing device, a certificate associated with a push notification provider;
determining, by the web browser, that the certificate is valid;
in response to the determination, presenting a prompt on a user interface of the web browser requesting that a user approve receiving push notifications from the push notification provider;
receiving approval from the user for the website to send push notifications to the computing device; and
in response to receiving approval from the user, transmitting, from the web browser to the push notification provider, a device token that identifies the computing device to a push notification service. 16. The system of claim 15, wherein the certificate indicates that the push notification provider is trusted to send push notifications to user devices. 17. The system of claim 15, wherein the instructions cause:
receiving, at an operating system service of the computing device, a push notification from a push notification server; and displaying the push notification on a user interface of an operating system of the computing device. 18. The system of claim 15, wherein the instructions cause:
downloading, by the web browser, a webpage of the website; obtaining a network address from the webpage; and downloading the certificate based on the network address. 19. A system comprising:
one or more processors; and a computer-readable medium including one or more sequences of instructions which, when executed by the one or more processors, causes:
receiving, at a web browser application executing on a computing device, a certificate associated with a website;
determining, by the web browser, that the certificate is valid;
in response to the determination, presenting a prompt on a user interface of the web browser, the prompt requesting that a user allow the website to access a web browser API for accessing information on the computing device;
receiving approval from the user for the website to access the API; and
in response to receiving approval from the user, sending the website approval to access the API. 20. The system of claim 19, wherein the instructions cause:
receiving, at the web browser, an invocation of the API from the website, the API allowing the website to access information stored on the computing device. 21. The system of claim 19, wherein the instructions cause:
allowing, through the web browser, the website to access contacts information stored on the computing device through the API. | 2,400 |
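The certificate-gated approval flow recited in these claims — validate the push provider's certificate, prompt the user, and only then release the device token — can be sketched as follows. This is a minimal illustration, not the patented implementation; the `Certificate` type, the `ask_user` callback, and the token value are hypothetical names invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Certificate:
    provider: str
    valid: bool
    revoked: bool = False

def register_for_push(cert: Certificate, ask_user, device_token: str):
    """Sketch of the claimed flow: the browser validates the provider's
    certificate, prompts the user, and transmits the device token only
    on approval. Returns the token on success, None otherwise."""
    # Step 1: determine that the certificate is valid (and not revoked).
    if not cert.valid or cert.revoked:
        return None
    # Step 2: present a prompt asking the user to approve push notifications.
    if not ask_user(f"Allow push notifications from {cert.provider}?"):
        return None
    # Step 3: on approval, release the token identifying this device
    # to the push notification service.
    return device_token
```

The token never leaves the browser unless both the certificate check and the user prompt succeed, which is the ordering the claims recite.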
7,595 | 7,595 | 13,517,326 | 2,483 | A method for encoding at least one video stream (IV 1, IV 2 ), includes the steps of: receiving said at least one input video stream (IV 1, IV 2 ), construction of a sequence of predicted pixel blocks (PPB 1, PPB 2 ), processing said sequence of predicted pixel blocks (PPB 1, PPB 2 ) and corresponding blocks of said at least one input video stream (IV 1, IV 2 ) to obtain a sequence of processed residual pixel data (QRPD 1, QRPD 2 ), wherein said sequence of predicted pixel blocks (PPB) is constructed from input encoding structure data (IESD) from reference input data (IREF), said input encoding structure data (IESD) further undergoing a combined entropy encoding step with said processed residual pixel data (QRPD) to thereby obtain at least one encoded video stream (EV 1, EV 2 ). An encoder and several arrangements comprising such an encoder are disclosed as well. | 1. Method for encoding at least one video stream (IV1, IV2), said method includes the steps of:
receiving said at least one input video stream (IV1, IV2), constructing a sequence of predicted pixel blocks (PPB1, PPB2), processing said sequence of predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2) to obtain a sequence of processed residual pixel data (QRPD1, QRPD2), wherein said sequence of predicted pixel blocks (PPB1, PPB2) is constructed from input encoding structure data (IESD) from reference input data (IREF), said input encoding structure data (IESD) further undergoing a combined entropy encoding step with said processed residual pixel data (QRPD1, QRPD2) to thereby obtain at least one encoded video stream (EV1, EV2). 2. Method according to claim 1 wherein said processing comprises generating a sequence of residual pixel blocks (RPB1, RPB2) from the difference between said predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2), transforming and quantizing said sequence of residual pixel blocks (RPB1, RPB2) to thereby obtain said sequence of processed residual pixel data (QRPD1, QRPD2). 3. Method according to claim 1 wherein said reference input data (IREF) comprises encoded input encoding structure data (EIESD) such that the input encoding structure data (IESD) is derived from said reference input data (IREF) by entropy decoding said reference input data (IREF). 4. Method according to claim 1 further including a step of comparing configuration data of said at least one input video stream (IV1) with said input encoding structure data (IESD) and, if the data do not match, said at least one input video stream (IV1) is further preprocessed to thereby generate at least one updated input video stream (UIV1) such that the residual pixel blocks are determined from the difference between said predicted pixel blocks (PPB1) and corresponding blocks of said at least one updated video stream. 5. 
Method according to claim 1 further comprising a step of extracting said reference input data (IREF) from an encoded reference video stream (EVREF, EVREFh). 6. Method according to claim 5 further comprising a step of encoding a reference video stream (VREF) to provide said encoded reference video stream (EVREF). 7. Method according to claim 6 wherein said at least one input video stream (IV1, IV2) is generated from said reference video stream (VREF) and input modification data (delta1, delta2). 8. Method for encoding a plurality of video streams (IV1, IV2), said method including a step of selecting one of said video streams (IV1, IV2) as said reference video stream (VREF) which is further encoded to obtain said encoded reference video stream, and whereby the other video streams are further encoded in accordance with claim 5. 9. Encoder (E1-E8) for encoding at least one video stream (IV1, IV2), said encoder including at least one input terminal (IN1, IN2) for receiving said at least one input video stream (IV1, IV2), said encoder being further adapted to construct a sequence of predicted pixel blocks (PPB1, PPB2), to process said sequence of predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2) to thereby obtain a sequence of processed residual pixel data (QRPD1, QRPD2),
wherein said encoder further includes an additional input terminal (INRef) for receiving reference input data (IREF), and wherein said encoder is further adapted to construct said sequence of predicted pixel blocks (PPB1, PPB2) from input encoding structure data (IESD) from said reference input data (IREF) and to entropy encode said reference input data (IREF) in combination with said processed residual pixel data (QRPD1, QRPD2) to thereby generate at least one encoded video stream (EV1, EV2) for provision to at least one output terminal (OUT1, OUT2) of said encoder. 10. Encoder (E1-E8) according to claim 9 further comprising an entropy encoder and a combiner (C; C1, C2). 11. Encoder (E1-E8) according to claim 9, further being adapted to process said predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2) by generating a sequence of residual pixel blocks (RPB1, RPB2) from the difference between said predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2), transforming and quantizing said sequence of residual pixel blocks (RPB1, RPB2) to thereby obtain said sequence of processed residual pixel data (QRPD1, QRPD2). 12. Encoder (E2, E3, E4, E6) according to claim 9 wherein said reference input data (IREF) comprises encoded input encoding structure data (EIESD) and wherein said encoder (E2) further comprises an entropy decoder (ED1) for entropy decoding said reference input data (IREF) for generating said input encoding structure data (IESD). 13. 
Encoder (E8) according to claim 9 further being adapted to compare configuration data of said at least one input video stream (IV1) with said input encoding structure data (IESD) and, if the data do not match, to preprocess said at least one input video stream (IV1) to thereby generate at least one updated input video stream (UIV1) such that said residual pixel blocks are determined from the difference between said predicted pixel blocks (PPB1) and corresponding blocks of said at least one updated input video stream (UIV1). 14. First arrangement (A1) including an encoder (E1-E8) according to claim 9 and an apparatus (A,B) adapted to extract said reference input data (IREF) from an encoded reference video stream (EVREF, EVREFh) for provision to said encoder (E1-E8). 15. Second arrangement (A2) comprising a first arrangement (A1) according to claim 14 and an encoder (ET) for encoding a reference video stream (VREF) such as to provide the thus obtained encoded reference stream (EVREF) to said first arrangement (A1). 16. Third arrangement (A3) comprising a second arrangement (A2) according to claim 15 and comprising at least one video combining means (VCM1, VCM2) for generating said at least one input video stream (IV1, IV2) from said input reference video stream (VREF) and from input modification data (delta1, delta2) for provision to said second arrangement (A2). 17. 
Fourth arrangement (A4; A4 b) adapted to receive a plurality of input video streams (IV1,IV2) and comprising selection means (S) for selecting an input video stream (IV1) of said plurality as a reference video stream, further comprising an encoder (ET) for encoding said reference video stream to thereby generate an encoded reference video stream (EV1) for provision to a first output of said fourth arrangement (A4, A4 b) and for provision to a first arrangement (A1) according to claim 14 comprised within said fourth arrangement, said first arrangement being further adapted to encode the other input video stream (IV2) of said plurality, and to provide the other encoded video stream (EV2) to other outputs of said fourth arrangement (A4, A4 b). | A method for encoding at least one video stream (IV 1, IV 2 ), includes the steps of: receiving said at least one input video stream (IV 1, IV 2 ), construction of a sequence of predicted pixel blocks (PPB 1, PPB 2 ), processing said sequence of predicted pixel blocks (PPB 1, PPB 2 ) and corresponding blocks of said at least one input video stream (IV 1, IV 2 ) to obtain a sequence of processed residual pixel data (QRPD 1, QRPD 2 ), wherein said sequence of predicted pixel blocks (PPB) is constructed from input encoding structure data (IESD) from reference input data (IREF), said input encoding structure data (IESD) further undergoing a combined entropy encoding step with said processed residual pixel data (QRPD) to thereby obtain at least one encoded video stream (EV 1, EV 2 ). An encoder and several arrangements comprising such an encoder are disclosed as well.1. Method for encoding at least one video stream (IV1, IV2), said method includes the steps of:
receiving said at least one input video stream (IV1, IV2), constructing a sequence of predicted pixel blocks (PPB1, PPB2), processing said sequence of predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2) to obtain a sequence of processed residual pixel data (QRPD1, QRPD2), wherein said sequence of predicted pixel blocks (PPB1, PPB2) is constructed from input encoding structure data (IESD) from reference input data (IREF), said input encoding structure data (IESD) further undergoing a combined entropy encoding step with said processed residual pixel data (QRPD1, QRPD2) to thereby obtain at least one encoded video stream (EV1, EV2). 2. Method according to claim 1 wherein said processing comprises generating a sequence of residual pixel blocks (RPB1, RPB2) from the difference between said predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2), transforming and quantizing said sequence of residual pixel blocks (RPB1, RPB2) to thereby obtain said sequence of processed residual pixel data (QRPD1, QRPD2). 3. Method according to claim 1 wherein said reference input data (IREF) comprises encoded input encoding structure data (EIESD) such that the input encoding structure data (IESD) is derived from said reference input data (IREF) by entropy decoding said reference input data (IREF). 4. Method according to claim 1 further including a step of comparing configuration data of said at least one input video stream (IV1) with said input encoding structure data (IESD) and, if the data do not match, said at least one input video stream (IV1) is further preprocessed to thereby generate at least one updated input video stream (UIV1) such that the residual pixel blocks are determined from the difference between said predicted pixel blocks (PPB1) and corresponding blocks of said at least one updated video stream. 5. 
Method according to claim 1 further comprising a step of extracting said reference input data (IREF) from an encoded reference video stream (EVREF, EVREFh). 6. Method according to claim 5 further comprising a step of encoding a reference video stream (VREF) to provide said encoded reference video stream (EVREF). 7. Method according to claim 6 wherein said at least one input video stream (IV1, IV2) is generated from said reference video stream (VREF) and input modification data (delta1, delta2). 8. Method for encoding a plurality of video streams (IV1, IV2), said method including a step of selecting one of said video streams (IV1, IV2) as said reference video stream (VREF) which is further encoded to obtain said encoded reference video stream, and whereby the other video streams are further encoded in accordance with claim 5. 9. Encoder (E1-E8) for encoding at least one video stream (IV1, IV2), said encoder including at least one input terminal (IN1, IN2) for receiving said at least one input video stream (IV1, IV2), said encoder being further adapted to construct a sequence of predicted pixel blocks (PPB1, PPB2), to process said sequence of predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2) to thereby obtain a sequence of processed residual pixel data (QRPD1, QRPD2),
wherein said encoder further includes an additional input terminal (INRef) for receiving reference input data (IREF), and wherein said encoder is further adapted to construct said sequence of predicted pixel blocks (PPB1, PPB2) from input encoding structure data (IESD) from said reference input data (IREF) and to entropy encode said reference input data (IREF) in combination with said processed residual pixel data (QRPD1, QRPD2) to thereby generate at least one encoded video stream (EV1, EV2) for provision to at least one output terminal (OUT1, OUT2) of said encoder. 10. Encoder (E1-E8) according to claim 9 further comprising an entropy encoder and a combiner (C; C1, C2). 11. Encoder (E1-E8) according to claim 9, further being adapted to process said predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2) by generating a sequence of residual pixel blocks (RPB1, RPB2) from the difference between said predicted pixel blocks (PPB1, PPB2) and corresponding blocks of said at least one input video stream (IV1, IV2), transforming and quantizing said sequence of residual pixel blocks (RPB1, RPB2) to thereby obtain said sequence of processed residual pixel data (QRPD1, QRPD2). 12. Encoder (E2, E3, E4, E6) according to claim 9 wherein said reference input data (IREF) comprises encoded input encoding structure data (EIESD) and wherein said encoder (E2) further comprises an entropy decoder (ED1) for entropy decoding said reference input data (IREF) for generating said input encoding structure data (IESD). 13. 
Encoder (E8) according to claim 9 further being adapted to compare configuration data of said at least one input video stream (IV1) with said input encoding structure data (IESD) and, if the data do not match, to preprocess said at least one input video stream (IV1) to thereby generate at least one updated input video stream (UIV1) such that said residual pixel blocks are determined from the difference between said predicted pixel blocks (PPB1) and corresponding blocks of said at least one updated input video stream (UIV1). 14. First arrangement (A1) including an encoder (E1-E8) according to claim 9 and an apparatus (A,B) adapted to extract said reference input data (IREF) from an encoded reference video stream (EVREF, EVREFh) for provision to said encoder (E1-E8). 15. Second arrangement (A2) comprising a first arrangement (A1) according to claim 14 and an encoder (ET) for encoding a reference video stream (VREF) such as to provide the thus obtained encoded reference stream (EVREF) to said first arrangement (A1). 16. Third arrangement (A3) comprising a second arrangement (A2) according to claim 15 and comprising at least one video combining means (VCM1, VCM2) for generating said at least one input video stream (IV1, IV2) from said input reference video stream (VREF) and from input modification data (delta1, delta2) for provision to said second arrangement (A2). 17. 
Fourth arrangement (A4; A4 b) adapted to receive a plurality of input video streams (IV1,IV2) and comprising selection means (S) for selecting an input video stream (IV1) of said plurality as a reference video stream, further comprising an encoder (ET) for encoding said reference video stream to thereby generate an encoded reference video stream (EV1) for provision to a first output of said fourth arrangement (A4, A4 b) and for provision to a first arrangement (A1) according to claim 14 comprised within said fourth arrangement, said first arrangement being further adapted to encode the other input video stream (IV2) of said plurality, and to provide the other encoded video stream (EV2) to other outputs of said fourth arrangement (A4, A4 b). | 2,400 |
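The processing chain in claims 1 and 2 — residual blocks formed as the difference between predicted pixel blocks and the input, then transformed and quantized into processed residual pixel data — can be illustrated with a toy sketch. This is an assumption-laden simplification: the transform step is omitted, quantization is plain uniform floor division, and the function names are invented for the example.

```python
def quantize(values, step):
    # Uniform quantization: floor-divide each residual by the step size.
    return [int(v // step) for v in values]

def process_residuals(input_block, predicted_block, step=4):
    # Residual pixel block (claim 2): input minus prediction, element-wise.
    residual = [i - p for i, p in zip(input_block, predicted_block)]
    # A real encoder would apply a transform (e.g. a DCT) before
    # quantizing; it is omitted here for brevity.
    return quantize(residual, step)
```

When prediction is good the residuals are small, so many quantized values collapse to zero — the property that makes the subsequent entropy encoding step effective.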
7,596 | 7,596 | 15,068,819 | 2,421 | The present disclosure presents an improved system and method for tracking and tagging objects of interest in a broadcast, including expert indications of desirable and undesirable locations on golf course terrain. | 1. A method for tracking and tagging objects of interest in a broadcast, comprising:
providing an indication of terrain of a golf course; and rendering graphics in a broadcast over and relative to the terrain of said golf course, the graphics indicative of a golf expert's indications of desirable and/or undesirable locations for golf play; wherein said desirable and undesirable locations are indicated as good and bad spots to land a ball from a preceding position; wherein said indication by an expert takes into account objective and subjective factors; and wherein said factors include expert opinion on places that might leave a player in a good or bad position in considering a following shot. 2. A method in accordance with claim 1, wherein said graphics provide indication of desirable and undesirable locations for golf play. 3. A method in accordance with claim 2, wherein said desirable and undesirable locations are indicated as good and bad spots to land a ball from a preceding position. 4. A method in accordance with claim 3, wherein said preceding position comprises a tee shot. 5. A method in accordance with claim 2, wherein said indication by an expert takes into account objective and subjective factors. 6. A method in accordance with claim 5, wherein said factors include expert opinion on places that might leave a player in a good or bad position in considering a following shot. 7. A method in accordance with claim 3, wherein the position, size or shape of the indicated good and bad spots vary according to one or more of: the length of the preceding shot; anticipated conditions of the terrain; time of day; lighting; wind conditions; physical capabilities of players; contestant skill sets; and unanticipated hazards. 8. A method in accordance with claim 3, wherein the position, size and shape of the indicated good and bad spots are generated prior to game play. 9. 
A method in accordance with claim 3, wherein a position, size or shape of the indicated good or bad spots are changed prior to or during game play to reflect a change in at least one factor. 10. A method in accordance with claim 9, wherein said at least one factor comprises one or more of: the length of the preceding shot; anticipated conditions of the terrain; time of day; lighting; wind conditions; physical capabilities of players; contestant skill sets; and unanticipated hazards. 11. A method in accordance with claim 2, wherein said indication is provided by a user interface, comprising one or more of: a computer terminal; a tablet; a touchscreen product; and a mobile device. 12. A method in accordance with claim 2, wherein said indication is provided on a 3D rendering of golf course terrain relative to at least one pre-determined camera shot, with the indication overlaid relative to said terrain. 13. A method in accordance with claim 2, wherein said indication is provided as an overlay utilizing red color for undesirable locations and green color for desirable locations. 14. A method in accordance with claim 2, further comprising providing a broadcast extraction window via a computer system, wherein said broadcast extraction window is configured to position in accordance with tracking data received by said computer system. 15. A method in accordance with claim 14, wherein said extraction window is configured to pan, scan or zoom in response to said tracking data. 16. A system for tracking and tagging objects of interest in a broadcast, comprising:
a user interface configured to accept a golf expert's indications of desirable and/or undesirable locations for golf play; and a computer system, including a processor that is configured to render graphics in a broadcast over and relative to a golf course terrain, said graphics providing said indications of desirable and/or undesirable locations for golf play. 17. A system in accordance with claim 16, wherein said graphics provide indication of desirable and undesirable locations for golf play. 18. A system in accordance with claim 17, wherein said desirable and undesirable locations are indicated as good and bad spots to land a ball from a preceding position. 19. A system in accordance with claim 18, wherein said preceding position comprises a tee shot. 20. A system in accordance with claim 17, wherein said indication by an expert takes into account objective and subjective factors. 21. A system in accordance with claim 20, wherein said factors include expert opinion on places that might leave a player in a good or bad position in considering a following shot. 22. A system in accordance with claim 17, wherein the position, size or shape of the indicated good and bad spots vary according to one or more of: the length of the preceding shot;
anticipated conditions of the terrain; time of day; lighting; wind conditions; physical capabilities of players; contestant skill sets; and unanticipated hazards. 23. A system in accordance with claim 17, wherein the position, size and shape of the indicated good and bad spots are generated prior to game play. 24. A system in accordance with claim 17, wherein a position, size or shape of the indicated good or bad spots are changed prior to or during game play to reflect a change in at least one factor. 25. A system in accordance with claim 24, wherein said at least one factor comprises one or more of: the length of the preceding shot; anticipated conditions of the terrain; time of day; lighting; wind conditions; physical capabilities of players; contestant skill sets; and unanticipated hazards. 26. A system in accordance with claim 17, wherein said indication is provided by a user interface, comprising one or more of: a computer terminal; a tablet; a touchscreen product; and a mobile device. 27. A system in accordance with claim 17, wherein said indication is provided on a 3D rendering of golf course terrain relative to at least one pre-determined camera shot, with the indication overlaid relative to said terrain. 28. A system in accordance with claim 17, wherein said indication is provided as an overlay utilizing red color for undesirable locations and green color for desirable locations. 29. A system in accordance with claim 17, further comprising a broadcast extraction window provided by said computer system, wherein said broadcast extraction window is configured to be positioned in accordance with tracking data received by said computer system. 30. A system in accordance with claim 29, wherein said extraction window is configured to pan, scan or zoom in response to said tracking data. 
| The present disclosure presents an improved system and method for tracking and tagging objects of interest in a broadcast, including expert indications of desirable and undesirable locations on golf course terrain. | 2,400 |
7,597 | 7,597 | 15,276,585 | 2,426 | Systems and methods for a passenger vehicle entertainment system configured to access media data files on a passenger's personal electronic device and play the media data files on a video monitor of the entertainment system installed at a passenger seat. The system includes an onboard display system having a computing device, a wireless communication module and a video monitor. The system has a media player software application executable by the computing device and configured to program the display system to establish a wireless network connection to the personal electronic device using the wireless communication module, access media data files stored on the personal electronic device via the wireless network connection, display the media data files on the video monitor and allow a passenger to browse the media data files and select a media data file to play, and play a selected media data file on the video monitor. | 1. An entertainment system for a passenger vehicle having seats configured to present media from a media data file stored on a passenger's personal electronic device on a video monitor of the entertainment system installed at one of the seats, the entertainment system comprising:
an onboard display system installed in the passenger vehicle, the onboard display system including a computing device having a processor, memory and a storage device, a wireless communication module operatively coupled to the computing device, and a video monitor operatively coupled to the computing device; a media player software application stored on the storage device and configured to program the display system to establish a wireless network connection to a passenger's personal electronic device using the wireless communication module, access data file folders having media data files and/or media data files stored on the personal electronic device via the wireless network connection, display the data file folders and media data files on the video monitor and allow a passenger to browse the data file folders and/or media data files and select a media data file, and play the selected media data file using a media player software program stored on the storage device to present media from the selected media data file on the video monitor. 2. The entertainment system of claim 1, wherein the onboard display system includes a wired network and wireless access point coupled via an electrical conductor to the wired network, and the wireless network connection is established with the passenger's personal electronic device using the wireless network access point. 3. 
The entertainment system of claim 1, wherein the onboard display system comprises a plurality of video monitors each installed at a respective seat of the passenger vehicle, and the media player software application is further configured to program the display system to execute a wireless connection function which displays a wireless connection screen on the video monitor and allows the passenger to enter credentials for the display system to establish the wireless network connection to the passenger's personal electronic device, and upon receiving the credentials, the display system uses the credentials to establish the wireless network connection and associate the personal electronic device to a particular video monitor among the plurality of video monitors. 4. The entertainment system of claim 1, wherein the media player software application is further configured to program the display system to associate the first media data file with the media player software program for playing the first media data file. 5. The entertainment system of claim 1, wherein the personal electronic device is selected from the group consisting of a wireless access point, a cellular phone configured to function as a wireless access point, a cellular hotspot device, a tablet computer configured to function as a wireless access point, and a personal computer configured to function as a wireless access point. 6. 
The entertainment system of claim 1, wherein the onboard display system is an in-seat display system configured to be installed at seats of the passenger vehicle, and the entertainment system comprises a plurality of onboard display systems including the onboard display system, each onboard display system installed at a respective seat and including a computing device having a processor, memory, and a storage device, a wireless communication module and a video monitor operatively coupled to the computing device, and wherein the media player software application is stored on the respective storage device of each onboard display system such that each display system is programmed to establish a wireless network connection to the passenger's personal electronic device, access data file folders having media data files and/or media data files stored on the personal electronic device, display the data file folders and media data files on the respective video monitor and allow a passenger to browse the data file folders and/or media data files and select a media data file using the video monitor, and present the media from a respective selected media data file on the respective video monitor. 7. The entertainment system of claim 6, wherein each of the onboard display systems comprises a smart monitor. 8. The entertainment system of claim 1, wherein the onboard display system comprises a central onboard management system containing the computing device and wireless communication module, and the central onboard management system is operatively connected to a plurality of video monitors including the video monitor, each video monitor installed at a respective passenger seat, the central onboard management system configured to present media on each of the video monitors. 9. 
The entertainment system of claim 1, further comprising another video monitor operatively coupled to the onboard display system, and the media player software application is further configured to display the data file folders and media data files on the another video monitor, allow another passenger to browse the data file folders and/or media data files and select another media data file, and present the media from the another media data file on the another video monitor. 10. The entertainment system of claim 9, wherein the second selected media data file is one of the selected media data file or a different media data file. 11. The entertainment system of claim 1, wherein the passenger vehicle is a commercial airplane and the onboard display system is an in-flight entertainment system. 12. An entertainment system for a vehicle having seats for passengers configured to present media from a data file stored on a passenger's personal electronic device, the system comprising:
a plurality of video monitors, each video monitor installed at a respective seat; a central onboard management system comprising a computing device having a processor, memory and a storage device, and a wireless communication module, the central onboard management system operatively connected to each of the video monitors and configured to present media on each of the video monitors; a software application stored on the storage device and configured to program the central onboard management system to establish a wireless network connection to a passenger's personal electronic device, associate the wireless connection session with a video monitor located at a seat for a passenger, access data file folders having data files and/or data files stored on the personal electronic device via the wireless network connection, display the data file folders and data files on the video monitor and allow the passenger to browse the data file folders and/or data files and select a data file, and open the data file using a media software application stored on the storage device to display media from the selected data file on the video monitor. 13. The entertainment system of claim 12, wherein the software application is further configured to program the central onboard management system to: execute a wireless connection function which displays a wireless connection screen on the video monitor and allows the passenger to enter credentials for the central onboard management system to establish the wireless network connection to the passenger's personal electronic device; upon receiving the credentials, to use the credentials to establish the wireless network connection; and wherein associating the wireless connection session with the video monitor located at the seat for the passenger is based on the video monitor at which the credentials were entered. 14. 
The entertainment system of claim 12, wherein the software application is further configured to associate media data files with a media player software program for playing media data files. 15. The entertainment system of claim 14, wherein the software application is further configured to allow a user to create a playlist of media data files stored on the personal electronic device. 16. The entertainment system of claim 12, wherein the personal electronic device is selected from the group consisting of a wireless access point, a cellular phone configured to function as a wireless access point, a cellular hotspot device, a tablet computer configured to function as a wireless access point, and a personal computer configured to function as a wireless access point. 17. The entertainment system of claim 12, wherein the software application is further configured to associate the wireless connection session with another video monitor located at another seat for another passenger, access data file folders having data files and/or data files stored on the personal electronic device, display the data file folders and data files on the another video monitor and allow the another passenger to browse the data file folders and/or data files and select a data file, and display information from the selected data file on the another video monitor. 18. The entertainment system of claim 17, wherein the selected data file by the passenger and the another passenger is one of the same selected data file or a different data file. 19. A method for presenting media from a data file stored on a passenger's personal electronic device on a video display installed at a respective passenger seat on a passenger vehicle, the method comprising:
providing an onboard display system in a passenger vehicle, the onboard display system including a computing device having a processor, memory and a storage device, a wireless communication module operatively coupled to the computing device, a video monitor operatively coupled to the computing device, and a media player software application stored on the storage device; establishing a wireless connection between the onboard display system and a passenger's personal electronic device; accessing data file folders having media data files and/or media data files stored on the personal electronic device via the wireless network connection with the onboard display system; displaying the data file folders and media data files on the video monitor and allowing a passenger to browse the data file folders and/or media data files and select a media data file; receiving by the onboard display system a selection of a media data file stored on the personal electronic device; and playing the selected media data file using a media player software program stored on the storage device and presenting media from the selected media data file on the video monitor. 20. The method of claim 19, further comprising:
executing a wireless connection function which displays a wireless connection screen on the video monitor and allows the passenger to enter credentials for the onboard display system to establish the wireless network connection to the passenger's personal electronic device; receiving by the onboard display system the credentials; and after receiving the credentials, causing the onboard display system to use the credentials to establish the wireless network connection and associate the personal electronic device to the video monitor among a plurality of video monitors. | Systems and methods for a passenger vehicle entertainment system configured to access media data files on a passenger's personal electronic device and play the media data files on a video monitor of the entertainment system installed at a passenger seat. The system includes an onboard display system having a computing device, a wireless communication module and a video monitor. The system has a media player software application executable by the computing device and configured to program the display system to establish a wireless network connection to the personal electronic device using the wireless communication module, access media data files stored on the personal electronic device via the wireless network connection, display the media data files on the video monitor and allow a passenger to browse the media data files and select a media data file to play, and play a selected media data file on the video monitor.1. An entertainment system for a passenger vehicle having seats configured to present media from a media data file stored on a passenger's personal electronic device on a video monitor of the entertainment system installed at one of the seats, the entertainment system comprising:
an onboard display system installed in the passenger vehicle, the onboard display system including a computing device having a processor, memory and a storage device, a wireless communication module operatively coupled to the computing device, and a video monitor operatively coupled to the computing device; a media player software application stored on the storage device and configured to program the display system to establish a wireless network connection to a passenger's personal electronic device using the wireless communication module, access data file folders having media data files and/or media data files stored on the personal electronic device via the wireless network connection, display the data file folders and media data files on the video monitor and allow a passenger to browse the data file folders and/or media data files and select a media data file, and play the selected media data file using a media player software program stored on the storage device to present media from the selected media data file on the video monitor. 2. The entertainment system of claim 1, wherein the onboard display system includes a wired network and wireless access point coupled via an electrical conductor to the wired network, and the wireless network connection is established with the passenger's personal electronic device using the wireless network access point. 3. 
The entertainment system of claim 1, wherein the onboard display system comprises a plurality of video monitors each installed at a respective seat of the passenger vehicle, and the media player software application is further configured to program the display system to execute a wireless connection function which displays a wireless connection screen on the video monitor and allows the passenger to enter credentials for the display system to establish the wireless network connection to the passenger's personal electronic device, and upon receiving the credentials, the display system uses the credentials to establish the wireless network connection and associate the personal electronic device to a particular video monitor among the plurality of video monitors. 4. The entertainment system of claim 1, wherein the media player software application is further configured to program the display system to associate the first media data file with the media player software program for playing the first media data file. 5. The entertainment system of claim 1, wherein the personal electronic device is selected from the group consisting of a wireless access point, a cellular phone configured to function as a wireless access point, a cellular hotspot device, a tablet computer configured to function as a wireless access point, and a personal computer configured to function as a wireless access point. 6. 
The entertainment system of claim 1, wherein the onboard display system is an in-seat display system configured to be installed at seats of the passenger vehicle, and the entertainment system comprises a plurality of onboard display systems including the onboard display system, each onboard display system installed at a respective seat and including a computing device having a processor, memory, and a storage device, a wireless communication module and a video monitor operatively coupled to the computing device, and wherein the media player software application is stored on the respective storage device of each onboard display system such that each display system is programmed to establish a wireless network connection to the passenger's personal electronic device, access data file folders having media data files and/or media data files stored on the personal electronic device, display the data file folders and media data files on the respective video monitor and allow a passenger to browse the data file folders and/or media data files and select a media data file using the video monitor, and present the media from a respective selected media data file on the respective video monitor. 7. The entertainment system of claim 6, wherein each of the onboard display systems comprises a smart monitor. 8. The entertainment system of claim 1, wherein the onboard display system comprises a central onboard management system containing the computing device and wireless communication module, and the central onboard management system is operatively connected to a plurality of video monitors including the video monitor, each video monitor installed at a respective passenger seat, the central onboard management system configured to present media on each of the video monitors. 9. 
The entertainment system of claim 1, further comprising another video monitor operatively coupled to the onboard display system, and the media player software application is further configured to display the data file folders and media data files on the another video monitor, allow another passenger to browse the data file folders and/or media data files and select another media data file, and present the media from the another media data file on the another video monitor. 10. The entertainment system of claim 9, wherein the second selected media data file is one of the selected media data file or a different media data file. 11. The entertainment system of claim 1, wherein the passenger vehicle is a commercial airplane and the onboard display system is an in-flight entertainment system. 12. An entertainment system for a vehicle having seats for passengers configured to present media from a data file stored on a passenger's personal electronic device, the system comprising:
a plurality of video monitors, each video monitor installed at a respective seat; a central onboard management system comprising a computing device having a processor, memory and a storage device, and a wireless communication module, the central onboard managements system operatively connected to each of the video monitors and configured to present media on each of the video monitors; a software application stored on the storage device and configured to program the central onboard management system to establish a wireless network connection to a passenger's personal electronic device, associating the wireless connection session with a video monitor located at a seat for a passenger, access data file folders having data files and/or data files stored on the personal electronic device via the wireless network connection, display the data file folders and data files on the video monitor and allow the passenger to browse the data file folders and/or data files and select a data file, and open the data file using a media software application stored on the storage device to display media from the selected data file on the video monitor. 13. The entertainment system of claim 12, wherein the software application is further configured to program the central onboard management system to: execute a wireless connection function which displays a wireless connection screen on the video monitor and allows the passenger to enter credentials for the central onboard management system to establish the wireless network connection to the passenger's personal electronic device; upon receiving the credentials, to use the credentials to establish the wireless network connection; and wherein associating the wireless connection session with the video monitor located at the seat for the passenger is based on the video monitor at which the credentials were entered. 14. 
The entertainment system of claim 12, wherein the software application is further configured to associate media data files with a media player software program for playing media data files. 15. The entertainment system of claim 14, wherein the software application is further configured to allow a user to create a playlist of media data files stored on the personal electronic device. 16. The entertainment system of claim 12, wherein the personal electronic device is selected from the group consisting of a wireless access point, a cellular phone configured to function as a wireless access point, a cellular hotspot device, a tablet computer configured to function as a wireless access point, and a personal computer configured to function as a wireless access point. 17. The entertainment system of claim 12, wherein the software application is further configured to associate the wireless connection session with another video monitor located at another seat for another passenger, access data file folders having data files and/or data files stored on the personal electronic device, display the data file folders and data files on the another video monitor and allow the another passenger to browse the data file folders and/or data files and select a data file, and display information from the selected data file on the another video monitor. 18. The entertainment system of claim 17, wherein the selected data file by the passenger and the another passenger is one of the same selected data file or a different data file. 19. A method for presenting media from a data file stored on a passenger's personal electronic device on a video display installed at respective passenger seat on a passenger vehicle, the method comprising:
providing an onboard display system in a passenger vehicle, the onboard display system including a computing device having a processor, memory and a storage device, a wireless communication module operatively coupled to the computing device, a video monitor operatively coupled to the computing device, and a media player software application stored on the storage device; establishing a wireless connection between the onboard display system and a passenger's personal electronic device; accessing data file folders having media data files and/or media data files stored on the personal electronic device via the wireless network connection with the onboard display system; displaying the data file folders and media data files on the video monitor and allowing a passenger to browse the data file folders and/or media data files and select a media data file; receiving by the onboard display system a selection of a media data file stored on the personal electronic device; and playing the selected media data file using a media player software program stored on the storage device and presenting media from the selected media data file on the video monitor. 20. The method of claim 19, further comprising:
executing a wireless connection function which displays a wireless connection screen on the video monitor and allows the passenger to enter credentials for the onboard display system to establish the wireless network connection to the passenger's personal electronic device; receiving by the onboard display system the credentials; and after receiving the credentials, causing the onboard display system to use the credentials to establish the wireless network connection and associate the personal electronic device to the video monitor among a plurality of video monitors. | 2,400 |
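The pairing flow in claims 19-20 above (enter credentials at a monitor, establish the wireless connection, then associate the personal electronic device with that monitor so its files can be browsed there) can be sketched roughly as follows. This is a minimal illustration under assumed names; `OnboardDisplaySystem`, `pair_device`, and `browse_files` are hypothetical, not from the patent.

```python
class OnboardDisplaySystem:
    """Illustrative sketch of the claimed monitor/device pairing, not the patented implementation."""

    def __init__(self, monitor_ids):
        self.monitor_ids = set(monitor_ids)
        self.sessions = {}  # monitor_id -> credentials of the paired device

    def pair_device(self, monitor_id, credentials):
        """Associate a device with the monitor at which its credentials were entered (claim 20)."""
        if monitor_id not in self.monitor_ids:
            raise ValueError(f"unknown monitor {monitor_id}")
        # In the claimed system the credentials would be used to open the
        # actual wireless connection; here we only record the association.
        self.sessions[monitor_id] = credentials

    def browse_files(self, monitor_id, device_files):
        """Return the device's file listing for display on its paired monitor (claim 19)."""
        if monitor_id not in self.sessions:
            raise RuntimeError("no paired device for this monitor")
        return sorted(device_files)
```

The key design point the claims hinge on is that the association is keyed by which monitor received the credentials, which is what lets several seats share one device without confusing sessions.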
7,598 | 7,598 | 14,030,189 | 2,448 | There is provided a system and method for providing location aware educational information. The method for providing location aware educational information includes: receiving a user location related to a user; receiving user study data related to the user; determining location aware educational information based on the user location and the user study data; and notifying the user about the location aware educational information. The system for providing location aware educational information has: a location module for receiving a user location related to a user; a study area module for receiving user study data related to the user; a location module for determining location aware educational information based on the user location and the user study data; and a connection module for notifying the user of the location aware educational information. | 1. A method for providing location aware educational information, the method comprising:
receiving a user location related to a user; receiving user study data related to the user; determining location aware educational information based on the user location and the user study data; and notifying the user about the location aware educational information. 2. The method of claim 1, wherein the user location comprises a geographic location of a mobile device of the user. 3. The method of claim 1, wherein the user study data comprises data related to at least one study area that the user is studying. 4. The method of claim 3, wherein the user study data comprises at least one of: user grades, academic goals, user strengths, and user weaknesses. 5. The method of claim 1, wherein the determining location aware educational information comprises:
correlating location aware educational information with the user study data that are within a predetermined location range of the user location; and selecting location aware educational information based on user requirements. 6. The method of claim 5, wherein the user requirements comprise user needs based on user study data relating to user weaknesses. 7. The method of claim 1, wherein the user location is predicted based on user calendar information. 8. The method of claim 1, wherein the user location is a virtual location. 9. A system for providing location aware educational information, the system comprising:
a location module for receiving a user location related to a user; a study area module for receiving user study data related to the user; a location module for determining location aware educational information based on the user location and the user study data; and a connection module for notifying the user of the location aware educational information. 10. The system of claim 9, wherein the user location comprises a geographic location of a mobile device of the user. 11. The system of claim 9, wherein the user study data comprises data related to at least one study area that the user is studying. 12. The system of claim 11, wherein the user study data comprises at least one of: user grades, academic goals, user strengths, and user weaknesses. 13. The system of claim 9, wherein the location module comprises:
a correlation component configured to correlate the user location and the user study data with location aware educational information in a predetermined range of the user location; and a selection component configured to select location aware educational information from among the correlated location aware educational information based on user requirements. 14. The system of claim 13, wherein the user requirements comprise user needs based on user study data relating to user weaknesses. 15. The system of claim 9, wherein the user location is predicted based on user calendar information. 16. The system of claim 9, wherein the user location is a virtual location. 17. A non-transitory computer readable medium containing instructions that, when executed, perform the method of claim 1. | There is provided a system and method for providing location aware educational information. The method for providing location aware educational information includes: receiving a user location related to a user; receiving user study data related to the user; determining location aware educational information based on the user location and the user study data; and notifying the user about the location aware educational information. The system for providing location aware educational information has: a location module for receiving a user location related to a user; a study area module for receiving user study data related to the user; a location module for determining location aware educational information based on the user location and the user study data; and a connection module for notifying the user of the location aware educational information.1. A method for providing location aware educational information, the method comprising:
receiving a user location related to a user; receiving user study data related to the user; determining location aware educational information based on the user location and the user study data; and notifying the user about the location aware educational information. 2. The method of claim 1, wherein the user location comprises a geographic location of a mobile device of the user. 3. The method of claim 1, wherein the user study data comprises data related to at least one study area that the user is studying. 4. The method of claim 3, wherein the user study data comprises at least one of: user grades, academic goals, user strengths, and user weaknesses. 5. The method of claim 1, wherein the determining location aware educational information comprises:
correlating location aware educational information with the user study data that are within a predetermined location range of the user location; and selecting location aware educational information based on user requirements. 6. The method of claim 5, wherein the user requirements comprise user needs based on user study data relating to user weaknesses. 7. The method of claim 1, wherein the user location is predicted based on user calendar information. 8. The method of claim 1, wherein the user location is a virtual location. 9. A system for providing location aware educational information, the system comprising:
a location module for receiving a user location related to a user; a study area module for receiving user study data related to the user; a location module for determining location aware educational information based on the user location and the user study data; and a connection module for notifying the user of the location aware educational information. 10. The system of claim 9, wherein the user location comprises a geographic location of a mobile device of the user. 11. The system of claim 9, wherein the user study data comprises data related to at least one study area that the user is studying. 12. The system of claim 11, wherein the user study data comprises at least one of: user grades, academic goals, user strengths, and user weaknesses. 13. The system of claim 9, wherein the location module comprises:
a correlation component configured to correlate the user location and the user study data with location aware educational information in a predetermined range of the user location; and a selection component configured to select location aware educational information from among the correlated location aware educational information based on user requirements. 14. The system of claim 13, wherein the user requirements comprise user needs based on user study data relating to user weaknesses. 15. The system of claim 9, wherein the user location is predicted based on user calendar information. 16. The system of claim 9, wherein the user location is a virtual location. 17. A non-transitory computer readable medium containing instructions that, when executed, perform the method of claim 1. | 2,400 |
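The correlation-and-selection step recited in claims 5 and 13 above (find educational items within a predetermined range of the user's location, then keep those matching the user's study data) can be sketched as below. All names here (`location_aware_info`, the catalog fields) are illustrative assumptions; the patent does not specify a data model.

```python
from math import hypot

def location_aware_info(user_loc, study_areas, catalog, max_dist):
    """Sketch of the claimed correlation/selection:
    keep catalog items within max_dist of user_loc (correlation component),
    then filter by the user's study areas (selection component)."""
    nearby = [
        item for item in catalog
        if hypot(item["x"] - user_loc[0], item["y"] - user_loc[1]) <= max_dist
    ]
    return [item for item in nearby if item["subject"] in study_areas]
```

A fuller implementation would use geodesic distance for real coordinates and weight the selection by the "user requirements" (e.g. user weaknesses) of claims 6 and 14; the two-stage filter is the structural idea.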
7,599 | 7,599 | 14,725,593 | 2,439 | A method of redirecting search queries from an untrusted search engine to a trusted search engine is a software application that is used to prevent personal information from being collected by untrusted search engines. The software application receives a search query URL for a desired search engine which corresponds to a search query. The search query is compared to a provided plurality of untrusted URL patterns in order to determine if the desired search engine can be trusted. If the search query URL is not found in the plurality of untrusted URL patterns, the search is allowed to proceed. If the search query URL is found in the plurality of untrusted URL patterns, the search query is redirected to a trusted search engine. At least one trusted URL pattern is provided so that the search can be redirected to a trusted search engine. | 1. A method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method comprises the steps of:
(A) providing at least one trusted Uniform Resource Locator (URL) pattern and a plurality of untrusted URL patterns; (B) receiving a search query URL for a desired search engine, wherein the search query URL corresponds to a search query; (C) comparing the search query URL to each of the plurality of untrusted URL patterns in order to find the desired search engine amongst the plurality of untrusted URL patterns; (D) permitting the desired search engine to generate search results for the search query URL, if the desired search engine is not found within the plurality of untrusted URL patterns; and (E) redirecting the search query to a trusted search engine according to the at least one trusted URL pattern, if the desired search engine is found within the plurality of untrusted URL patterns. 2. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
providing an update server;
periodically polling the update server for untrusted pattern updates;
retrieving the untrusted pattern updates from the update server; and
incorporating the untrusted pattern updates into the plurality of untrusted URL patterns. 3. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
providing an update server;
periodically polling the update server for trusted pattern updates;
retrieving the trusted pattern updates from the update server; and
incorporating the trusted pattern updates into the at least one trusted URL pattern. 4. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
wherein the desired search engine is not found within the plurality of untrusted URL patterns;
permitting the search query to pass to the desired search engine;
receiving the search results generated by the desired search engine; and
rendering the search results on a user computing device. 5. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
wherein the desired search engine is found within the plurality of untrusted URL patterns;
extracting the search query from the search query URL;
passing the search query to a trusted search engine;
receiving trusted search results from the trusted search engine; and
rendering the search results on a user computing device. 6. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1, wherein steps (A) through (E) are executed by a user computing device. 7. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1, wherein:
an intermediate server executes steps (A) through (E); and the search results generated during step (E) are sent to a user computing device. 8. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
receiving an encrypted Hypertext Transfer Protocol (HTTP) request associated with the search query URL; and
decrypting the encrypted HTTP request in order to compare the search query URL to each of the plurality of untrusted URL patterns. | A method of redirecting search queries from an untrusted search engine to a trusted search engine is a software application that is used to prevent personal information from being collected by untrusted search engines. The software application receives a search query URL for a desired search engine which corresponds to a search query. The search query is compared to a provided plurality of untrusted URL patterns in order to determine if the desired search engine can be trusted. If the search query URL is not found in the plurality of untrusted URL patterns, the search is allowed to proceed. If the search query URL is found in the plurality of untrusted URL patterns, the search query is redirected to a trusted search engine. At least one trusted URL pattern is provided so that the search can be redirected to a trusted search engine.1. A method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method comprises the steps of:
(A) providing at least one trusted Uniform Resource Locator (URL) pattern and a plurality of untrusted URL patterns; (B) receiving a search query URL for a desired search engine, wherein the search query URL corresponds to a search query; (C) comparing the search query URL to each of the plurality of untrusted URL patterns in order to find the desired search engine amongst the plurality of untrusted URL patterns; (D) permitting the desired search engine to generate search results for the search query URL, if the desired search engine is not found within the plurality of untrusted URL patterns; and (E) redirecting the search query to a trusted search engine according to the at least one trusted URL pattern, if the desired search engine is found within the plurality of untrusted URL patterns. 2. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
providing an update server;
periodically polling the update server for untrusted pattern updates;
retrieving the untrusted pattern updates from the update server; and
incorporating the untrusted pattern updates into the plurality of untrusted URL patterns. 3. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
providing an update server;
periodically polling the update server for trusted pattern updates;
retrieving the trusted pattern updates from the update server; and
incorporating the trusted pattern updates into the at least one trusted URL pattern. 4. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
wherein the desired search engine is not found within the plurality of untrusted URL patterns;
permitting the search query to pass to the desired search engine;
receiving the search results generated by the desired search engine; and
rendering the search results on a user computing device. 5. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
wherein the desired search engine is found within the plurality of untrusted URL patterns;
extracting the search query from the search query URL;
passing the search query to a trusted search engine;
receiving trusted search results from the trusted search engine; and
rendering the search results on a user computing device. 6. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1, wherein steps (A) through (E) are executed by a user computing device. 7. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1, wherein:
an intermediate server executes steps (A) through (E); and the search results generated during step (E) are sent to a user computing device. 8. The method of redirecting search queries from an untrusted search engine to a trusted search engine by executing computer-executable instructions stored on a non-transitory computer-readable medium, the method as claimed in claim 1 comprises the steps of:
receiving an encrypted Hypertext Transfer Protocol (HTTP) request associated with the search query URL; and
decrypting the encrypted HTTP request in order to compare the search query URL to each of the plurality of untrusted URL patterns. | 2,400 |
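Steps (A) through (E) of claim 1 above (match the search query URL against untrusted URL patterns, pass it through on no match, otherwise extract the query and rebuild it against a trusted URL pattern) can be sketched as follows. The pattern strings, domain names, and `route_search` function are hypothetical examples, not values from the patent.

```python
import re
from urllib.parse import parse_qs, quote, urlparse

# Step (A): hypothetical example patterns, not from the patent.
UNTRUSTED_PATTERNS = [r"^https?://(www\.)?tracker-search\.example/"]
TRUSTED_PATTERN = "https://trusted-search.example/?q={query}"

def route_search(search_query_url):
    """Steps (B)-(E): compare the search query URL against each untrusted
    pattern; permit it unchanged on no match, else extract the query and
    redirect it to the trusted engine."""
    # Steps (C)/(D): no untrusted pattern matches, so let the desired engine run.
    if not any(re.match(p, search_query_url) for p in UNTRUSTED_PATTERNS):
        return search_query_url
    # Step (E): pull the query term out of the URL and rebuild it
    # against the trusted URL pattern.
    query = parse_qs(urlparse(search_query_url).query).get("q", [""])[0]
    return TRUSTED_PATTERN.format(query=quote(query))
```

Claim 8's encrypted-HTTP variant would wrap this with TLS interception so the URL can be inspected at all; the pattern comparison itself is unchanged.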